In a district prosecution office in Shanghai’s Pudong district, the busiest in China, one that handles the caseload you would expect in a metropolis of 25 million, sits a desktop computer running software developed in partnership with the Chinese Academy of Sciences. The program’s name translates roughly to “AI Prosecutor.” Its creators claim it can file criminal charges from a case description with 97% accuracy.
Trained on more than 17,000 cases, the system can identify and file charges for the eight most common crimes in Shanghai, among them credit card fraud, theft, dangerous driving, and the uniquely Chinese offense of “picking quarrels and provoking trouble.” This is the world’s most aggressive use of AI in a criminal justice system. Crucially, it has not replaced prosecutors. It runs beside them.
| Category | Detail |
|---|---|
| China’s AI Prosecutor | Developed at the Shanghai Pudong People’s Procuratorate with the Chinese Academy of Sciences; files charges with reported 97% accuracy; trained on 17,000+ cases between 2015 and 2020; runs on a standard desktop computer |
| Crimes Currently Covered | Eight most common Shanghai crimes: credit card fraud, gambling, dangerous driving, intentional injury, obstructing official duties, theft, fraud, and “picking quarrels and provoking trouble” |
| System 206 Context | Used in Chinese courts since January 2019; evaluates evidence strength, conditions for arrest, and public danger posed by suspects; the AI Prosecutor extends this tool’s capabilities into charging decisions |
| Corporate Legal AI Adoption | Generative AI use in corporate legal departments more than doubled from 23% to 52% between 2024 and 2025; task focus remains contract review, legal research, and document drafting |
| Legal Employment Impact | 2026 analysis shows approximately 6.4% increase in legal sector employment despite AI adoption — suggesting AI is reshaping rather than replacing legal work; new roles emerging around AI oversight and verification |
| EU AI Act (2026) | EU AI Act classifies many legal AI tools as “high-risk,” requiring human oversight, transparency in decision-making, and documented accountability frameworks; applicable from 2026 |
| Documented Problems | Indian courts flagged an “alarming trend” of lawyers using AI to draft petitions citing fictitious judgments — the hallucination problem; ethical concerns include bias reinforcement, lack of transparency, and accountability gaps |
| Further Reference | Legal AI accountability research at the AIAAIC Repository |
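The reported pipeline, a classifier trained on labeled case files that maps a case description to one of eight charges, can be illustrated with a toy sketch. Everything below is hypothetical: the real system’s architecture is unpublished, and these training examples and charge labels are invented for illustration only.

```python
from collections import Counter, defaultdict
import math

# Toy training data: (case description, charge). Entirely hypothetical;
# the real system was reportedly trained on 17,000+ actual case files.
TRAIN = [
    ("used stolen credit card numbers to buy goods online", "credit card fraud"),
    ("opened fraudulent card accounts and withdrew cash", "credit card fraud"),
    ("took a bicycle from a locked garage at night", "theft"),
    ("shoplifted electronics from a store", "theft"),
    ("drove at high speed through a red light while drunk", "dangerous driving"),
    ("caused a crash while street racing", "dangerous driving"),
]

def tokenize(text):
    return text.lower().split()

def train(data):
    """Fit a multinomial Naive Bayes: per-charge word counts and priors."""
    word_counts = defaultdict(Counter)
    charge_counts = Counter()
    vocab = set()
    for text, charge in data:
        tokens = tokenize(text)
        word_counts[charge].update(tokens)
        charge_counts[charge] += 1
        vocab.update(tokens)
    return word_counts, charge_counts, vocab

def predict(model, text):
    """Return the charge with the highest log-probability (Laplace smoothing)."""
    word_counts, charge_counts, vocab = model
    total_cases = sum(charge_counts.values())
    best_charge, best_score = None, float("-inf")
    for charge, n in charge_counts.items():
        score = math.log(n / total_cases)  # log prior
        denom = sum(word_counts[charge].values()) + len(vocab)
        for tok in tokenize(text):
            score += math.log((word_counts[charge][tok] + 1) / denom)
        if score > best_score:
            best_charge, best_score = charge, score
    return best_charge

model = train(TRAIN)
print(predict(model, "suspect drove drunk and ran a red light"))  # → dangerous driving
```

A sketch like this also makes the accountability gap concrete: the classifier returns a charge for any input, with no notion of abstaining or of who answers for the 3% of cases it gets wrong.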
The narrative spreading about AI in legal systems often outruns the documented reality by orders of magnitude. No country has replaced its judges, lawyers, or prosecutors with AI. Because no such replacement has happened, no jurisdiction has published twelve months of crime statistics showing its effects. What is actually occurring is more incremental, and it is genuinely significant.
Between 2024 and 2025, generative AI deployment in corporate legal departments more than doubled, from 23% to 52% of surveyed organizations. AI now dominates routine work such as contract review, legal research, document drafting, and due diligence: the places where pattern recognition holds real advantages over human attention spans and where billable hours pile up.
China’s system is the outlier that makes the rest of the picture look measured. Since January 2019, Chinese courts have used System 206, the AI Prosecutor’s predecessor, to assess evidence strength and the public danger posed by suspects. What is genuinely novel is the extension built at the Shanghai Pudong Procuratorate, which carries that automation across an additional threshold: the charging decision itself.
According to Professor Shi Yong, who leads the big data and knowledge management laboratory at the Chinese Academy of Sciences, the system can “to a certain extent” replace prosecutors in the decision-making process. That is a carefully hedged formulation. Chinese prosecutors themselves have raised the accountability question, and it does not appear to have been adequately answered: who is responsible when the AI files the wrong charge, or when the 3% error rate lands on a real person?

The European Union has taken a different approach. Under the EU AI Act, whose provisions phase in through 2026, many legal AI tools are classified as “high-risk.” That designation requires human oversight, documented transparency about how the system reaches its decisions, and explicit accountability structures.
In practice, this means that any AI system in the EU that makes or supports legal judgments must keep a human in the loop and a paper trail that human can follow. Whether this approach outperforms China’s more aggressive deployment is a question that will take years, and empirical evidence rather than regulatory preference, to answer honestly.
Meanwhile, the failure modes are already visible. Indian courts have flagged an “alarming trend” of lawyers filing petitions that cite fictitious judgments: the hallucination problem that has dogged general-purpose AI since ChatGPT’s mass adoption now surfaces in filings with real consequences for real people. US federal courts have sanctioned attorneys for similar errors.
Legal sector employment grew by roughly 6.4% through 2026, suggesting that AI is reshaping the field rather than eliminating it. But the reshaping is not smooth: it is producing new kinds of professional misconduct alongside real efficiency gains. And while oversight slowly catches up, the people most affected (defendants, plaintiffs, and everyone else on the receiving end of a legal system) have the least visibility into how any of it works. That may be the most significant detail in the whole story.