All Rise for the Honourable Robot Judge? Using Artificial Intelligence to Regulate AI
Written By:
Professor Simon Chesterman
David Marshall Professor and Vice Provost (Educational Innovation), National University of Singapore
Dean of NUS College
Senior Director of AI Governance, AI Singapore
There is a rich literature on the challenges that AI poses to the legal order. But to what extent might such systems also offer part of the solution? China, which has among the least developed rules to regulate conduct by AI systems, is at the forefront of using that same technology in the courtroom. This is a double-edged sword, however, as its use implies a view of law that is instrumental, with parties to proceedings treated as means rather than ends. That, in turn, raises fundamental questions about the nature of law and authority: at base, whether law is reducible to code that can optimize the human condition, or whether it must remain a site of contestation and politics, inextricably linked to institutions that are themselves accountable to a public. For many of the questions raised, the rational answer will be sufficient; but for others, what the answer is may be less important than how and why it was reached, and whom an affected population can hold to account for its consequences.
The judge’s robes are a deep black, though subtle touches of colour complement the national emblem dominating the courtroom wall. Red symbolizes revolution; golden stars rising over the Tiananmen Gate signify the unity of the people under the Party’s leadership. Until the turn of the century, judicial officers wore military uniforms: the Supreme People’s Court sits at the apex of the legal system but below the Communist Party. By appearance, this judge would not even have been in law school back then. Appearances can be deceiving, of course, since her generic face and simple hairstyle were designed by computer scientists. The avatar’s lips move as the synthesized voice asks in Mandarin: ‘Does the defendant have any objection to the nature of the judicial blockchain evidence submitted by the plaintiff?’
‘No objection,’ the human defendant responds.
The video of the pre-trial meeting at Hangzhou’s Internet Court, released in late 2019, is part propaganda, part evangelism. Courts were identified as one of the areas ripe for improvement in China’s New Generation Artificial Intelligence Development Plan. In a section on social governance [社会治理], it called for the creation of ‘smart courts’ [智慧法庭]. This builds on moves to digitize and standardize litigation across the country, with experiments like those in Hangzhou paving the way for further advances. The avatar can handle online trade disputes, copyright cases, and e-commerce product liability claims. Hangzhou was chosen because it is the home of Alibaba, enabling integration with trading platforms like Taobao for the purpose of evidence gathering as well as ‘technical support’.
Online dispute resolution is not new; eBay has long used it to help parties settle tens of millions of disputes annually. What is interesting in the Chinese context is the extent to which this embrace of technology is permeating the court hierarchy not just in mediating small claims but all the way up to the Supreme People’s Court itself.
The Judicial Accountability System [司法责任制] began as a campaign to promote consistency in judgments. Past efforts had relied on reviews by superiors, but this was deemed impractical and was seen as undermining the authority of the judge who heard the case. AI systems now push similar cases to a judge prior to a decision, flagging an ‘abnormal judgment warning’ if a proposed outcome departs significantly from past data. This is part of a suite of technologies whose adoption has been shaped both by the supply of technology companies in China and by the demands of a complex and developing legal system. The Wujiang District of Suzhou has trialled a ‘one-click’ summary judgment process, automatically generating proposed grounds of decision complete with a recommended sentence. Other courts are following suit.
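To make that mechanism concrete, the warning can be pictured as simple statistical outlier detection over past outcomes. The Python sketch below is purely illustrative: the z-score test, the two-standard-deviation threshold, and the sentence data are assumptions made for the example, not a description of the systems actually deployed in Chinese courts.

```python
# Illustrative sketch only: a toy 'abnormal judgment warning' modelled as
# outlier detection against sentences in similar past cases. The feature
# (sentence length in months) and threshold are invented for illustration.

from statistics import mean, stdev

def abnormal_judgment_warning(proposed_sentence_months: float,
                              similar_case_sentences: list[float],
                              z_threshold: float = 2.0) -> bool:
    """Flag a proposed sentence that departs significantly from past data.

    Returns True if the proposal lies more than `z_threshold` standard
    deviations from the mean sentence in comparable past cases.
    """
    if len(similar_case_sentences) < 2:
        return False  # too little history to assess deviation
    mu = mean(similar_case_sentences)
    sigma = stdev(similar_case_sentences)
    if sigma == 0:
        return proposed_sentence_months != mu
    z = abs(proposed_sentence_months - mu) / sigma
    return z > z_threshold

# Example: past sentences cluster around 24 months; a proposed 60-month
# sentence departs sharply from that history and triggers the warning.
history = [22.0, 24.0, 26.0, 23.0, 25.0]
print(abnormal_judgment_warning(60.0, history))  # True -> escalate for review
```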
Singapore’s Chief Justice, Sundaresh Menon, has said that developments in China are making ‘machine-assisted court adjudication a reality’. At the same time, he noted, the use of AI within the justice system gives rise to a ‘unique set of ethical concerns, including those relating to credibility, transparency and accountability’. To this one might add considerations of equity, since the drive towards greater automation is led by deep-pocketed clients and marked by ever-closer ties to technology companies, with uncertain consequences for the future administration of justice.
The impact of AI on the practice of law goes well beyond the scope of this article, which considers the narrower question of whether and how AI systems could themselves support the regulation of AI. Insofar as gaps are revealed by the rise of fast, autonomous, and opaque systems, do new rules and new institutions need to be supplemented by new actors in the form of AI regulators and judges?
Section one briefly sketches out past efforts to automate the law. Though AI judges are the most provocative example, many areas of legal practice and regulation have long been seen as ripe for automation. Despite successes in simple and repetitive tasks, these efforts tended to founder because they were premised on a misconception of law as the mere application of clear rules to agreed facts. In practice, the rules are rarely so clear, and disagreement over the facts accounts for a significant portion of legal disputes.
A more promising approach has been to abandon the goal of thinking ‘like a lawyer’ and to treat legal analysis not as the application of rules to facts but as a problem of data. Section two discusses this bottom-up approach to legal analytics, which reveals distinct limitations that are not so much technical as social and political. Even though AI systems are getting ever better at forecasting regulatory outcomes, embracing this across the legal system would represent a fundamental shift from making decisions to predicting them.
Even if regulation by AI were possible in general, then, it would not be desirable. Can a special case be made, however, for the regulation of AI systems themselves? If the objection to AI regulators and judges is their inability to appreciate the social context within which legal determinations take place, or the legitimacy concerns raised when humans have their fate determined by statistics, one response is that these objections need not apply to the regulation of AI. Section three discusses how systems could be made self-policing. As we have seen in other areas, one of the virtues of AI systems is their relative transparency: simulations can be run with slight variations to test for bias. And, unlike a human, a machine is far more likely to admit its errors.
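One way to picture such a simulation is a counterfactual probe: re-run the same model on near-identical inputs, varying only a single sensitive attribute, and observe whether the output moves. The Python sketch below is hypothetical; the toy risk model, its features, and the ‘gender’ attribute are invented solely to show the idea.

```python
# Illustrative sketch only: 'running simulations with slight variations' as a
# counterfactual bias probe. A real audit would use the deployed system and
# far richer fairness metrics; everything here is a toy example.

from typing import Callable, Dict

def counterfactual_bias_probe(model: Callable[[Dict], float],
                              case: Dict,
                              attribute: str,
                              alternatives: list) -> dict:
    """Re-run the model on copies of a case, varying only one attribute,
    and report how much the output shifts under each variation."""
    baseline = model(case)
    shifts = {}
    for value in alternatives:
        variant = dict(case, **{attribute: value})  # copy with one change
        shifts[value] = model(variant) - baseline
    return shifts

# A deliberately biased toy scoring function, so the probe has something
# to detect. The dependence on 'gender' is the flaw we want exposed.
def toy_risk_model(case: Dict) -> float:
    score = 0.1 * case["prior_offences"]
    if case["gender"] == "male":
        score += 0.3
    return score

case = {"prior_offences": 2, "gender": "male"}
print(counterfactual_bias_probe(toy_risk_model, case, "gender", ["female"]))
# {'female': -0.3} -> the score depends on gender, flagging potential bias
```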
To the extent that they increase the transparency and human control of AI systems, these developments may be useful. But self-regulation by AI ultimately confronts limitations similar to those of self-regulation by industry. Though such self-regulation is helpful in establishing standards and best practices, red lines will need to be drawn, and ultimate oversight conducted, by politically legitimate and accountable actors. And, if it is impermissible to outsource inherently governmental functions to fast, autonomous, and opaque machines, enforcement of that prohibition cannot itself be left to those same machines.
This is the introduction to a forthcoming article. The full draft is available on SSRN.com.