Analysis: while other jurisdictions are trialling law by artificial intelligence, there are many reasons why this might not be a good idea

The law is supposed to be applied impartially and objectively, 'without fear or favour'. Some say, then, what better way to achieve this than to use a computer program? Judges can be replaced by so-called artificial intelligence software, which doesn’t need a lunch break or want a pay rise, and justice can be applied more quickly and efficiently.

This is already a reality in some legal systems, with China in the lead, Pakistan taking an interest and, closer to home, Estonia reportedly planning to use AI to deal with small claims cases (although the initial reporting seems to have been over-enthusiastic). Will we therefore see ‘robot judges’ in Irish courtrooms in the future?

Artificial intelligence really isn’t that smart

There are four principal reasons why this might not be a good idea. The first is that AI software generally takes one of two forms: ‘expert systems’ or ‘machine learning’. Expert systems involve encoding rules into a decision tree in software. They had their heyday in law (and many other domains) in the 1980s, but ultimately proved unable to deliver good results on a large scale.
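To see what that means in practice, here is a toy sketch (the rule, the threshold and the outcomes are invented for illustration, not drawn from any real legal system) of how an expert system encodes a rule as hand-written branching logic:

```python
# A toy "expert system" rule, written as an explicit decision tree.
# The rule and the 2,000 threshold are invented for illustration only.

def assess_small_claim(amount: float, has_written_contract: bool,
                       within_time_limit: bool) -> str:
    """Apply a hypothetical small-claims rule, encoded by hand."""
    if not within_time_limit:
        return "claim rejected: lodged out of time"
    if amount > 2000:
        return "refer to a higher court"
    if has_written_contract:
        return "accept claim for hearing"
    return "request further evidence"

print(assess_small_claim(amount=1500, has_written_contract=True,
                         within_time_limit=True))
# -> accept claim for hearing
```

Every rule, exception and edge case has to be anticipated and written out by hand in this way, which goes a long way towards explaining why such systems struggled once they were applied at scale.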


From RTÉ 2FM's Dave Fanning, UCC Professor of Artificial Intelligence Barry O'Sullivan on tech ethics and the future of AI

Machine learning is essentially sophisticated statistical modelling – often quite powerful but at the end of the day, no more than a very educated guess. One of the strengths of this approach to AI is that it can be creative and insightful in ways that the human mind cannot, finding correlations and patterns in data that we don’t have the capacity to calculate.
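As a rough sketch of what that ‘educated guess’ looks like (the data and features here are entirely made up, and scikit-learn is simply one common library for this kind of model), a machine learning system fits a statistical model to past examples and returns a probability rather than a reasoned conclusion:

```python
# A minimal sketch: the output of a machine learning model is a probability
# estimated from patterns in past data, not a reasoned judgment.
# The data below is entirely invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past case described by two made-up features;
# each label records the outcome we want to predict (1 or 0).
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0], [1, 1], [0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# For a new case, the model returns estimated probabilities for each outcome,
# derived purely from correlations in the training data.
print(model.predict_proba([[1, 0]]))
```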

But one of its weaknesses is that it fails in ways that are very different to how people fail, reaching conclusions that are obviously incorrect. In one notable example, an image-recognition system was tricked into classifying a model turtle as a rifle. Facial recognition often has trouble correctly identifying women, children, and those with darker skin, which could mean (for example) that the computer places someone at a crime scene when they were not there. It would be difficult to have confidence in a legal system that produced outcomes that were clearly incorrect but also very difficult to review, as the reasoning of machine learning systems is neither transparent nor comprehensible to humans.

The problem with historical biases

Machine learning systems rely on historical datasets. In crime and law, these often contain bias and prejudice, and marginalised communities tend to feature disproportionately in records of arrests and convictions. As a result, an AI system might draw the unwarranted conclusion that people from particular backgrounds are more likely to be guilty.
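A toy calculation (with entirely made-up numbers) shows how this can happen: if two groups offend at exactly the same rate but one is policed far more heavily, the arrest records alone will make that group look far more likely to offend.

```python
# Invented numbers: both groups have the same true offence rate,
# but group_a is policed much more heavily than group_b.
population = 10_000
true_offence_rate = 0.05                       # identical for both groups
arrest_probability = {"group_a": 0.9, "group_b": 0.3}

for group, p_arrest in arrest_probability.items():
    offences = true_offence_rate * population  # 500 offences in each group
    recorded_arrests = offences * p_arrest
    print(f"{group}: {recorded_arrests:.0f} recorded arrests")

# group_a: 450 recorded arrests
# group_b: 150 recorded arrests
# A model trained only on these records would "learn" that group_a is three
# times more likely to offend, even though the true rates are identical.
```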


A prominent example of this is the COMPAS system, which is used by US judges to make decisions on granting bail and sentencing. A study claimed that it generated 'false positives' for people of colour and ‘false negatives’ for white people: in other words, it suggested that people of colour would re-offend when they did not in fact do so, and suggested that white people would not re-offend when they did in fact do so. (The developer of the system challenges those claims.)

Who actually writes this software?

It is not clear that legal rules can be reliably converted into software rules, as individuals will interpret the same rule in different ways. In one study, when 52 programmers were given the task of automating the enforcement of speed limits, the programs they wrote issued very different numbers of tickets for the same sample data.
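A rough sketch of that phenomenon (the speed readings, tolerance and rounding choices below are invented, not taken from the study): two implementations of the same instruction, each individually defensible, ticket very different numbers of drivers.

```python
# Two hypothetical programmers automate "ticket drivers who exceed 50 km/h".
# Readings and rules are invented for illustration.
speeds = [49.6, 50.0, 50.4, 51.0, 54.9, 55.0, 62.3]

def tickets_strict(readings):
    # Programmer A: any reading over the limit earns a ticket.
    return [s for s in readings if s > 50]

def tickets_lenient(readings):
    # Programmer B: round to the nearest km/h and allow a 10% tolerance
    # for measurement error before issuing a ticket.
    return [s for s in readings if round(s) > 50 * 1.10]

print(len(tickets_strict(speeds)))   # 5 tickets
print(len(tickets_lenient(speeds)))  # 1 ticket
```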

Individual judges may interpret the law differently, but they do so in public and are subject to being overturned on appeal, which should (at least in theory) reduce the amount of variation over time. If a programmer is too strict or too lenient in their implementation of a rule, however, that may be very difficult to discover and correct.

Automated government systems fail at a scale and speed that are very difficult to recover from, even when those in charge don't try to cover up the problems. The Dutch government used an automated system (SyRI) to detect benefits fraud, which unlawfully used dual nationality as a trigger for audits and destroyed the lives of many people who were falsely accused.


From RTÉ Brainstorm, will a robot take your job?

The Australian 'Online Compliance Intervention' scheme, commonly known as ‘Robodebt’, is another example. It was used to automatically assess debts against recipients of Centrelink social welfare payments and also overstepped its bounds. It negatively affected hundreds of thousands of people and is now the subject of a Royal Commission of inquiry.

Judges do more than judge

Finally, judging is not all that judges do. They have many other roles in the legal system, such as managing a courtroom, a caseload, and a team of staff, and those would be even more difficult to replace with software programs.

Of course, these might not be sufficient reasons to stop policy-makers from taking an interest in these technologies. ‘Robot judges’ might not be a good idea, but solutions to these problems may yet be found, or decision-makers might plough ahead regardless.

Ireland is still in the early stages of an overdue process of digitising the court system, so human judges are likely to remain a feature of Irish courtrooms for quite some time to come. But other jurisdictions may experiment with ‘robot judges’, and we can learn valuable lessons from their experiences. Let us hope that we can do so before excessive automation corrodes public trust in our legal system and in our courts.


The views expressed here are those of the author and do not represent or reflect the views of RTÉ