It is not all upside for automated justice. There are a number of well-rehearsed arguments against automation in general (for example, job losses and loss of skills) and concerns about the careless deployment of generative AI (for example, intellectual property infringement and privacy concerns), which are important but outside the scope of this article. Instead, this article considers the immediate practical barriers and ethical considerations specific to automated justice and AADR.
3.1 Capability Constraints
First, there are concerns about the limitations of the existing technology. Some argue that AI is not yet ready to make decisions in real-world situations because it cannot yet exercise empathy, ethics or morality and often misses the bigger picture, qualities that are important to dispute resolution. It is also not clear that AI can properly determine whether an action is "reasonable", or assess the credibility of factual evidence. These limitations could result in AI improperly tending towards absolutism when predicting how a neutral third party would decide an issue, or missing the nuance and complexity of human interaction. The absence of emotional intelligence will be a problem for many (though not all) disputes.
In addition, novel interpretation or analysis may be required when applying the law to new fact patterns, or where the law does not apply neatly to the facts at hand. These are situations which call for imagination and creativity, and it is not evident that existing AI models are capable of exhibiting these qualities either.
Underpinning some of this is the applicability of morality to the administration of justice, which is a tricky but undeniable feature of dispute resolution. Even if an AI model were able to employ morality properly, should the algorithm be morally neutral and blindly apply the law, or should it exhibit morality and, if so, whose morals? Is it even possible for an AI model to be morally neutral? Some say not. Is the perfect AI judge one that matches a human judge? These are enormous ethical questions which ought to be carefully considered before an AI model is deployed.
In any event, complex disputes will also be technologically difficult for AI models to resolve because of their multi-faceted nature: some disputes, for example, are fact-heavy and require analysis of several areas of law. The more complex a dispute, the less effective an AI model will be at producing an accurate prediction or output, and the more extensive (and costly) the training of the model will need to be.
3.2 Due Process
Next, parties to a dispute must be able to trust that the dispute has been dealt with justly, impartially and fairly, with due process.
It is a well-rehearsed criticism of generative AI that the models are "black boxes": it is very difficult to interrogate how a model has produced its output. The parties will therefore have limited means of discovering, or dispelling doubts about, any bias or error within the system. This will be particularly problematic for an unsuccessful party where the decision is binding, as with expert determination. If parties distrust the decision or the accuracy of the outputs, it may undermine the administration of justice, with potentially serious consequences for society.
Another criticism of generative AI is that it is prone to hallucination, error and bias. Much of this comes down to the quality of the training data and the structure of the model, and can be mitigated by continued testing of the AI model and by reserving AI for appropriate situations. These criticisms carry more weight where the decision of the AI model cannot be interrogated.
There are further negative impacts on the administration of justice that arise from the unexplainability of AI's output. First, without reasoned decisions there will be fewer precedents, which will impede the development of the law in common law jurisdictions such as England and Wales. Second, it will not be possible to appeal decisions of AI in the usual way, as an appeal generally depends on identifying faults in the reasoning. For most forms of ADR, the former issue is not problematic, since reasoned decisions are not made public. Appeal is also, generally, not a prominent feature of ADR, and it may be preserved for circumstances where the output of AADR is manifestly absurd or unreasonable.
Finally, there is the risk of 'automation bias': people are more likely to believe the output of a machine, and more likely to forgive it when it gets something wrong. The consequence is a greater likelihood that AI will be used for purposes to which it is unsuited.