Is AI justice on the way? AADR: Automated alternative dispute resolution

Overview

An abridged version of this article was originally published in Artificial Lawyer.

The development of artificial intelligence (AI) has been underway for decades. The advent of large language models, underpinned by neural networks and deep learning, has rapidly and vastly expanded the potential use cases for AI and may even have triggered the next industrial revolution. Automation is likely to affect many, if not most, professions, and the administration of justice will not be immune. A key question for society is how best to implement this automation.

The administration of justice is both a serious and consequential function of the State as it can occasion the deployment of coercive power over individuals and impact upon their fundamental rights. Consequently, it is important for society to have trust in that administration, absent which society will not submit to it. For these reasons, the automation of justice will be a very complex undertaking.

As an initial foray into this largely unmapped sphere, this article explores, at a high level, how existing AI models could automate alternative dispute resolution (ADR) for commercial disputes and considers some practical and ethical barriers to doing so. This article concludes that in AI's present state, the automation of justice should be limited in its scope and application.

ADR – CAN IT BE AUTOMATED?

ADR is a voluntary and flexible form of dispute resolution, giving considerable freedom to the parties to a dispute to set the rules. The regulation of ADR in commercial disputes, and of who can act as the neutral third party, is limited in the UK. That is not to say that it is totally unregulated. For example, existing regulations such as the GDPR will apply to neutral third parties and practitioners engaged in ADR. Further, the fast-changing regulatory landscape – for example, the EU's AI Act – is likely to affect the use of AI in all spheres in the not-too-distant future. But it is generally for the parties to agree contractually who they wish to facilitate their ADR process, and how it should be facilitated.

It has been argued that AI will soon be capable of predicting the decisions of a neutral third party (e.g., a judge or a mediator).[1] This opens the door to automated justice and automated alternative dispute resolution ("AADR"). Before turning to the benefits and drawbacks of AADR, this article will first outline some forms of ADR and how each may be automated.

1. Negotiation

This is not strictly a form of ADR but is mentioned here as a potential bridge to AADR. In short, the parties to a dispute can negotiate a resolution before any neutral third-party assistance is required. AI software can interrogate arguments and evidence for gaps and weaknesses and could predict the outcome of a dispute – namely, what a judge would be likely to decide. These use cases may motivate parties to settle their differences at an earlier stage, or at least narrow the issues in dispute. In light of the wholly voluntary and private nature of this use case, many of the practical and ethical issues explored in this article are less relevant, provided the AI is deployed with care.

2. Mediation

Mediation is arguably the most flexible form of ADR. Generally, a neutral third party (the mediator), appointed by the parties, shuttles between the opposing sides in an attempt to achieve a voluntary resolution or to narrow the issues in dispute. This role could be undertaken by AI, which could test the arguments of each side for omissions or logical flaws and attempt to bring the positions of the parties closer together. The AI could also provide a settlement proposal based on what it predicts would be the most acceptable to both parties. Although mediation may be used for a broad range of disputes of varying complexity, the role of a mediator is facilitative rather than determinative, so extensive training of the model on subject matter data would not necessarily be required. Any decision of a mediator is non-binding on the parties, which again tempers some of the challenges to implementation.
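
To make the facilitative role concrete, the sketch below shows, in highly simplified form, one step an automated mediator might perform: checking whether the parties' privately disclosed positions overlap and, if so, proposing a midpoint. This is a minimal illustration under assumed, hypothetical figures and function names; a real system would weigh far richer inputs (arguments, evidence, predicted outcomes).

```python
# Illustrative sketch only: a toy settlement-proposal step of the kind an
# automated mediator might perform. The structure, names and figures are
# hypothetical assumptions, not a description of any real AADR system.

def propose_settlement(claimant_minimum: float, respondent_maximum: float) -> float | None:
    """Suggest a figure if the parties' confidential positions overlap.

    claimant_minimum: the least the claimant would accept.
    respondent_maximum: the most the respondent would pay.
    Returns a midpoint proposal, or None if no overlap exists yet.
    """
    if claimant_minimum > respondent_maximum:
        return None  # no zone of possible agreement; keep narrowing the issues
    return (claimant_minimum + respondent_maximum) / 2

# Positions disclosed privately to the "mediator" during shuttle diplomacy.
proposal = propose_settlement(claimant_minimum=80_000, respondent_maximum=100_000)
if proposal is not None:
    print(f"Proposed settlement: £{proposal:,.0f}")  # £90,000 on these figures
```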

3. Expert Determination

Expert determination is a form of dispute resolution whereby parties to a contract agree to refer a dispute to an expert, usually in respect of a particular issue – for example, a disputed valuation, or a technical issue where the facts are largely agreed. The AI model would be trained on data in respect of the particular area of expertise and would then predict the expert's decision. The decision of an expert is usually binding on the parties with limited recourse to an appeal, so automation of this form of ADR will give rise to additional practical and ethical hurdles.
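
As a simplified illustration of how such a model might be built for a valuation dispute, the sketch below fits a toy regression to hypothetical historical determinations and predicts the expert's figure for a new dispute. The feature names, figures and choice of library are assumptions for illustration only, not a description of any deployed system.

```python
# Illustrative sketch only: a toy model predicting an expert's valuation
# from structured features of past determinations. All feature names and
# figures are hypothetical; a real system would need far richer data.
from sklearn.linear_model import LinearRegression

# Hypothetical training data: [annual revenue (£m), EBITDA (£m), sector multiple]
X_train = [
    [12.0, 2.1, 6.5],
    [30.0, 4.8, 7.0],
    [8.5, 1.2, 5.5],
    [18.0, 3.0, 6.0],
]
y_train = [14.0, 33.5, 6.8, 18.2]  # the expert's valuation in each past case (£m)

model = LinearRegression().fit(X_train, y_train)

# Predict the determination for a new dispute with assumed features.
new_dispute = [[20.0, 3.2, 6.8]]
predicted = model.predict(new_dispute)[0]
print(f"Predicted expert valuation: £{predicted:.1f}m")
```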

4. Arbitration

Finally – perhaps at the other end of the scale and at times more akin to court litigation – is arbitration. This can be a very flexible form of ADR, although many parties decide to incorporate the rules and procedure of an arbitral institution. We could soon see an arbitral institution which permits AI arbitrators, and the Arbitration Act 1996, perhaps unsurprisingly, does not appear to prohibit (or even anticipate) this. As with expert determination, the decision of an arbitrator or arbitral tribunal is usually binding with limited recourse to appeal. Further, an arbitrator will often have to determine a panoply of diverse and complex issues, including jurisdictional questions, costs awards and arguments about due process – and that is before getting to the potentially expansive breadth and complexity of the subject matter of the dispute. Consequently, arbitral proceedings are technologically the most difficult form of ADR to automate.

ADR – THE BENEFITS OF AUTOMATION

There are several potential benefits which may arise from AADR, aside from the benefits that arise from increased uptake of ADR in general (for example, relieving the burden on the court system and saving parties time and costs by avoiding court litigation). Many of these apply equally to automated justice in general.

First, AADR is likely to be significantly quicker and potentially cheaper than conventional ADR – especially when indirect costs such as management time and legal costs are included. Not only would this result in a financial saving for the parties, but access to justice would be enhanced, particularly for smaller businesses which may not otherwise have the time or funds to pursue litigation or ADR.

Second, an AI model could produce less biased and fairer outcomes. Unconscious bias, the quality of advocacy, and even the mood of the neutral third party (for example, whether a decision is made before or after lunch) may improperly influence decision-making. Provided the AI has been trained on high quality data and efforts are made to limit structural bias, it may be that a dispassionate AI can resolve disputes in a fairer way. The parties may also be more trusting of a dispassionate decision (although note the risk of automation bias discussed below).
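
One crude way such "efforts to limit structural bias" might begin is sketched below: comparing a model's historical success rates across classes of party. The groups, outcomes and figures are wholly hypothetical, and real bias auditing is considerably more involved than this single check.

```python
# Illustrative sketch only: a basic audit comparing a model's success rates
# across two hypothetical classes of party, as one crude check for
# structural bias in historical outputs.
outcomes = [
    ("small_business", "won"), ("small_business", "lost"),
    ("large_business", "won"), ("large_business", "won"),
    ("small_business", "won"), ("large_business", "lost"),
]

def win_rate(group: str) -> float:
    """Proportion of recorded disputes the given class of party won."""
    results = [outcome for g, outcome in outcomes if g == group]
    return results.count("won") / len(results)

for group in ("small_business", "large_business"):
    print(f"{group}: {win_rate(group):.0%} success rate")
```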

Finally, the impersonal nature of AADR – without the need for face-to-face advocacy or conflict – may reduce tensions. Removing the emotional element from a dispute can propel parties towards a settlement. These benefits may preserve ongoing business relationships and bring additional financial benefits.

WHAT'S THE CATCH?

It is not all upside for automated justice. There are a number of well-rehearsed arguments against automation in general (for example, job losses and loss of skills) and concerns about the careless deployment of generative AI (for example, intellectual property infringement and privacy concerns), which are important but outside the scope of this article. Instead, this article considers the immediate practical barriers and ethical considerations specific to automated justice and AADR.

1. Capability Restraints

First, there are concerns about the limitations of the existing technology. Some argue that AI is not yet ready to make decisions in real-world situations because it cannot yet exercise empathy, ethics or morality and often misses the big picture.[2] These qualities are important to dispute resolution. It is also not clear that AI can properly determine whether an action is "reasonable" or discern the credibility of factual evidence. These limitations could result in AI improperly tending towards absolutism when predicting how a neutral third party would decide an issue, or missing the nuances and complexity of human interaction. The absence of emotional intelligence will be a problem for many (though not all) disputes.

In addition, novel interpretation or analysis may be required when applying the law to new fact patterns.[3] It may also be required where the law does not neatly apply to a fact pattern. These are situations which call for imagination and creativity, and it is not evident that existing AI models are capable of exhibiting these qualities either.

Underpinning some of this is the applicability of morality to the administration of justice, which is a tricky but undeniable feature of dispute resolution. Even if an AI model could properly employ morality, should it be morally neutral and blindly apply the law, or should it exhibit morality – and, if so, whose morals? Is it even possible for an AI model to be morally neutral? Some say not.[4] Is the perfect AI judge one that matches a human judge? These are enormous ethical questions which ought to be carefully considered before an AI model is deployed.

In any event, complex disputes will also be technologically difficult for AI models to resolve, due to their multi-faceted nature. For example, some disputes will be fact heavy and require analysis of various areas of law. The more complex a dispute, the less effective an AI model will be at producing an accurate prediction or output. Further, more extensive training of the model would be required, increasing costs.

2. Due Process

Next, parties to a dispute must trust that the dispute has been dealt with in a just, impartial and fair way with due process.

It is a well-rehearsed criticism of generative AI that the models are "black boxes": it is very difficult to interrogate how a model has produced its output. The parties will have limited ways of dispelling doubts about, or discovering, any bias or error within the system. This will be particularly problematic for an unsuccessful party where the decision is a binding one, as with expert determination. If parties distrust the decision or the accuracy of outputs, it may undermine the administration of justice, with potentially serious consequences for society.

Another criticism of generative AI is that it is prone to hallucination, error and bias. Much of this comes down to the quality of the training data and the structure of the model, and can be alleviated by continued testing of the model and by confining AI to appropriate situations. These concerns are amplified where the decision of the AI model cannot be interrogated.

There are further negative impacts on the administration of justice that arise from the unexplainability of AI's output. First, without reasoned decisions there will be fewer precedents, which will impact the development of the law in common law jurisdictions such as England and Wales. Second, it will not be possible to appeal decisions of AI in the usual way, as an appeal generally depends on identifying faults in the reasoning. For most forms of ADR, the former issue is not problematic as reasoned decisions are not public. Appeal is also, generally, not a prominent feature of ADR – and may be preserved in circumstances where the output of AADR is manifestly absurd or unreasonable.

Finally, there is the risk of 'automation bias': people are more likely to believe the output of a machine, and more likely to forgive it when it gets something wrong. Consequently, AI is more likely to be used for an unsuitable purpose.[5]

SHOULD WE AUTOMATE ADR OR NOT?

ADR is a prime candidate when it comes to automating justice because it is (normally) voluntary, requiring the consent of the parties before engaging with it. In the few circumstances where engaging with ADR is not voluntary, it is almost certainly up to the parties whether or not to use an AI model as the neutral third party. So, the ethical concerns which accompany the coercive use of automated decision-making over an individual's fundamental rights are not generally applicable. However, other concerns remain.

First, AI's current inability to exhibit various human qualities – including emotional intelligence, empathy and creativity – will be a significant bar to the automation of justice generally. Further, there is no consensus on the moral principles to be applied by an AI model for AADR, or on whether moral principles should be, or need to be, instilled in the model at all.

That said, there are disputes which do not require much (if any) emotional intelligence or moral decision-making. For example, simple and low value commercial disputes which require a plain application of the law and might normally be dealt with under the streamlined Part 8 procedure of the Civil Procedure Rules (i.e. those without a substantial dispute as to the facts). Further, some use cases may in practice be more facilitative and not require those human qualities. For example, an automated mediator may merely be testing the logic of an argument with a party rather than providing binding decisions as outputs. It is in these spaces that automation, with appropriate safeguards, may be able to improve outcomes, opening up access to justice for low value disputes which parties are otherwise forced to abandon due to costs. It will be important to carefully consider whether a dispute is suitable for AADR based on its value and complexity.

Second, the inability to interrogate the decision-making process of generative AI models is a further significant hurdle. Due process and trust in the fairness of decision-making are the foundation of dispute resolution and the administration of justice. While it is possible to build AI models which are capable of being interrogated, that often comes at the cost of the accuracy of the output. Notwithstanding the black box, there may be a threshold at which parties are content to trust an AADR algorithm with high accuracy without the ability to interrogate the output. This threshold might be low value (or otherwise less consequential) disputes. On the other hand, some parties may consider this approach wholly inappropriate where their rights are affected in a binding way (for example, in expert determination or arbitration).

If it comes to a choice between abandoning a dispute because the costs of traditional dispute resolution or ADR outweigh the potential benefits, or proceeding with an accurate AADR system, many businesses and individuals may opt for the latter, arguably increasing access to justice. Further, if the AADR tool is only used to help reach a voluntary settlement as part of negotiations or a mediation, then the inability to interrogate the reasoning of a decision will be less consequential.

Third, bias and hallucinations are long-standing issues with generative AI which affect both the ability of AADR to perform its function and trust in the proper administration of justice by AADR. That said, as noted above, a dispassionate AI, if used in appropriate situations, may be able to deliver fairer decisions. Further, the risk of error and bias will be reduced where the AI model is trained on high quality data, is properly monitored and updated, and is narrow in scope. Simpler commercial disputes should be less susceptible to bias or discrimination where the subject matter is more anodyne and does not concern an individual's fundamental rights. It is also worth asking how free of error and bias human decision-makers themselves are.

Finally, automation bias is something to be taken seriously. A key concern arising from this issue for the administration of justice is that a party may be more likely to wrongly relinquish a good argument or case if an AI (wrongly) disagrees with it. A further concern is that parties may use AADR for unsuitable cases, for example those which are complex and multi-faceted, or those which require considerable emotional intelligence. Misuse of AADR in these ways would result in a bad outcome for the party and may ultimately undermine trust in the administration of justice.

CONCLUSION

Much of the success of AADR will depend on parties understanding how it works, understanding its limitations, and consequently using it for appropriate disputes. An appropriate starting point for automating justice is implementing AADR for low value, simple, business-to-business commercial disputes which require a plain application of the law. The forms of ADR which are most likely to be automated in the near future are:

  1. Mediation, because it focusses less on determining the substance of the dispute and more on narrowing it by testing the logic of the parties' arguments, and because of its non-binding nature; and

  2. Expert determination in respect of narrow areas of expertise for which in-depth high-quality training can be provided.

AI is not yet ready to deal with complex disputes requiring creative or ethical analysis, and so should be limited to cases where those skills are largely not required. Beyond questions of capability, its opaque outputs – in a world where AI famously still exhibits bias and hallucinations – may exacerbate rather than resolve disputes, particularly where considerable sums are at stake. However, trust in AADR can be built if it can be shown that its output is accurate (i.e. matches the decisions of human neutral third parties the vast majority of the time). That trust may be sufficient for parties to deploy AADR for their low value and simple disputes, thereby increasing access to justice by resolving disputes which otherwise might not be pursued due to costs or other constraints.
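
One minimal way to evidence that accuracy is sketched below: benchmarking an AADR tool's outputs against human neutral third parties on the same disputes and reporting the agreement rate. The decisions and figures are wholly hypothetical, and a real validation exercise would need a far larger and more representative sample.

```python
# Minimal sketch, on wholly hypothetical data: measure how often an AADR
# tool's decisions agree with human neutral third parties on the same
# disputes, as a crude proxy for accuracy.
human_decisions = ["claimant", "respondent", "claimant", "settle", "claimant"]
aadr_decisions = ["claimant", "respondent", "respondent", "settle", "claimant"]

matches = sum(h == a for h, a in zip(human_decisions, aadr_decisions))
agreement_rate = matches / len(human_decisions)
print(f"Agreement with human decision-makers: {agreement_rate:.0%}")  # 80% here
```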

In the meantime, as the technology surrounding AI advances, the temptation to deploy automation to the administration of justice will grow. There are important ethical questions to be considered before that can be done safely.

Footnotes

[1] Pablo Cortés, 'Artificial Intelligence in dispute resolution' (2024) C.T.L.R. 30(5), 119-127, 123.
[2] Joe McKendrick and Andy Thurai, 'AI isn't ready to make unsupervised decisions', Harvard Business Review, 15 September 2022.
[3] Ryan Abbott and Brinson Elliott, 'Putting the Artificial Intelligence in Alternative Dispute Resolution: How AI Rules will become ADR Rules' (2023) Amicus Curiae, Series 2, Vol 4, No 3, 685-706, 692.
[4] Mark Coeckelbergh, The Political Philosophy of AI (Polity Press, 2022), p 59.
[5] Susie Alegre, Human Rights, Robot Wrongs (Atlantic Books, 2024), p 102.
