
Replacing Arbitrators with Artificial Intelligence in International Arbitration

Arnav Joshi*


Artificial Intelligence (AI) has developed to the point where it has steadily begun taking over jobs in almost every sector, including many that were long thought to require human intelligence. Dispute resolution is no exception. Scholars and researchers have analysed the scope of AI in international arbitration, and many have even suggested that AI-powered arbitrators will replace human arbitrators in the near future. While this prediction may look time-efficient, powerful and glittering on paper, and AI does bring obvious merits, completely replacing an arbitrator's job with an algorithm has serious limitations.


One of the most frequently cited reasons why AI is expected to replace human arbitrators is that humans often let their stereotypes and prejudices creep, unconsciously, into their decisions. An arbitrator is supposed to be free of bias and of any interest leaning towards either party, but in many cases, and often unknowingly, an arbitrator's decisions are biased to some degree. AI lacks consciousness, emotions and feelings, so, at least theoretically, there is no scope for its decisions to be biased or unfair. Several countries have started exploring the option of employing AI in their arbitral institutions. A primary reason AI is thought to be useful in arbitration is that purpose-built algorithms can collect and analyse data within seconds, which would shorten arbitral proceedings drastically compared with the traditional method.i While using AI for assistance in legal research, document review and speech recognition, inter alia, is acceptable, allowing AI to decide arbitral matters is nowhere near as effective as it sounds. A major setback is that AI cannot explain how it arrived at a particular outcome, and every case presents unique circumstances that cannot be resolved by relying only on previous arbitral awards and data. Societal norms and the human morals of compassion and empathy, which are the very origins of our governing laws, must also be applied.


Technological Black-Box:


US trial courts have started using an AI tool named COMPAS to help judges predict the likelihood that an offender will reoffend. In the Loomis case,ii the trial court used COMPAS to obtain a risk score, and on the basis of that score Loomis was sentenced to six years in prison. The use of COMPAS to assist judges has been widely criticised. The main argument advanced for the tool is that it provides a bias-free assessment of an offender's risk of reoffending in cases where judges cannot form a clear view.iii When Loomis appealed, the court held that the algorithm used to generate the risk score could not be disclosed because it is a trade secret. The consequence of using this technology is that neither party can be given the reasoning behind the software's output, which plainly violates the parties' due process rights.
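To make the black-box problem concrete, the following is a minimal Python sketch using entirely hypothetical data and an off-the-shelf model, not COMPAS's actual (undisclosed) algorithm. The point it illustrates is that a trained model returns a single score, while its internal "reasoning" is a mass of learned parameters that no court could disclose as a reasoned basis for a sentence or an award.

```python
# Minimal sketch of the "black box" problem, assuming hypothetical
# features and labels; the real COMPAS algorithm is proprietary.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each row is an offender's features
# (e.g. age, number of prior offences); each label records whether
# that offender later reoffended.
X_train = [[19, 3], [45, 0], [23, 5], [52, 1], [31, 2], [60, 0]]
y_train = [1, 0, 1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The output for a new defendant is a single number...
risk_score = model.predict_proba([[27, 2]])[0][1]
print(f"Predicted risk of reoffending: {risk_score:.2f}")

# ...but the "reasoning" behind it is a forest of decision trees
# over learned parameters: nothing that can be handed to the
# parties as the reasons for a decision.
```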


Most international arbitration agreements require the arbitrator to produce a reasoned award. Although the AI tool used in US trial courts is programmed for criminal matters, the argument for incorporating AI into international arbitration to render awards is indistinguishable: to reach a fair decision free of any human bias. Hence, much as in the Loomis case, if AI starts rendering arbitral awards, the aggrieved party may be unable to challenge the award owing to this technological black box.iv


A reasoned award assures the parties of the nature and quality of the arbitrator's deliberation and aids either party that pursues judicial review of the award.v That is why reasoned awards are a requirement in many jurisdictions and are seldom omitted from arbitration agreements. Since the algorithm may not be disclosed for obvious business reasons, this may prove a serious weakness in the system. A human arbitrator who can give a reasoned award inspires confidence in both parties. Justice that denies a party any meaningful right of appeal seems incomplete and unfair, regardless of how precise a robo-arbitrator's award may be.


Need for Human Attributes:


AI software has no emotions and no sense of justice; to a robo-arbitrator, everything is data. AI would have to rely on previous arbitral awards in international arbitration, most of which are not published today because of confidentiality. Justice in every form is a human virtue.vi Every international arbitration dispute has unique facts and circumstances, so judging a dispute only on the basis of a previous award seems inappropriate. Legal texts must be applied to matters keeping human moral values in mind. These values of compassion, empathy and equity are societal norms that have not been explicitly written down anywhere as text. It would therefore pose a huge challenge for an algorithm to apply these moral values while rendering arbitral awards, as they are essentially human virtues and cannot be read and processed like other data.


AI lacks any kind of moral system, so it would struggle to strike the balance, demanded by the complex circumstances surrounding every dispute, between interpretation and application of the law.vii An algorithm, at least in the near future, will always be deprived of human virtue. Even if it arrives at a technically correct decision through precedents and a textual interpretation of the law, that decision may not always be fair to the parties involved.


Another drawback of employing AI to preside over international arbitrations is that even this software is not 100% bias-free. Algorithms are, after all, designed by humans, so human bias can creep into their programming and training. It is no secret that AI programs are criticised for being prejudicial: a published report claimed that COMPAS, the risk-assessment tool discussed earlier, is biased and quite possibly unreliable.viii At least in its early stages, machine learning trained on a corpus of past cases will carry a high risk of inherited bias, so human review of any AI-generated arbitral award is essential.
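How bias is "inherited" can be shown with a minimal sketch on synthetic data, assuming a generic scikit-learn classifier rather than any real arbitral or sentencing tool. If past human decisions systematically disfavoured one group, a model trained on those decisions learns to reproduce the disparity even when the merits are identical.

```python
# Minimal sketch (synthetic data) of how historical bias propagates
# into a trained model; not a real arbitral or risk-assessment system.
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [claim_strength, party_group].
# Suppose past decision-makers ruled against group 1 even when
# claim strength was identical to group 0's.
X_train = [
    [0.8, 0], [0.8, 1], [0.6, 0], [0.6, 1],
    [0.4, 0], [0.4, 1], [0.2, 0], [0.2, 1],
]
# Biased historical outcomes: at equal strength, group 1 loses.
y_train = [1, 0, 1, 0, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two identical claims that differ only by group membership now
# receive different predictions: the model has learned the bias.
for group in (0, 1):
    p = model.predict_proba([[0.6, group]])[0][1]
    print(f"group={group}: predicted chance of winning = {p:.2f}")
```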


Conclusion:


These are not the only hindrances AI faces in replacing human arbitrators. Another catch is that developing AI robo-arbitrators would require arbitral proceedings to be published unredacted, which contradicts the basic principle of confidentiality in an arbitration agreement; confidentiality is one of the major reasons parties opt to resolve disputes through arbitration in the first place. But the drawbacks analysed in this article are sufficient to conclude that AI cannot completely replace the job of a human arbitrator. Unreasoned awards rendered by robots will not only be opposed by the participating parties but will, in all likelihood, be set aside in almost every jurisdiction around the world. Nothing can replace human virtue, the most fundamental requirement for justice, and not every dispute can be handled through data analysis and a review of precedents. AI can certainly make arbitration more efficient by assisting with legal research, speech recognition, and the appointment of arbitrators based on qualifications, experience and parties' satisfaction in previous cases. But is the system really ready to hand the arbitrator's job to algorithms, saving time and perhaps gaining uniformity of awards, at the cost of the human values that are the genesis of law and justice?

 

*(3rd-year student at Jindal Global Law School, Sonipat)

i The Path for Online Arbitration: A Perspective on Guangzhou Arbitration Commission's Practice, Kluwer Arbitration Blog (2019), http://arbitrationblog.kluwerarbitration.com/2019/03/04/the-path-for-online-arbitration-a-perspective-on-guangzhou-arbitration-commissions-practice/ (last visited Jul 22, 2020).

ii Loomis v. State, 881 N.W.2d 749 (Wis. 2016).

iii J.J. Prescott, State v. Loomis, Harvard Law Review (2017), https://harvardlawreview.org/2017/03/state-v-loomis/ (last visited Jul 22, 2020).

iv Julia Dressel & Hany Farid, The Accuracy, Fairness, and Limits of Predicting Recidivism, Science Advances (2018), https://advances.sciencemag.org/content/4/1/eaao5580/tab-pdf (last visited Jul 22, 2020).

v Reasoned Awards in International Commercial Arbitration, Kluwer Arbitration Blog (2016), http://arbitrationblog.kluwerarbitration.com/2016/02/19/reasoned-awards-in-international-commercial-arbitration/ (last visited Jul 22, 2020).

vi Aristotle, Nicomachean Ethics (350 B.C.E.), translated by W. D. Ross.

vii Artificial Intelligence in International Arbitration: From Legal Prediction to Awards Issued by Robots, Garrigues.com (2019), https://www.garrigues.com/en_GB/new/artificial-intelligence-international-arbitration-legal-prediction-awards-issued-robots (last visited Jul 22, 2020).

viii Machine Bias, ProPublica (2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (last visited Jul 22, 2020).



