Artificial Intelligence and the court
Artificial Intelligence (AI) is reshaping work across industries, and the legal industry is no exception: in many countries it is already being used to bring changes to the traditional legal profession, from streamlining court processes and analysing legal instruments to, in certain jurisdictions, predicting and even prescribing judicial outcomes. Alongside these benefits, however, the use of AI in legal proceedings has limitations that are often ignored and deserve scrutiny.
First of all, we need to understand that AI is, by its nature, a prediction tool. AI-powered systems are trained on vast amounts of historical data, from which they learn to identify probabilistic patterns. This means they do not actually reason, think or question like human beings, although their output can look convincing and can mimic the human style of reasoning. More troubling still, such systems sometimes generate false information that appears authoritative.
Nor do these systems understand human language. A language model merely forecasts the next token in a given sequence of words. It therefore cannot rectify previous human error; instead, it comes with its own set of biases, inherited from its training data, and tends to reinforce them.
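For readers curious what "forecasting the next token" means in practice, the idea can be illustrated with a deliberately simple sketch. The snippet below is a hypothetical toy, not how any real legal AI product works: it counts which word follows which in a tiny training text and then "predicts" by picking the most frequent follower. Real language models are vastly larger, but the underlying principle is the same pattern-matching, with no legal reasoning involved.

```python
from collections import Counter, defaultdict

# A tiny, made-up "training corpus" of legal-sounding text.
corpus = (
    "the court held that the contract was void . "
    "the court held that the claim was barred ."
).split()

# Count which word follows each word in the training data.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent follower of `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("court"))  # "held" - the likeliest follower in the data
print(predict_next("the"))    # "court" - a pattern, not legal reasoning
```

The point of the sketch is that the prediction reflects only the frequencies in the training text; change the data and the "answer" changes, which is also why biases in historical data are inherited and reinforced.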
Law, legal proceedings and the legal profession are not just about rules; they are also about people, because they deal with the problems people face. Moreover, law is not a mathematical formula to be mechanically inserted into a scientific system. It is a living, evolving organism that changes with societal values, cultural norms and ethical considerations. By contrast, AI systems lack the emotional empathy, ethical judgment, human intuition, authority and experience that reflect societal values and are indispensable to the legal profession.
AI also lacks the ability to read a situation in the moment, something at which human beings excel. Lawyers do not merely apply the law in court; they adapt to the judge and the courtroom environment, tailoring their submissions to the mood, tone and dynamics of the moment.
Besides, every case is different, and there is good reason why each ought to be judged carefully on its own facts. The public needs to see and feel that the law is being applied justly, and handing people's problems to an algorithm may create a barrier between the public and the judicial process.
Furthermore, self-represented litigation with the help of AI chatbots is gaining popularity because it saves the cost of a lawyer. Yet an AI chatbot cannot argue on the litigant's behalf when a judge asks a question or seeks clarification during a hearing, nor can it object to improper cross-examination or unfair treatment in court. Moreover, the courtroom is an adversarial arena in which the other side may spring a sudden manoeuvre; one must think promptly and make judgment calls on the spot, which AI cannot do in self-represented cases.
Additionally, since AI, its programmers and its deployers remain free from criminal responsibility, courts will penalise the human person even when the fault lies with the AI. Recently, the Upper Tribunal (Immigration and Asylum Chamber) of the UK warned lawyers about the use of AI after finding that a fictitious judgment generated by ChatGPT had been cited in court; the lawyer was also referred to the Bar Standards Board of the UK for investigation. Several other lawyers across jurisdictions have likewise faced criticism and sanctions from courts and regulatory bodies for presenting ChatGPT-generated false information to courts in real cases.
The role of the judiciary is not just to process cases efficiently but to weigh moral consequences, ensure fairness and uphold the rule of law in a way no machine can replicate. We must therefore draw a line between relying on AI-generated outputs and exercising our own conscience in the complex, sophisticated matters affecting human lives that play out in the courtroom.
The writer is a doctoral researcher on the use of AI in judicial decision-making at the School of Law, University of Galway, Republic of Ireland.