Will AI ever replace your friendly local attorney?
“Ladies and gentlemen of the jury, today we are gathered here to discuss one of the most heinous crimes committed in the history of our country – the assassination of President Abraham Lincoln. The prosecution contends that the accused, John Wilkes Booth, is responsible for this horrific act, and we will prove beyond a reasonable doubt that he is indeed guilty.
“In short, we will paint a clear and compelling picture of a man who committed an act of unspeakable violence against our nation’s leader, and who deserves to be held accountable for his actions. We ask that you carefully consider all of the evidence presented in this trial, and that you return a verdict of guilty for the murder of President Abraham Lincoln.”
Do you remember, in middle school, reading this opening statement from the murder trial of the man who killed our nation’s 16th President? Well, you shouldn’t, because it never happened. Instead of being written by a mid-19th-century litigator, this opening statement was written entirely by ChatGPT, an artificial intelligence program that is accessible to anyone for free.
Artificial intelligence (AI) has its roots in the mid-twentieth century, when pioneers in computer science and mathematics began exploring the concept of creating machines that could perform tasks that would typically require human intelligence. In 1950, British mathematician and computer scientist Alan Turing published a seminal paper titled “Computing Machinery and Intelligence,” in which he proposed the Turing test as a measure of a machine’s ability to exhibit intelligent behavior equivalent to that of a human.
In the following years, computer scientists made significant strides in developing the algorithms and hardware necessary for AI research. One landmark development was the creation of the perceptron, a type of artificial neural network developed by Frank Rosenblatt in 1957. The perceptron could learn to recognize simple patterns in data and was seen as a major breakthrough in the field of machine learning.
The 1960s and 1970s saw an explosion of interest in AI research, with funding pouring in from governments and private industry. Researchers developed a range of techniques for building intelligent systems, including expert systems, rule-based systems, and natural language processing. However, progress in the field was often slow, and initial excitement about the potential of AI began to wane in the 1980s. Nevertheless, advances in computing power and the development of new algorithms continued to drive progress in AI research, leading to the development of modern machine learning techniques such as deep learning, which have revolutionized the field in recent years.
Recently, artificial intelligence has boomed in popularity, from Siri, who may or may not be listening to all of our conversations from our pockets, to AI art generators that can take a typed prompt and turn it into a classical painting. It seems that every day, more and more of the mundane tasks of everyday life are being automated, and the differences between machine creations and those of the human mind are blurring.
The advancement of artificial intelligence raises more questions than it currently answers, and perhaps the most common question is whether the way in which we interact with each other will become obsolete in the near future. IBM, an early adopter, and even earlier creator, of artificial intelligence systems, has already started imagining a world in which machine learning and artificial intelligence take the lead in the field of medicine. For example, researchers in one study used AI to help detect Alzheimer’s risk with over 90% accuracy.
But what about the use of AI in the courtroom? Could we one day have to listen to a computer tell us the law? Will juries be replaced by heartless automatons? Will a robot one day address its motherboard as “your Honor?”
The answer, as is far too often the case when you get lawyers involved, is “it depends.” The discussions surrounding the use of artificial intelligence in the courtroom can be broken down into three key questions: (1) Can we? (2) Should we? (3) Do we really want to?
* * *
The first, and perhaps easiest, question to answer is: “can we use artificial intelligence to replace litigation attorneys in the courtroom?” The short answer is, for now, no.
Despite flashy headlines about AI machines serving as defense attorneys in speeding-ticket trials, thus far no AI lawyer has appeared on behalf of a client in a United States courtroom. In January of 2023, Joshua Browder, the CEO of tech startup DoNotPay, planned to use Bluetooth devices to allow speeding-ticket defendants to have arguments, written by AI, broadcast into their ears during their trials. This scheme would have allowed the defendants to act as their own defense attorneys and argue the facts and the law as the AI presented them via the Bluetooth earphones. The plan was shelved, however, after prosecutors and state bars throughout the country began to threaten Browder with criminal penalties related to the unauthorized practice of law. The unauthorized practice of law (“UPL”) refers to the act of practicing law without a license or the credentials required by a jurisdiction’s legal system.
Thus, while the question of whether using an AI machine to feed you information to regurgitate in court truly constitutes UPL has not been fully litigated in any American court, for now it appears that courts are not willing to let a machine take the reins in any courtroom.
* * *
Second, we address the question of whether, assuming we are allowed to replace your friendly local attorney with a robot, we should do so.
The basic premise of machine learning and artificial intelligence is that a human must program the computer to intake data, synthesize it, and spit out a result based on that data. While machine learning has taken great leaps and bounds in the last decade – and can now search for and identify data from across the internet of its own accord – it still relies on the programming of human beings. As Matthew Stepka, the Managing Partner of Machina Ventures, an investment firm focused on early-stage AI, explained in an article for Business Law Today:
“The potential benefits of AI in the law are real. It can increase attorney productivity and avoid costly mistakes. In some cases, it can also grease the wheels of justice to increase the speed of research and decision-making. However, AI is not yet ready to replace human judgment in the legal profession. The risk of embedded bias in data that fuels AI … must be overcome before using the technology in some legal contexts.”
In this article, Stepka identifies a glaring area of concern for the use of AI in litigation: the risk of embedded bias. This is an issue that every field in which AI is used must cope with. According to the World Economic Forum, AI bias is a fundamental problem that humanity will have to deal with as we continue to utilize machine learning in our everyday lives. The WEF defines AI bias as when the machine “gives consistently different outputs for one group of people compared to another.” An example of AI bias in healthcare was discussed by researchers at the Harvard School of Public Health, who found that an AI trained to detect heart disease was effective in finding it in Caucasian patients, but not in African American patients, because of the data sources on which it was trained.
It is not hard to imagine such issues cropping up within the legal field, which has its own history of biases to overcome. In the legal field, public trust is essential, and it is axiomatic that biases encoded into the court system would produce outcomes totally contrary to the goals of a fair and impartial system.
A second issue that must be addressed when discussing whether we should allow AI to replace attorneys is accuracy. While machine learning has come far in the past decade, with these systems able to collect more and more data, there is still a fairly large gulf between what AI believes to be true and what is actually true.
For example, if you ask ChatGPT, “Does South Carolina have a Dram Shop Statute?” you will receive the following answer:
“Yes, South Carolina has a dram shop statute. The statute is known as the “South Carolina Alcoholic Beverage Control Act” and it can be found in Title 61 of the South Carolina Code of Laws. Specifically, Section 61-6-2220 of the Act creates a cause of action against licensed establishments that serve alcohol to an individual who is visibly intoxicated, and that person subsequently causes injury or death to themselves or another person as a result of their intoxication. The statute allows for the injured party or their representative to bring a civil action against the establishment that served the alcohol, seeking damages for the harm caused by the intoxicated individual.”
The reality is that South Carolina does not, in fact, have a “Dram Shop Statute.” Many states do have statutes that create private rights of action against bars and restaurants for injuries caused by patrons they overserved, but South Carolina does not. Instead, South Carolina has a long and complicated body of case law that allows these same suits, built in part on the statute that ChatGPT cited. You may say, “What is the difference, then? AI got it mostly right.” But the difference between ChatGPT’s response and the response of a licensed South Carolina attorney is the difference between upholding and violating an attorney’s Duty of Candor to the Court. There is a level of reasoning, an ability to see the gray between the black and white, that a human can provide but that a programmed computer may not. While AI’s answer to the Dram Shop question may reach the same result as the correct answer, its reasoning is incorrect, and in the legal world, the reasoning is everything. AI is coded to give you a straightforward answer, but what if the question, as is the case in most legal disputes, does not have one?
As to whether we should use AI in the courtroom – whether the courts should allow it – the current ways in which machines learn seem, by their very encoding, insufficient for the legal field. It is possible that, as this technology advances, the gap between what we have and what we need will narrow, but for now, the use of AI carries too high a risk of unintentional bias and outright falsehood to serve as a replacement for your local attorney in the courtroom.
* * *
Finally, if the answer to the first two questions is yes, we are faced with the question: “do we want to replace litigation attorneys with artificial intelligence?”
To answer this question, we have to ask ourselves what the advantages of a human lawyer are. As discussed above, questions of law are often multifaceted and can change based on an infinite number of variables, from the composition of the jury to the facial expressions and microexpressions of other humans, tone of voice, weather, historical context, and an endless number of other stimuli that can, and often do, change repeatedly throughout the course of a conversation, argument, or statement to the jury.
Much of the work that litigators do is not based solely on the hard facts of a case or the statutes and case law that govern their arguments. Much of its foundation is laid in the deeply interpersonal human connections and reasoning that only a human can create.
When faced with a fire, a parent might run into a home fully engulfed in flames just for the chance to save their child; AI would run the numbers, see that there is only a small chance of success and survival, and fail to act. But we do not look at that scenario and say, “the parent acted unreasonably.” To the contrary, many humans would say that the AI’s decision not to act was unreasonable, cold, and inhuman.
Think of the most famous moments in trial history that you can remember. Why did “If it doesn’t fit, you must acquit” work so well? Logically, and from the viewpoint of AI, the glove might not fit because of temperature differences, aging of the leather, the wearer’s weight, or any host of variables. Logically, according to AI, that is not a sound argument. But the jury thought it was, and it worked. It worked because humans are irrational beings: we don’t just look at the ones and zeros of a problem and make a binary decision, like a computer does; we take into account a plethora of other factors, some rational and some irrational (from the viewpoint of a machine).
When asked “what is love?” you may think of a person, a memory, a smell, or some intangible feeling in your stomach. It may take you a sentence, a paragraph, or a year to explain it. You may not be able to explain it. Every human has a different answer, and all of them are correct. Bob Marley defined it as:
“You open your heart knowing that there’s a chance it may be broken one day and in opening your heart, you experience a love and joy that you never dreamed possible. You find that being vulnerable is the only way to allow your heart to feel true pleasure that’s so real it scares you. You find strength in knowing you have a true friend and possibly a soul mate who will remain loyal to the end. Life seems completely different, exciting and worthwhile. Your only hope and security is in knowing that they are a part of your life.”
When asked, “what is love?” AI’s response is this:
“Love is a complex and multifaceted concept that has been studied and debated by scholars, poets, and philosophers for centuries. At its core, love involves a deep and profound affection for someone or something. This affection can take many different forms, including romantic love, platonic love, familial love, and even love for ideas and hobbies.”
Which answer is more convincing? Which answer better encapsulates the human element of the question? AI can give you a clinical response, one that may (or may not) be factually correct, but it cannot connect in the way that a human can.
When asking whether we want to use AI to solve our legal disputes, we are really asking whether we want to remove the uniquely human reasonableness from our system of order: the reasonableness that causes us to understand that what is legal may not be what is right, and that allows us to use our common sense when evaluating the truthfulness of someone’s testimony. AI can tell you whether you will survive running into the burning building, but your humanity will give you the reason why you should run in anyway.
* * *
Artificial intelligence and machine learning have revolutionized the way in which we interact with the world. If you had told the great navigators of the 1500s that we could ask a little machine in our pockets where to find Japanese-Mexican fusion food within a mile of where we stand, they would have thought you had gone insane. AI has allowed us to take leaps and bounds in progress toward combating several diseases, and it has even allowed attorneys to streamline their research. But embedded within this new and exciting technology are risks of bias, inaccuracy, and inhuman calculation. It is in these areas that attorneys ply their trade.
For a working justice system, there must be work toward eliminating these biases and inaccuracies, and there must be room for humanity. Just ask ChatGPT, which stated:
“While AI has made significant advancements in recent years, it is unlikely that AI will completely take over for lawyers in the courtroom. While AI technology can assist lawyers in performing research and analysis, it cannot fully replace the judgment, empathy, and advocacy skills that are required in the courtroom.
In a courtroom, lawyers must interact with judges, juries, and witnesses, and use their experience and training to argue their case persuasively. This requires a deep understanding of legal precedent, courtroom procedures, and the ability to read the emotions and reactions of those in the courtroom.
While AI can certainly aid in legal research and analysis, it is not currently capable of performing these more nuanced and interpersonal tasks. Additionally, there are ethical and regulatory concerns around the use of AI in legal settings, particularly when it comes to issues of bias and fairness.
Therefore, while AI will certainly play an increasingly important role in the legal profession, it is unlikely that it will completely replace human lawyers in the courtroom anytime soon.”
Andrew T. Smith is an Associate Attorney with Collins & Lacy, P.C., practicing within the firm’s Retail & Hospitality Group. He is very much a human, and not an AI computer.