Bumpy Road Ahead for All in Adoption of AI in the Legal Industry
2023 feels more like Y2K with the exponential growth in artificial intelligence. My, how far we've come from SmarterChild. It seems like just yesterday we could barely imagine the world of "WALL-E"; now, we can't imagine a world without DALL-E.
But with the recent tsunami of tech layoffs, people are naturally apprehensive about the dark side of the newest and strongest AI wave. While we shouldn't blame the bots for most of our job losses just yet (especially since robots might have feelings too), it's reasonable to speculate that exponential advancement in technology may render many human roles obsolete.
Even "safe" artistic roles are in jeopardy, though fear of copyright infringement has halted the release of certain bots such as those that make music. In the legal industry, there are plenty of jobs that are borderline-creative. Since attorneys aren't exactly Beyoncé, should we be worried?
Precedent for AI in the Legal Sector
Using bots in the legal sector is nothing new. This was true even before the pandemic necessitated the notoriously technophobic industry's rapid adoption of remote and digital alternatives, such as virtual hearings, e-filing of court documents, and electronic signatures for contracts.
Even before COVID, companies were offering D.I.Y. tools to replace run-of-the-mill tasks traditionally handled by lawyers—and for a fraction of the cost. Examples abound of attorney-created forms and automated services in fields of law where basic or initial steps can be taken without a lawyer. Considering that simple, repetitive processes built on "boilerplate" language are the bread and butter of many smaller firms and solo practitioners, the threat of competition from law-bots is a real concern.
Bots Benefit Clients
But while the rise of AI may be causing existential angst for legal professionals, it seems to be a boon for clients. Because legal services are so expensive and there is no right to counsel in civil disputes, those who can't afford an attorney disproportionately bear the consequences: losing their homes, children, jobs, and money. Public defenders, legal aid services, and nonprofit organizations lack the capacity to meet all of the legal needs of low-income Americans.
According to last year's report on the national justice gap by the federally funded nonprofit Legal Services Corporation, nearly 75% of low-income households experienced at least one civil legal problem in the previous year (a third of such issues attributable to COVID alone), yet 92% of them received insufficient or no legal help. In response, access-to-justice proponents have gained some ground in employing technology to help, such as legal portals that direct users to legal aid and help them navigate court systems.
DoNotPay Does Not Play
Enter DoNotPay, with its A-plus trademark name. The company has for some time offered various law-adjacent services through chatbots, not unlike the ones you've encountered (perhaps unknowingly) while seeking customer service help. Now, DoNotPay is making headlines for its bold claim of building the world's first robot lawyer.
Before our attorney readers get too scared, it's important to note that pretty much all of the services DoNotPay has offered so far involve very little "real lawyering." Most are glorified plug-and-chug form generators: they take a user's facts and personal information and produce standardized pleadings, letters, or forms for contesting things like traffic tickets. But even that makes a big difference. The company has successfully helped many thousands of people fight small claims and traffic cases, earning recognition for expanding access to justice.
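For the technically curious, the "plug-and-chug" model is simple enough to sketch in a few lines. Below is a minimal, hypothetical illustration of the concept in Python; the template text and field names are invented for this example and bear no relation to DoNotPay's actual product.

```python
from string import Template

# Hypothetical boilerplate resembling a standardized traffic-ticket appeal.
# Real services maintain lawyer-reviewed templates per jurisdiction and form type.
APPEAL_TEMPLATE = Template(
    "To the Clerk of $court:\n\n"
    "I, $name, respectfully contest citation number $citation_no, "
    "issued on $date. $grounds\n\n"
    "Sincerely,\n$name"
)

def generate_appeal(facts: dict) -> str:
    """Plug the user's facts into the boilerplate and return a finished letter."""
    return APPEAL_TEMPLATE.substitute(facts)

print(generate_appeal({
    "court": "the Springfield Traffic Court",
    "name": "Jane Doe",
    "citation_no": "TR-12345",
    "date": "January 3, 2023",
    "grounds": "The posted speed limit sign was obscured by foliage.",
}))
```

The "lawyering," such as it is, lives entirely in the pre-written template; the software just fills in the blanks.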
But DoNotPay wanted to take its AI to another level: the bench. Earlier this month, plans were in place to have the bot secretly coach one of its clients at a live traffic court hearing in front of a judge. Not content with such small potatoes, DoNotPay CEO Joshua Browder offered $1 million to any attorney brave enough to use it in front of the justices of the U.S. Supreme Court. It's one thing to use an AI deepfake of your voice to negotiate down your internet bill (which is technically legal). It's another to violate court rules and deceive a judge by arguing a case with the surreptitious coaching of a robot lawyer.
Unsurprisingly, all of DoNotPay's big talk to the media alerted prosecutors, who threatened to sue. The company eventually walked back its grandiose plans as "not worth it." Probably for the best, since the repercussions of this questionably legal strategy don't just implicate the company. Not only could the consulting attorneys get disbarred for violating ethics rules, but the clients themselves could be independently charged with crimes such as the "unauthorized practice of law."
DoNotPay should have seen this pushback coming. While courts don't generally have absolute bans on smartphones, there are rules governing when, how, and by whom they can be used. Many courts impose a blanket ban on cellphone use by observers or anyone not affiliated with the court, law enforcement, or counsel. For situations not covered by official court policy, you'll often find unwritten rules stemming from "the judge's discretion" (read: what they ate for breakfast that morning).
The concern is largely twofold: judges don't want any part of their proceedings being recorded, and they don't want the noise disruption phones inevitably cause. Parties and observers alike have been held in contempt for so much as texting during a session. Some judges are notorious for having little patience and taking disproportionate measures. Lawyers for the parties can use the internet at their "attorney tables" for case-related research and to access files, but it's generally unheard of to use your own devices while actively litigating. At most, the court may allow you to display a PowerPoint or video, submitted ahead of time on court-approved equipment, and even these must comply with the complex rules of evidence. In no court could a lawyer use a smartphone while making arguments, approaching the bench, or examining witnesses, nor could a witness use a device while on the stand.
Since courtroom policies are set at the micro level, adopting "RoboCounsel" will be slow and piecemeal. Additionally, bar associations will have to make room for advanced AI through a new set of rules regarding practice, ethics, confidentiality, and accountability.
Regulating RoboCounsel
There's a reason sci-fi tends to cast robots in law enforcement roles rather than legal practice, and it's not just because lawyers would make for a rather unsexy action film.
The rules of conduct, ethics, and accountability governing other sectors are, in theory, more straightforward and less variable between jurisdictions than those lawyers must deal with. AI ethics in the legal field would have to be tailored to the complex rules governing its human counterparts — a far cry from "I, Robot's" short and sweet depiction of Asimov's Three Laws.
Like doctors, lawyers are supposed to "do no harm" and have a duty to exercise the care, skill, and diligence used by other attorneys in similar circumstances. But because these principles are vague and subjective, navigating ethics is a grey area even for human attorneys. Would robot lawyers be held to the same standard of practice as humans, or to one set by other robots? How would regulators account for different companies with different programming capabilities?
Accidentally Widening the Justice Gap
As we have seen, technology can narrow the gap in access to justice, but without proper regulation, AI attorneys could just as easily widen it. Given AI's potential to make parts of litigation and research more efficient, it seems unfair that one party should get its benefit when the other side can't afford the same. Would the government risk breaking Gideon's promise by failing to ensure equal access to AI?
Ensuring Accountability
Though such cases aren't easy to win, there are avenues for action against a human lawyer who royally messes up a case. For example, a client could sue their attorney for legal malpractice or claim ineffective assistance of counsel. But who would a client sue if an AI messes up? The firm it was "working" for? The developers? The attorneys the creators [hopefully] consulted? These questions aren't unlike those raised in other fields, such as autonomous vehicles.
On the other hand, as with self-driving cars, it seems that robots can be programmed to avoid a lot of the mistakes that result in common legal malpractice. For example, robots could reduce or eliminate human oversights like missing filing deadlines, serving court papers incorrectly, missing the statute of limitations, or even egregious violations like abusing clients' trust accounts or commingling client funds.
None of these issues are insurmountable, but they will require consensus at the state and national levels. For this reason alone, we should not anticipate the legalization of AI in the courtroom anytime soon.
But let's not "fight the hypo"— that never gets you any points on your law school exam. Let's imagine a future where all of this is allowed and regulated. Then the relevant question is: Is technology up for the job?
Is Counsel3PO the Future?
Even if we are a long way from legally using lawbots to their full potential, what could they realistically do for us?
Though the legal sector is unique in its heightened regulation, many of lawyers' day-to-day tasks resemble those in industries where robots are seen as a bigger threat to human jobs. TV shows like "Suits" be damned, we actually spend very little time in court and a lot more time reading, analyzing, writing, and combing through case law that's drier than our January resolutions. High school and college students aren't the only ones who could be celebrating freedom from tedious essay writing by having chatbots do much of the legwork.
People have already conducted casual, one-off experiments gauging the ability of chatbots to independently execute a range of legal documents, from a privacy policy to a Supreme Court brief. To be fair, they weren't exactly "passes"; legal experts in the respective fields pointed out various shortcomings in the bot-generated drafts. But just as no student would (hopefully) be dumb enough to hand in a Spanish essay straight out of Google Translate, no attorney in their right mind would hand the court clerk an unedited piece of writing straight out of a text generator. Even in the current practice of human-drafted legal writing, briefs and contracts pass through countless rounds of edits and revisions. Considering that many lawyers loathe the first steps of writing, which involve hours of legal research and synthesis, it's certainly tempting to leverage AI to aggregate case law, analyze key takeaways, and compose initial drafts.
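What would "leveraging AI to compose initial drafts" actually look like at the keyboard? Here is a minimal sketch using OpenAI's Python client as it existed around ChatGPT's launch; the prompt, the model choice, and the wrapper function are illustrative assumptions, not anyone's production drafting tool.

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # assumes you have an OpenAI account

def draft_argument(facts: str, issue: str) -> str:
    """Ask a chat model for a rough first draft of one argument section.
    The output is raw material for attorney editing, never something to file as-is."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a legal writing assistant producing rough first drafts."},
            {"role": "user",
             "content": f"Facts: {facts}\nLegal issue: {issue}\n"
                        "Draft one paragraph of argument. Do not cite any case "
                        "you are not certain exists."},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

Note the very human steps that remain: verifying every citation, checking the reasoning, and rewriting. That editor's role is exactly the point.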
But let's not conflate efficiency with ability. Where bots will fall short in legal work, as in any industry, is innovation. We're not going to pretend a lot of what lawyers do isn't glorified copy-paste-paraphrase. If that's harsh, we can at least agree that a lot of the arguments made aren't novel (nor should they be—that's kind of the point of having a common law system). The upshot is that, in many instances, AI could be useful in applying established law to a new set of facts.
What AI can't do is change the law by arguing innovative applications. Whether finding a new fundamental right in the "penumbra" of the Constitution or simply arguing for the admission of testimony through a new reading of the Rules of Evidence, the work that lawyers do is, at times, creative. It requires a stretch of the imagination. While DALL-E may be able to quickly render a "painting of a flux capacitor in the style of Van Gogh," it can't be Van Gogh or Doc Brown. It can't innovate its own painting style or be the first to imagine a time-traveling sports car. It can merely do what it's told.
But the efficiency and proper execution of assigned tasks are nothing to sneeze at. While ChatGPT's output still requires an editor, AI can significantly streamline the system. Smart software is already handling the "grunt work" of tasks like document review that law firms hand off to first-years or outsource to agencies. This may be putting some folks out of a job, but perhaps it's making room for those with a law degree to actually use what they learned in school. And as in other sectors, it could create new, different, and more jobs in the legal industry.
Other Possible Courtbots
Could AI replace other judicial roles? What about tasks that have traditionally been left to judges? Perhaps, depending on the level of court.
Those lucky enough never to have gone to court may be surprised to learn that many of the decisions of trial judges and magistrate judges are rather clear-cut. Much of a lower court judge's function involves keeping order and making sure proper procedure is followed regarding evidence and testimony. Various pretrial motions are usually uncomplicated and liberally granted, including motions for a continuance (to allow more preparation and discovery before trial) and motions to amend (to modify a complaint or other filing).
Other motions, such as motions for summary judgment, are more complicated in that they involve a bit of legal analysis. These range widely in complexity, but it seems plausible that an AI judge could make the call in easier cases and flag those involving more nuanced reasoning for a human judge.
Appellate judges are a different matter. The fine-robed folk sitting on state or federal courts of appeals or supreme courts typically employ a good deal more legal reasoning and application of case law. They often get cases that are closer calls (in theory, a lawyer wouldn't appeal unless they thought they had a chance) or even issues of "first impression" (meaning the specific legal question hasn't been asked and answered before, so case law doesn't speak to it directly).
Jury-rigging AI Applications Further
What about juries? After all, jurors, even when properly selected and representative of a diverse demographic, come with their own shortcomings. First, they are almost never trained in the law and may have difficulty following legal instructions from the judge. They may also have a hard time following the esoteric testimony of expert witnesses like engineers and doctors. Nor can jurors erase their ingrained, implicit biases. And despite instructions from the judge, they will never be able to "unhear" testimony stricken from the record after a sustained objection.
And jurors are human and flawed in even more banal ways. Funny as it may sound, the problem of jurors nodding off is a serious one. One survey of American judges found that 69% had recently witnessed jurors falling asleep in their courtrooms, accounting for more than 2,300 individual cases. And who can blame them? Trials are droning and dry, and coffee isn't allowed in the courtroom.
Robot jurors would not fall asleep (as long as they're plugged in). Unlike humans, they could actually follow instructions to disregard testimony later deemed inadmissible. They could be programmed not to consider certain factors, assumptions, or stereotypes in their decision-making (though they would come with their own set of biases). Generally, these all seem like values that juries should aspire to.
But the sacred nature of the jury rests on the democratic ideal of being tried by one's peers, which makes replacing jurors with robots perhaps harder to grapple with than replacing attorneys or judges. Even in a future featuring Klaras (and hopefully not M3GANs), would humans want to put their lives and liberty in the hands of literally cold and clinical droids rather than warm-blooded souls who might show a defendant mercy?
This raises another question regarding jury nullification, an important and uniquely human tool of our judicial system that could be jeopardized in a world of AI juries. In nullification, a jury acquits a defendant despite sufficient evidence because it believes a conviction would be unjust; it is technically illogical in that it deliberately disregards the judge's instructions. Robots follow instructions (often to a fault). They can't be moved by some je ne sais quoi in a defendant's or witness's testimony and choose mercy even when the facts suffice to prove guilt beyond a reasonable doubt.
Unclogging the Backlog With Ruthless Efficiency
The U.S. wouldn't be the first to introduce some kind of AI into other parts of the courtroom. Most court systems worldwide seem to be suffering a backlog of cases at any given time, a problem only exacerbated by the pandemic.
To reduce its courts' accumulating caseload, the government of Malaysia chose to employ robots in the sentencing of criminal defendants, and China's court system also uses AI to assist judicial decision-making. And many U.S. courts have already been deferring to algorithms for some pretty significant judgment calls, no pun intended. Courts and correction departments have used software for years to run data on criminal defendants and produce a "risk" calculation. That risk determination is used, seemingly at face value, to make pretrial calls on allowing bail and setting bond amounts, as well as sentencing and parole decisions.
The main argument against such use of AI seems to be that the calculations and conclusions ("this person is a flight risk" or "this defendant deserves 20 years") are not easy to audit. This will get harder as bots get "smarter" on their own and go beyond the initial programming of their creators. Judges, by contrast, can explain what sentencing guidelines they relied on or what factors they used in a bond calculation.
And while people often agree that AI is neither good nor evil, many conclude from this that it is neutral (which breaks Kranzberg's first law of technology) — just as people mistakenly assume that judges are neutral. Assuming that a robot's lack of humanity makes it free from bias can have dangerous consequences when courts put that assumption into practice. Can the system ensure that an AI made the right call, or at least used the right considerations? You can look at the code, and you can ask the programmers what parameters and values they used, but it's hard to pick apart a specific decision after the fact or ask the bot to explain its reasoning.
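To make the audit problem concrete, consider a deliberately toy, hypothetical risk score; real tools like these are proprietary and far more complex, but the point survives simplification. The weights below are fully inspectable, yet they say nothing about whether any individual score was fair.

```python
# A toy, hypothetical pretrial "risk" score -- illustrative only, not any real tool.
WEIGHTS = {
    "prior_arrests": 0.30,     # capped count, scaled to 0..1
    "failed_to_appear": 0.40,  # 1 if the defendant ever missed a court date
    "unemployed": 0.15,        # 1 if currently unemployed
    "age_under_25": 0.15,      # 1 if under 25
}

def risk_score(d: dict) -> float:
    """Weighted sum of simple features, yielding a number between 0 and 1."""
    return round(
        WEIGHTS["prior_arrests"] * min(d["prior_arrests"], 5) / 5
        + WEIGHTS["failed_to_appear"] * d["failed_to_appear"]
        + WEIGHTS["unemployed"] * d["unemployed"]
        + WEIGHTS["age_under_25"] * d["age_under_25"],
        2,
    )

# A judge handed only the final number can't see which factor drove it,
# and a learned model wouldn't even expose weights this legible.
print(risk_score({"prior_arrests": 2, "failed_to_appear": 1,
                  "unemployed": 1, "age_under_25": 0}))  # prints 0.67
```

Even in this transparent toy, "why 0.67 and not 0.57?" has no answer beyond "those are the weights someone chose," and a modern learned model offers far less than this.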
You might ask: How is a black box AI any different from a traditional jury? After all, jury deliberations operate in their own black box, in secret and unadulterated by outside influence. When the foreman gives the verdict, no explanation or detail accompanies it. Even after a trial, jurors are never required to explain their reasoning. Perhaps it is their very human nature that justifies this blind trust. If so, it seems that AI will never measure up.
Conclusion?
Attorneys' favorite canned answer, "it depends," falls far short of capturing the sentiment here. Remember, we are speculating about a relatively new technology within an industry that is both notoriously slow to embrace change and highly regulated.
No one can predict with confidence whether or when we might see bots on the bench. Some types of attorneys (those doing doc review) seem more at risk than others (those doing complex litigation). We can, hopefully, leverage technology to increase the efficiency of backlogged courtrooms by expediting administrative tasks and commonplace motions, and to ameliorate the disparities we still see in access to justice.
Ultimately, future changes will likely depend less on technology's ability to effectively replace human judgment and more on society's ability to swallow the idea of letting robots play "judge, jury, and . . . esquire."
Related Resources:
- The HAL 9000 Lawyer (FindLaw's "Don't Judge Me" Podcast)
- Artificial Intelligence: Are We Safe? (FindLaw's Practice of Law)
- Art Forgeries Revealed by Artificial Intelligence (FindLaw's Practice of Law)