Study Suggests Chatbots Could Lessen Legal Chores
Do you litigation attorneys out there have complaints about drafting complaints? Are you M&A lawyers getting carpal tunnel from typing up so many contracts? Well, the future of the legal field may be bringing some good news your way: a recent study points to real benefits of embracing generative artificial intelligence to reduce the scutwork that is the bane of many a lawyer's existence.
It's been almost a year since OpenAI released ChatGPT, now a household name. Since then, professionals worldwide have been asking when AI will come for their jobs – the legal industry being no exception. Law schools across America quickly rolled out formal policies against using AI on exams, school assignments, or applications. Attorneys have attempted to use chatbots to write their briefs for them; at least one was caught when that endeavor went south and the software spun fake court cases from whole cloth. Partly in response to such incidents, some judges have banned the use of AI in their courtrooms. There have also been concerns about AI violating intellectual property law and about users getting into legal trouble for relying on AI.
But even in the short time since ChatGPT made headlines, and competitors followed suit with their own large language models, the core questions surrounding the use of the technology have not been answered. Questions specific to the legal industry include:
- Will human attorneys become obsolete, as robot lawyers replace them? Or will AI simply help humans do their legal work faster and better?
- Are there certain legal roles, tasks, and skills that AI is inherently suited to, and others it should not venture into?
- How should our systems of jurisprudence and legal education adapt to the increasingly ubiquitous – and perhaps inevitable – presence of AI?
Everyone seems to have an opinion, but speculation can only get us so far. What does hard data show? Most of the limited studies conducted so far test AI's ability to work on its own – for example, to conduct its own legal analysis. To address the question of AI's usefulness as a legal tool, however, researchers at the University of Minnesota Law School set out to see whether law students could complete their tasks better with the help of these chatbots.
The Minnesota Law School study tried to simulate the common daily duties of lawyers across different practices by having participants draft various legal documents: a complaint, a contract, a section of an employee handbook, and a client memo. The participants were 60 law students at Minnesota, one of the best law schools in the country (currently ranked #16).
At the outset, the participating students received a couple of hours of online training on how to use the AI tool: OpenAI's ChatGPT powered by GPT-4.
For the complaint-drafting task, independent legal research was not required; rather, participants were provided with the necessary information in a "closed universe" (much like in bar exam case simulations). They were given a five-hour time limit.
For the contract-drafting task, the students were provided with the key terms of the contract in "legalese" and instructed to draft a contract around those terms in plain English. They were given a two-hour time limit.
For the employee handbook-drafting task, the students had to conduct their own legal research: they were not provided with the relevant laws but had to look them up themselves. This task had a one-hour time limit.
For the client memo task, the participants were given four cases to read on which to base their analyses. They were not meant to conduct independent research. They were to use the provided materials to come up with advice for a legal client. This task had a five-hour time limit.
The students were divided into two groups, and every student had to complete each task. One group was instructed to use GPT-4 for the first two tasks, but not for the latter two. The other group had the reverse instructions: to use the chatbot for the latter two tasks, but not the first two.
The study measured two outcomes for each task: the quality of the document produced and the time required to complete it. In terms of quality, the study found only a slight average boost for students who used AI over those who did not. However, the improvement varied considerably with a student's baseline performance: those who performed worst without the chatbot improved the most when they used it, while those already producing the best-quality work saw little change.
When it came to speed, though, things were different. The results showed that using AI substantially reduced the time it took a student to complete a task, on average. Moreover, this effect did not vary nearly as much with baseline performance. In other words, no matter how fast or slow a student was to begin with, they still completed tasks significantly faster by using the chatbot.
At the end of the experiment, each student was given an anonymous survey about their experience. The students reported increased satisfaction from using AI to complete legal tasks and positive reviews for its impact on quality and speed. The students were able to tell which tasks the chatbot helped them with the most, even before seeing their grades. They also indicated that the experiment improved their skills in using AI and that they were now more likely to use similar tools in the future.
The study suggested that young lawyers appreciate the benefits of using AI in their work, with increased satisfaction and enthusiasm as a result. It encouraged leaning into AI as a crucial tool for lawyers and predicted that proactive adoption and competitive pressures will drive its integration into the legal industry. And although the study predicted that AI will make a broad range of legal services more efficient, it also noted that the gains will be uneven across practice areas, types of tasks, and lawyers' skill levels.
What advice does the study offer legal practitioners and educators? On the one hand, it recommends that law schools ban the use of generative AI in "doctrinal" first-year courses and on law school exams – in part because, as the experiment showed, AI disproportionately helps lower-performing students. On the other hand, the study's authors encourage law schools to develop upper-level courses that teach students how to use AI effectively in their legal tasks and future careers.
Will the notoriously tech-averse and traditional legal industry embrace this advice? It's much too soon to tell. But whether or not you learn it from your law school or on your own, it may be worth poking around with AI to see what it can do for you in your legal work – just make sure you fact-check!
Related Resources:
- Michigan Law School Bans ChatGPT Generated Applications (FindLaw's Practice of Law)
- Could You Get in Legal Trouble Using ChatGPT? (FindLaw's Law and Daily Life)
- Attorney Faces Sanctions for Citing Fake Cases from ChatGPT (FindLaw's Practice of Law)
You Don’t Have To Solve This on Your Own – Get a Lawyer’s Help
Meeting with a lawyer can help you understand your options and how to best protect your rights. Visit our attorney directory to find a lawyer near you who can help.