AI-Generated Police Reports: Useful or Dangerous?
Generative AI is steadily making its way into every area of law, and that now includes how law enforcement writes police reports. District attorneys and criminal defense lawyers are taking note.
Police departments in cities nationwide have recently launched pilot programs that use AI-powered software to draft police reports, raising concerns about the accuracy and reliability of these chatbot-written documents. Will they withstand courtroom scrutiny?
How AI-Powered Police Reports Work
Axon, a company best known for making Tasers, has begun selling AI report-drafting software to city police departments nationwide. The software, known as "Draft One," uses artificial intelligence to transcribe audio captured on body cameras and then generate a detailed incident report in seconds. A traditional police report typically takes an officer thirty minutes to an hour to complete.
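Axon has not published Draft One's internal design, but the workflow it describes (bodycam audio transcribed by AI, then summarized into a narrative) follows a familiar two-stage pattern: speech-to-text, then language-model summarization. The Python sketch below illustrates that general pattern only; every function name in it is a hypothetical placeholder, not Axon's actual API, and the model calls are stubbed with canned output.

```python
# Illustrative sketch of a transcribe-then-summarize pipeline.
# Nothing here reflects Axon's actual implementation; the model
# calls are hypothetical stubs that show the shape of the workflow.

from dataclasses import dataclass


@dataclass
class DraftReport:
    transcript: str          # full text recovered from the bodycam audio
    narrative: str           # AI-drafted incident narrative
    approved: bool = False   # an officer must review and sign off


def transcribe_audio(audio_path: str) -> str:
    """Stage 1: speech-to-text. A real system would run an ASR model
    over the bodycam recording; this stub returns canned text."""
    return "Dispatch, I'm on scene at Fifth and Main, one vehicle involved..."


def summarize_to_narrative(transcript: str) -> str:
    """Stage 2: summarization. A real system would prompt a large
    language model to turn the transcript into a first-person
    incident narrative; this stub returns a canned summary."""
    return "On arrival I observed a single-vehicle collision at Fifth and Main."


def draft_police_report(audio_path: str) -> DraftReport:
    transcript = transcribe_audio(audio_path)
    narrative = summarize_to_narrative(transcript)
    # The output is only a draft: it is not filed until the officer
    # reviews it, corrects any errors, and marks it approved.
    return DraftReport(transcript=transcript, narrative=narrative)


if __name__ == "__main__":
    report = draft_police_report("bodycam_recording.wav")
    print(report.narrative)
```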
A pilot program in Oklahoma City shows that AI-generated police reports can be accurate, according to reporting by the Associated Press. One officer interviewed claimed that in some cases the reports were even more precise than those written by the officers themselves. Body cameras capture every word, and generative AI can review and summarize all of it, rather than relying on an officer's memory of events.
Legal and Procedural Concerns
Even so, the technology's promising early results have not stopped legal scholars and others from questioning the integrity of these reports and whether they can be relied on in high-stakes criminal cases.
What happens if a police officer must testify about an AI-generated report, only to remember the circumstances slightly differently? A police officer disagreeing with their own report could be devastating to a prosecution's case. That concern has led some pilot programs, such as Oklahoma City's, to use the technology only for traffic offenses. Other cities, however, have gone all-in on the new tech. In Lafayette, for example, officers may use Draft One for any report. The Fort Collins pilot program, meanwhile, uses it for nearly all cases, though the department told the Associated Press that the software performs worse in noisy environments.
Questions about the accuracy and effectiveness of technology-assisted policing tools are not new. Artificial intelligence already powers a range of tools used by police officers, including facial recognition applications. Each raises its own concerns, from ethical implications to racial bias to the admissibility of AI-generated materials in court.
Will police reports powered by artificial intelligence become more common? If so, the entire criminal justice system will have to adapt, integrating these technologies safely and implementing safeguards for their accuracy and reliability.
Although these tools can improve officers' efficiency, it is uncertain whether they come with ethical guardrails sufficient to protect the public from problems like police targeting, misinformation, and discrimination. That is particularly true because officers can choose whether to use the AI at all: if they don't like the AI-generated report, they can write their own. In that respect, the technology differs from body cameras, which, in addition to protecting officers and aiding prosecutions, can serve as evidence in civil rights lawsuits when officers violate suspects' constitutional rights.
Still, as Axon notes, up to 40% of a police officer's time is spent writing reports. Savings of that magnitude will surely entice many police departments to consider this or similar technology.
Related Resources:
- Rapper Pras Michel Contests Conviction Because of Lawyer’s Use of AI (FindLaw's Practice of Law)
- Bumpy Road Ahead for All in Adoption of AI in the Legal Industry (FindLaw's Practice of Law)
- 11th Circuit Experiment Holds Useful Lessons on the Use of Generative AI (FindLaw's Practice of Law)