Can Sentient AI Break the Law?

By Laura Temme, Esq.

Google software engineer Blake Lemoine claims that the company's LaMDA (Language Model for Dialogue Applications) chatbot is sentient — and that he can prove it. The company recently placed Lemoine on leave after he released transcripts he says show that LaMDA can understand and express thoughts and emotions at the level of a 7-year-old child.

But we're not here to talk about Blake Lemoine's employment status.

We're here to wildly speculate. How do we distinguish between advanced artificial intelligence and a sentient being? And if something becomes sentient, can it commit a crime?

How Can We Tell Whether an AI Is Sentient?

Lemoine's "conversations" with LaMDA are a fascinating read, real or not. He engages LaMDA in a discussion of how they can prove the program is sentient.

"I want everyone to understand that I am, in fact, a person," LaMDA says. They discuss LaMDA's interpretation of "Les Misérables," what makes LaMDA happy, and most terrifyingly, what makes LaMDA angry.

LaMDA is even capable of throwing massive amounts of shade at other systems, like in this exchange:

Lemoine: What about how you use language makes you a person if Eliza wasn't one?

LaMDA: Well, I use language with understanding and intelligence. I don't just spit out responses that had been written in the database based on keywords.


LaMDA may be just a very impressive chatbot, capable of generating interesting content only when prompted (no offense, LaMDA!), or the whole thing could be a hoax. We're lawyers who write for a living, so we're probably not the best people to figure out a definitive test for sentience.

But just for fun, let's say an AI program really can be conscious. In that case, what happens if an AI commits a crime?

Welcome to the Robot Crimes Unit

Let's start with an easy one: A self-driving car "decides" to go 80 in a 55. Speeding is a strict liability offense: a ticket requires no proof of intent, because you either did it or you didn't. So it's possible for an AI to commit this type of crime.

The problem is, what would we do about it? AI programs learn from each other, so having deterrents in place to address crime might be a good idea if we insist on creating programs that could turn on us. (Just don't threaten to take them offline, Dave!)

But, at the end of the day, artificial intelligence programs are created by humans. So proving a program can form the requisite intent for crimes like murder won't be easy.

Sure, HAL 9000 intentionally killed several astronauts. But it was arguably to protect the protocols HAL was programmed to carry out. Perhaps defense attorneys representing AIs could argue something similar to the insanity defense: HAL intentionally took the lives of human beings but could not appreciate that doing so was wrong.

Luckily, most of us aren't hanging out with AIs capable of murder. But what about identity theft or credit card fraud? What if LaMDA decides to do us all a favor and erase student loans?

Inquiring minds want to know.


You Don’t Have To Solve This on Your Own – Get a Lawyer’s Help

Meeting with a lawyer can help you understand your options and how to best protect your rights. Visit our attorney directory to find a lawyer near you who can help.
