The Final Frontier for AI: Not Discriminating

By George Khoury, Esq. on August 10, 2017 | Last updated on March 21, 2019

Many science fiction fantasies posit that when AI takes over the world, the type of discrimination we know today will no longer exist, because the robot overlords just won't care about color, gender, or origin ... or will have simply killed us all off indiscriminately.

The reality is less tidy: even as AI and data science advance rapidly, systemic discrimination permeates everything they touch. Implicit bias is real, and it is being passed on to the algorithms and artificially intelligent machines that may soon dictate and predict our very lives.

A contest run by the National Institute of Justice (NIJ) sought to help develop technology to predict when and where crime will occur. However, as pointed out by the brilliant minds at Lawyerist.com, there's a problem inherent in the system: crime prediction and forecasting just aren't fair.

AI Learns From Humans

The NIJ contest entrants were tasked with creating crime forecasting models for the city of Portland. The problem lies in the data used to predict future criminal activity: because of the inherent discrimination in the policies that guide law enforcement, the most heavily relied-upon data is flawed and biased. An AI trained on that data will tend to pick up society's implicit biases. And while implicit bias can potentially be programmed out, or around, for certain data sets and uses, an AI will only ever be as good as the data it gets.
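To make that feedback loop concrete, here is a minimal, hypothetical Python sketch. The neighborhoods, rates, and function names are all invented for illustration (nothing here comes from the actual NIJ contest): two neighborhoods with identical true crime rates, one patrolled twice as heavily, and a naive forecaster that reads the resulting arrest records as real differences in risk.

```python
import random

random.seed(42)

# Hypothetical setup: two neighborhoods with the SAME underlying crime
# rate, but neighborhood B is patrolled twice as heavily, so more of its
# incidents end up in the arrest records the model trains on.
TRUE_CRIME_RATE = 0.05                    # identical in both neighborhoods
PATROL_INTENSITY = {"A": 1.0, "B": 2.0}   # B gets double the patrols
POPULATION = 10_000

def simulate_arrest_records():
    """Generate 'historical' arrest counts distorted by patrol intensity."""
    records = {}
    for hood, intensity in PATROL_INTENSITY.items():
        # An incident is only recorded if police are present to observe it;
        # detection probability scales with patrol intensity (capped at 1).
        detection_prob = min(1.0, 0.3 * intensity)
        records[hood] = sum(
            1 for _ in range(POPULATION)
            if random.random() < TRUE_CRIME_RATE * detection_prob
        )
    return records

def forecast_from_records(records):
    """A naive 'predictive policing' model: future risk = past arrest share."""
    total = sum(records.values())
    return {hood: count / total for hood, count in records.items()}

records = simulate_arrest_records()
forecast = forecast_from_records(records)
print("Arrest records:", records)
print("Forecast risk: ", {h: f"{p:.0%}" for h, p in forecast.items()})
```

Both neighborhoods offend at the same rate, yet the model "learns" that B is roughly twice as risky, purely as an artifact of where the patrols were. And if future patrols are then allocated according to the forecast, the distortion compounds in the next round of data.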

The whole "I learned it from watching you!" trope applies to AI: the data these machines learn from, if not created by humans outright, measures things related to humans.

Who's Liable for Robot Discrimination?

One of the hallmarks of AI is self-learning. Programmers don't provide data; rather, they write programs that allow the AI to source its own data, either from the internet or from its own sensory inputs, as sketched in the code below. As such, we're left with the following unanswered questions:

  • If an AI reaches a discriminatory conclusion and takes a discriminatory action that would be illegal, is the robot, the robot's owner, or the AI's programmer liable?
  • Can the provider of bad data be liable?
  • What if the bad data was just negligently provided?
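Below is a minimal, hypothetical sketch of that self-sourcing pattern. The class, feed, and labels are invented for illustration: the programmer supplies only the learning loop, the data arrives from a feed the system reads on its own, and any skew in that feed flows straight into the model's conclusions.

```python
from collections import Counter

def fetch_examples(feed):
    """Stand-in for scraping the web or reading sensor inputs."""
    yield from feed

class SelfLearningClassifier:
    """The programmer writes the loop; the machine finds the data."""

    def __init__(self):
        self.counts = Counter()  # learned associations: label -> frequency

    def learn(self, feed):
        # No human curates these examples; whatever the feed contains
        # becomes the model's experience of the world.
        for label in fetch_examples(feed):
            self.counts[label] += 1

    def predict(self):
        # Concludes whatever the sourced data made most frequent.
        return self.counts.most_common(1)[0][0]

model = SelfLearningClassifier()
model.learn(["deny", "deny", "approve", "deny"])  # a skewed 'found' feed
print(model.predict())  # -> 'deny', learned from data no human selected
```

If the feed the system finds for itself is skewed, the questions above only get harder: no human ever selected the data, yet the discriminatory output is real.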
