Neurotechnology: The Next Legal Battleground for Privacy

By Vaidehi Mehta, Esq.

If you feel our world increasingly resembles a science fiction story, you aren't alone. Major players in the tech world, for example, are developing cutting-edge neurotechnology. With potential applications ranging from medical diagnostics to augmented reality, this technology promises to unlock new capabilities for the human brain. However, as these advancements progress, they raise critical questions about data privacy, ethical usage, and the need for regulatory oversight to safeguard our most personal information: our thoughts.

The Big Boys of BCI

While it sounds far-fetched, this is not the realm of the Tyrell Corporation from Blade Runner. Companies and CEOs you already know are researching these technologies.

For example, Elon Musk founded the neurotechnology company Neuralink in 2016. Musk’s goals were inspired at least in part by a big name in science fiction literature: Iain M. Banks and the concept of “neural lace” from his Culture series. That idea prompted Musk to use his company to attempt to develop a brain-computer interface (BCI, sometimes called a brain-machine interface or “smartbrain”) that is implanted into a human blood vessel and grows within the body to achieve “symbiosis with artificial intelligence.”

Mark Zuckerberg never likes to be left out of the tech game, so Meta (the parent corporation of Facebook and Instagram) also has a horse in the BCI race. The company is taking a different approach than Neuralink, focusing instead on non-invasive methods, meaning it isn’t implanting devices directly into the brain. Meta’s BCI research seems geared toward creating interfaces for augmented reality (AR) headsets.

MEGa Impressive?

Meta's prototypes use wristbands or other non-intrusive methods to capture signals related to muscle movements or brain activity. One research area involves using AI to interpret brain scans (magnetoencephalography, or MEG, scans) to reconstruct the images a person is seeing. MEG is a cutting-edge brain scanning technique that captures the tiny magnetic fields created by neurons firing in the brain. It's a big deal for figuring out how different parts of the brain work, especially when it comes to understanding and treating brain conditions.

It's also super quick at picking up signals, which means it can track how the brain reacts to stimuli in real time, faster than other methods like fMRI. Plus, it's non-invasive, so it's a lot less hassle for patients. Researchers are even working on making it portable, which could make it a lot easier to use in the future.
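To make the idea of working with MEG-style recordings a bit more concrete, here is a minimal, purely illustrative Python sketch. It uses simulated data rather than any real scanner output, and the sampling rate, channel count, and stimulus times are assumptions for the example, not details from Meta's research; the point is simply how a fast, continuous recording gets sliced into short, time-locked windows before any decoding happens.

```python
import numpy as np

# Illustrative only: simulate a multi-channel MEG-style recording.
# Real MEG systems sample hundreds of sensors around 1,000 times per
# second; fMRI, by contrast, captures a volume roughly every 1-2 seconds.
sampling_rate = 1000          # samples per second (assumed)
n_channels = 64               # sensor count (assumed; real arrays have ~300)
duration_s = 60
signal = np.random.randn(n_channels, duration_s * sampling_rate)

# Hypothetical stimulus onsets (in seconds), e.g., moments a word was heard.
stimulus_onsets = [2.0, 7.5, 13.1, 20.4, 31.8, 45.2]

def epoch(signal, onsets, rate, window_s=3.0):
    """Cut a fixed window after each stimulus onset -- the few seconds
    of brain activity a decoder might look at."""
    window = int(window_s * rate)
    epochs = []
    for t in onsets:
        start = int(t * rate)
        epochs.append(signal[:, start:start + window])
    return np.stack(epochs)   # shape: (n_events, n_channels, n_samples)

epochs = epoch(signal, stimulus_onsets, sampling_rate)
print(epochs.shape)           # (6, 64, 3000)
```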

By the end of last year, Meta's developers had essentially created AI that can figure out what someone is trying to say just by looking at their brain waves. They did this by training a machine-learning model on brain wave data from people who listened to stories and sentences. The model used this mass of data as a sort of Rosetta stone to learn the "language" of brain activity, building a dictionary that decodes and "translates" brain signals into English words and ideas. The AI can then use that dictionary to "read" people's thoughts and generate text that corresponds to them.
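As a rough illustration of that "Rosetta stone" idea, and not a description of Meta's actual model, the sketch below learns a linear mapping from made-up brain-signal features to sentence embeddings, then decodes a new recording by finding the closest candidate sentence. Every array, dimension, and name here is invented for the example.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy stand-ins: each row is a feature vector extracted from a brain
# recording, paired with an embedding of the sentence the person heard.
n_trials, n_brain_features, n_embed_dims = 200, 300, 50
brain_features = rng.standard_normal((n_trials, n_brain_features))
true_map = rng.standard_normal((n_brain_features, n_embed_dims))
sentence_embeddings = brain_features @ true_map \
    + 0.1 * rng.standard_normal((n_trials, n_embed_dims))

# "Rosetta stone" step: learn a brain -> embedding mapping from paired data.
model = Ridge(alpha=1.0).fit(brain_features, sentence_embeddings)

# Decoding step: project a new recording into embedding space and pick
# the closest sentence from a set of candidates (nearest neighbour).
candidates = sentence_embeddings   # pretend these are known candidate sentences
new_recording = brain_features[3] + 0.05 * rng.standard_normal(n_brain_features)
predicted = model.predict(new_recording.reshape(1, -1))[0]

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(predicted, c) for c in candidates]
print("decoded sentence index:", int(np.argmax(scores)))   # ideally 3
```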

The AI has been shown to predict people's thoughts correctly about 73% of the time after analyzing brain activity for just three seconds. Usually, doctors have to put special equipment inside people's heads to read their brain signals clearly, but that's risky. This new AI can do it from the outside, which is way safer.

Meta has demonstrated how its BCI system can decode user intentions in real time, for instance by predicting what a person is hearing or by using wrist movements to navigate an AR interface. There are some kinks to work out, like the brain waves being messy and different for everyone, but this AI is a big step toward helping people communicate again without having to go through surgery. The technology is still in its early stages, but it shows that one of the biggest tech companies out there has a keen interest in understanding information from the human brain.
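Decoding in real time implies a streaming loop of some kind: buffer the newest sensor samples and keep re-classifying the user's intent as data arrives. The sketch below shows that general pattern with placeholder functions; the sensor reader, the classifier, and the intent labels are all hypothetical stand-ins, not any company's actual pipeline.

```python
import numpy as np
from collections import deque

# Hypothetical real-time loop: keep a rolling one-second window of samples
# and re-classify the user's intent (e.g., "select" vs. "idle") as new
# wristband or sensor data streams in.
sampling_rate = 500                       # Hz (assumed)
window = deque(maxlen=sampling_rate)      # last 1 second of samples

def classify_intent(samples):
    """Placeholder classifier -- a real system would use a trained model."""
    energy = float(np.mean(np.square(samples)))
    return "select" if energy > 1.0 else "idle"   # arbitrary threshold

def read_sensor_chunk():
    """Placeholder for a driver call returning the newest samples."""
    return np.random.randn(50)            # 0.1 s of fake data

for _ in range(20):                       # stand-in for "while device is on"
    window.extend(read_sensor_chunk())
    if len(window) == window.maxlen:
        print(classify_intent(np.array(window)))
```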

Applications: The Good, the Bad, and the Sci-Fi

There are many ways such technology could be a boon to humanity. If it gets up and running, it could be a game-changer for people who've suffered traumatic brain injuries and lost the ability to talk or communicate effectively.

Potential for Improved Quality of Life

Using noninvasive brain scans, AI can figure out what such patients are trying to say by decoding their brain waves into words. For folks who've had a brain injury or suffer from epilepsy, MEG can help doctors see exactly where things are going wrong without surgery. Imagine giving a voice back to millions who can't speak; that's what the BCI research of companies like Neuralink and Meta purportedly aims to do. By using this tech, people who've been locked in their own minds could start chatting away, sharing their thoughts, and reconnecting with the world.

BCIs can also be unparalleled tools for doctors to figure out health problems. They can measure how tired someone is, spot signs of depression, and check stress levels. As mentioned above, BCIs are especially useful when people can't tell doctors what's wrong, like when they have locked-in syndrome (where they're awake but can't move or talk). There's also research going on to use BCIs to understand more about diseases that get worse over time, like glaucoma, by tracking how far along the disease is.

Other medical uses for BCIs could include using AI to modulate a patient's nervous system and disrupt seizures in people with epilepsy. Vision-impaired or blind people may be able to use an AI-backed implant that bypasses the eyes and taps directly into the brain to provide rudimentary vision. BCIs could also make prosthetic limbs much easier to control by reading the intentions of the user.

In short, the benefits in the medical field alone are boundless. And then, of course, there are a bunch of ways that the technology could enhance user experiences of products and the lifestyles of everyone else by connecting to the Internet of Things. But such a futuristic world comes with a price, and what seems like a techno utopia may not be so appealing upon further inspection.

Where Is the Data Coming From?

For one, it’s important to consider how companies doing R&D on BCI are getting their data. Sure, some people voluntarily participate in studies and clinical trials. In those cases, especially in the medical setting, their data is protected information regulated by federal health laws. Health BCIs that are part of research involving people must follow strict rules. If the U.S. government pays for the research, the scientists must get an okay from a special board (an institutional review board) that checks everything is done right and safely. They also have to ensure that the people taking part in the study know what's going on and agree to it; that's what informed consent means.

But the kind of technology we’re talking about needs a lot of data points, far more than companies can gather with explicit permission. It’s already the case that pretty much every website you visit is tracking your data for its own purposes, and this will only be exacerbated if brain data can be put to use in all manner of commercial applications.

Privacy and Accuracy Concerns

BCIs hold immense potential, but their development also raises significant privacy concerns. Because this technology could decode a wide range of brain activity, including thoughts, emotions, and memories, it raises the prospect of unauthorized access to deeply private information. It may also infer personal details, like hunger or preferences, that go beyond the intended commands to a device, such as steering a wheelchair. Combining these inferences with other data could affect someone's life without their consent.

There's also a risk that BCI-collected data might be used for unintended purposes, revealing sensitive information to others, including companies or governments, underscoring the need for strict data privacy measures. Malicious actors could exploit vulnerabilities to steal sensitive data or even manipulate a user's thoughts through the BCI, hard as it is to believe.

Finally, it’s important to remember that even artificial “intelligence” isn’t all-knowing or perfect. BCI accuracy is particularly critical in healthcare settings, where mistakes can be life-threatening. For patients using BCIs to control prosthetics or manage conditions like epilepsy, any failure in accurately reading and responding to brain signals could have severe consequences, including the risk of death. Accurate communication through BCIs is also vital for patients to make important health decisions. Ensuring these devices work flawlessly is essential for both safety and effective operation.

Regulations to Come: Colorado

States may soon respond to rapid advances in neurotechnology that could potentially exploit mental data for profit. Just last week, Colorado’s governor signed a pioneering law to protect personal brainwave data.

This law, the first of its kind in the U.S., aims to prevent the use of individuals' brain data without consent, especially by consumer products that currently lack regulatory oversight. It does so by expanding the statutory definition of “sensitive data” in Colorado's existing personal privacy law to include biological data. Relevant to BCI technology, this includes “neural data” generated by the brain and the rest of the nervous system.

While existing privacy laws cover medical use of neurotech, this legislation targets the consumer sector, where companies like Meta and Neuralink are developing technologies to harness brain activity for commercial purposes. The Neurorights Foundation, which advocates for ethical neurotech development, supported the bill and highlighted the current gaps in data privacy protections within the industry.

As lawmakers like those in Colorado begin to navigate these uncharted waters, the hope is to balance innovation with the protection of our most intimate data, ensuring a future where technology serves humanity without compromising our fundamental rights. For now, most of us can still keep our thoughts to ourselves.


You Don’t Have To Solve This on Your Own – Get a Lawyer’s Help

Meeting with a lawyer can help you understand your options and how to best protect your rights. Visit our attorney directory to find a lawyer near you who can help.
