AI decodes brain signals into speech and images in scientific breakthrough

AI translates thoughts into text for paralyzed patients

A 52-year-old woman, paralyzed by a stroke 19 years ago, communicated through a brain-computer interface (BCI) that converted her imagined speech into text on a screen. The system, developed by Stanford University researchers, marked a significant step toward decoding inner monologues.

The participant, identified as T16, had electrodes surgically implanted in her brain to capture neural signals. An AI algorithm interpreted these signals, translating them into sentences she could not vocalize. Three other patients with amyotrophic lateral sclerosis (ALS) also tested the technology.
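Conceptually, such a system turns windows of multichannel neural activity into probabilities over speech sounds, which a language model then assembles into readable text. The sketch below illustrates that pipeline on synthetic data; the random linear readout, the tiny phoneme set, and all dimensions are placeholders, since the actual Stanford decoders are trained recurrent networks operating on implanted-electrode recordings.

```python
# Minimal sketch of a speech-BCI decoding pipeline, on synthetic data.
# Real systems train recurrent networks on implanted-electrode recordings;
# everything below is illustrative only.
import numpy as np

rng = np.random.default_rng(0)

PHONEMES = ["_", "HH", "EH", "L", "OW"]   # "_" is a CTC-style blank
N_CHANNELS, N_STEPS = 128, 50             # electrode channels x 20 ms bins

# 1. Synthetic "neural activity": spike counts per channel and time bin.
neural = rng.poisson(lam=2.0, size=(N_STEPS, N_CHANNELS)).astype(float)

# 2. A stand-in decoder: a random linear readout from channels to phoneme
#    logits. In practice this is a trained recurrent network, not a
#    random matrix.
W = rng.normal(size=(N_CHANNELS, len(PHONEMES)))
logits = neural @ W
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# 3. Greedy CTC-style decoding: take the best phoneme per bin, collapse
#    repeats, then drop blanks. A language model would rescore this output
#    into fluent sentences.
best = probs.argmax(axis=1)
collapsed = [best[0]] + [b for a, b in zip(best, best[1:]) if b != a]
phoneme_seq = [PHONEMES[i] for i in collapsed if PHONEMES[i] != "_"]
print(phoneme_seq)
```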

From movement to speech: The evolution of BCIs

Brain-computer interfaces have advanced rapidly since their inception in the 1960s. In early experiments, monkeys learned to move a meter needle with their neural activity, and a scientist halted a charging bull through brain stimulation. Decoding complex thoughts such as speech, however, proved far more challenging.

In 2021, Stanford researchers enabled a quadriplegic man to write sentences by imagining drawing letters, achieving 18 words per minute. By 2024, a team at the University of California, Davis, decoded attempted speech from a 45-year-old ALS patient at 32 words per minute with 97.5% accuracy, showcasing the potential for everyday communication.

Inner speech and emotional expression

Recent studies explored whether BCIs could capture inner speech, thoughts that are never vocalized. Stanford researchers tested this by asking participants to count shapes mentally. The system achieved up to 74% accuracy in decoding imagined sentences but struggled with open-ended prompts, such as recalling favorite movie quotes.

"With the current technology, we're not able to get somebody's fully unfiltered inner speech perfectly accurately. But we were able to pick up traces of inner speech pretty clearly in these different tasks."

Frank Willett, Co-Director, Neural Prosthetics Translational Laboratory, Stanford University

The study revealed that inner speech signals in the motor cortex were weaker but similar to those of attempted speech, aligning with previous neuroimaging findings.
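That "weaker but similar" relationship can be pictured as two neural activity patterns that point in roughly the same direction but differ in magnitude. The toy sketch below uses synthetic data (not the study's recordings or analysis) to compare a simulated attempted-speech pattern with a scaled-down, noisier inner-speech version of it.

```python
# Illustrative check of the "weaker but similar" observation: simulate an
# attempted-speech tuning pattern and a scaled-down, noisier inner-speech
# version, then compare direction (correlation) and magnitude (norm).
# Synthetic data, not the study's analysis.
import numpy as np

rng = np.random.default_rng(1)
n_channels = 128

attempted = rng.normal(size=n_channels)                      # pattern for one word
inner = 0.4 * attempted + 0.2 * rng.normal(size=n_channels)  # weaker + noise

corr = np.corrcoef(attempted, inner)[0, 1]
ratio = np.linalg.norm(inner) / np.linalg.norm(attempted)
print(f"pattern correlation: {corr:.2f}, amplitude ratio: {ratio:.2f}")
```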

Beyond words: Decoding tone and emotion

In 2025, researchers at the University of California, Davis, took BCIs further by decoding prosodic elements of speech such as intonation, pitch, and rhythm. This allowed an ALS patient to convey emphasis and emotion, for example by asking a question with a rising inflection or singing simple melodies.

"Human speech is much more than text on the screen. Most of our communication comes through how we speak, how we express ourselves."

Maitreyee Wairagkar, Neuroengineer, University of California, Davis

While only 60% of the words were intelligible, the breakthrough demonstrated the potential for more natural communication.
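One way to picture decoding prosody alongside content is a decoder with two readouts: a classification head for what is being said and a regression head for how it is said, such as a pitch contour. The sketch below is a toy illustration with random weights and synthetic data; the actual UC Davis system synthesizes voice directly from neural activity, which this does not attempt.

```python
# Sketch of a multi-output readout: alongside phoneme logits, regress a
# continuous pitch (F0) value per time bin so that rising intonation can
# be reproduced. Random weights and synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
N_CHANNELS, N_STEPS, N_PHONEMES = 128, 40, 40

neural = rng.normal(size=(N_STEPS, N_CHANNELS))

W_phoneme = rng.normal(size=(N_CHANNELS, N_PHONEMES))  # classification head
w_pitch = rng.normal(size=N_CHANNELS)                  # regression head (Hz)

phoneme_logits = neural @ W_phoneme           # what is being said
pitch_contour = 120 + 5 * (neural @ w_pitch)  # how it is being said

# A contour that rises toward the end of an utterance marks a question.
print("mean F0, first half :", pitch_contour[:N_STEPS // 2].mean().round(1))
print("mean F0, second half:", pitch_contour[N_STEPS // 2:].mean().round(1))
```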

Reconstructing images and sounds from brain activity

Parallel advancements have enabled scientists to recreate images and sounds from brain scans. Researchers in Japan used AI to generate descriptions of what participants visualized, combining non-invasive scans with machine learning. Similarly, studies in Israel and Japan reconstructed images viewed by individuals using fMRI data and AI image generators like Stable Diffusion.

Yu Takagi, an associate professor at Nagoya Institute of Technology, explained that the brain processes visual information in two key regions: the occipital lobe (for layout and color) and the temporal lobe (for object recognition). His team also reconstructed music from fMRI scans, though with lower accuracy due to the dynamic nature of sound.
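In code, that two-region account maps naturally onto two learned regressions: one from early-visual-cortex voxels to a diffusion model's image latents, and one from higher-level voxels to its semantic conditioning. The sketch below mimics that structure on synthetic data with scikit-learn's Ridge regression; the voxel counts, embedding sizes, and models are placeholders rather than the published setup.

```python
# Sketch of the two-pathway mapping described above, on synthetic data:
# occipital voxels are regressed onto image latents (layout and color) and
# temporal voxels onto semantic text embeddings, which a latent diffusion
# model such as Stable Diffusion would then combine into an image.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_trials = 200

occipital = rng.normal(size=(n_trials, 500))   # early visual cortex voxels
temporal = rng.normal(size=(n_trials, 500))    # higher visual cortex voxels
img_latents = rng.normal(size=(n_trials, 64))  # stand-in VAE image latents
txt_embeds = rng.normal(size=(n_trials, 77))   # stand-in text-embedding dims

latent_model = Ridge(alpha=10.0).fit(occipital, img_latents)
semantic_model = Ridge(alpha=10.0).fit(temporal, txt_embeds)

# At test time, predictions from a new scan would condition the generator:
z = latent_model.predict(occipital[:1])        # initial latent (appearance)
c = semantic_model.predict(temporal[:1])       # conditioning (content)
print(z.shape, c.shape)  # both would be fed to the diffusion model
```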

Future applications and ethical considerations

Researchers envision BCIs assisting stroke victims and psychiatric patients, and even enabling brain-to-brain communication. However, ethical concerns and technical limitations remain. Takagi noted that stimulating visual or auditory experiences for entertainment is unlikely to arrive within the next decade.

"Many people are asking about [recreating dreams]. He says he would like to recreate dreams one day, but right now, it remains extremely complicated."

Yu Takagi, Associate Professor, Nagoya Institute of Technology

Neuroengineers like Wairagkar and Willett anticipate rapid progress, with improved electrode arrays and AI algorithms enhancing accuracy and expanding applications.

Commercialization on the horizon

Companies like Neuralink are already developing commercial brain chips to bring BCI technology to the public. Wairagkar predicts widespread deployment in the coming years, transforming how humans interact with technology and each other.

"In the next few years, we will begin to see these technologies being commercialised and deployed at scale. It's very exciting."

Maitreyee Wairagkar
