When Casey Harrell's fingers stopped obeying him, he lost more than the ability to hold a coffee cup or button his shirt. He lost his voice. ALS had paralyzed the muscles he needed to speak, trapping his thoughts inside a body that could no longer express them. Then, in July 2023, surgeons at UC Davis Health implanted four tiny electrode arrays into his brain. Twenty-five days later, words he tried to say appeared on a screen in front of him. He cried. So did everyone in the room.
The 30-Minute Miracle
The speed of Harrell's success caught even the researchers off guard. After the surgical site healed, the team spent just 30 minutes calibrating the system with a 50-word vocabulary. Harrell achieved 99.6% accuracy immediately. Within 1.4 hours of additional training, the system expanded to recognize 125,000 words—essentially the full breadth of conversational English—with 90.2% accuracy.
This wasn't a laboratory curiosity that worked once under perfect conditions. Harrell used the device for 248 hours across 84 sessions over eight months. The accuracy held at 97.5%. He typed at about 32 words per minute, roughly half the speed of typical speech but fast enough for genuine conversation. The system even synthesized his pre-ALS voice from old audio recordings, so the words on screen could be spoken aloud in a voice his family recognized.
The technology relies on 256 electrodes implanted in Harrell's left precentral gyrus, a brain region that coordinates speech. The electrodes don't read thoughts in some mystical sense. They detect the neural patterns that would normally trigger his vocal muscles to form specific words. When Harrell tries to say "hello," his brain fires in a distinctive pattern. The computer learns to recognize that pattern and translates it into text.
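The core idea—learn each word's characteristic activity pattern, then match new recordings against the stored patterns—can be sketched as a toy nearest-template classifier. Everything below is illustrative: the simulated 256-electrode vectors, the four-word vocabulary, and the template matching are stand-ins, not the actual UC Davis decoder, which uses neural networks and a language model.

```python
import math
import random

random.seed(0)

WORDS = ["hello", "water", "yes", "no"]
N_ELECTRODES = 256

# Toy "neural patterns": each word the user attempts produces a
# characteristic firing-rate vector across the electrodes.
templates = {w: [random.gauss(0, 1) for _ in range(N_ELECTRODES)] for w in WORDS}

def record_attempt(word, noise=0.5):
    """Simulate the noisy activity recorded when the user tries to say `word`."""
    return [x + random.gauss(0, noise) for x in templates[word]]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def decode(activity):
    """Nearest-template classification: the decoded word is the one whose
    learned pattern best matches the recorded activity."""
    return min(WORDS, key=lambda w: distance(activity, templates[w]))

print(decode(record_attempt("hello")))  # prints: hello
```

Because the trial-to-trial noise is much smaller than the separation between word patterns, even this crude matcher decodes reliably—the real engineering challenge is that genuine neural patterns are far less cleanly separated.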
From Decoding to Streaming
What makes recent advances different from earlier attempts isn't just accuracy—it's speed. A 2021 Stanford study achieved impressive results when a paralyzed patient imagined handwriting letters, reaching 90 characters per minute. But that same year, UCSF's speech decoder managed only 15 words per minute with a 50-word vocabulary.
By March 2025, the UCSF team had cracked the streaming problem. Working with a 47-year-old woman who hadn't spoken in 18 years following a stroke, researchers developed a system that decodes speech in 80-millisecond increments—less than a tenth of a second. The patient trained by attempting 23,000 silent speech efforts across 12,000 sentences. The result: 47.5 words per minute with a full vocabulary, or 90.9 words per minute when limited to 50 common words. The system decoded and synthesized speech with 99% accuracy within that 80-millisecond window.
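The streaming idea—emit output for each fixed-size window as soon as it arrives, instead of waiting for a whole utterance—can be sketched as a simple buffering loop. The frame size, window size, and stub decoder here are assumptions for illustration, not the study's architecture.

```python
def stream_decode(frames, decode_window, frames_per_window=8):
    """Buffer incoming neural-data frames; as soon as one window's worth
    (e.g. 8 frames of 10 ms = 80 ms) has arrived, decode it and emit the
    result immediately, rather than waiting for the full sentence."""
    window = []
    for frame in frames:
        window.append(frame)
        if len(window) == frames_per_window:
            yield decode_window(window)  # partial output, ~80 ms latency
            window = []

# Stub decoder standing in for the neural network: it just reports how
# much data it saw; a real system would emit phonemes or synthesized audio.
chunks = list(stream_decode(range(24), lambda w: f"{len(w)} frames"))
print(chunks)  # prints: ['8 frames', '8 frames', '8 frames']
```

The structural point is that latency is bounded by the window size, not the utterance length—which is what makes conversational rhythm possible.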
Dr. Gopala Anumanchipalli, who led the UC Berkeley portion of the research, compared it to voice assistants: "Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses." The comparison matters because conversation requires rhythm. Pauses that stretch too long feel unnatural. Responses that arrive seconds late kill the flow of dialogue.
The Training Problem Nobody Talks About
The dramatic improvement from 15 words per minute in 2021 to nearly 50 in 2025 didn't come from better electrodes or more powerful computers. It came from better training methods. Early systems required participants to spend months building up the neural database—one 2021 study involved 48 sessions over 1.5 years, recording 22 hours of data before achieving usable results.
Newer approaches flip the script. Instead of exhaustive upfront training, they use machine learning algorithms that adapt quickly to individual neural patterns. Harrell's system reached 90% accuracy with a massive vocabulary in less than two hours of training. The woman in the 2025 UCSF study needed thousands of practice attempts, but the system improved continuously throughout, rather than requiring a long training period before becoming functional.
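The continuous-adaptation idea can be sketched as an online update rule: after each confirmed attempt, nudge the stored pattern toward what was actually recorded, so the decoder improves with use rather than requiring a long upfront training phase. This exponential-moving-average update is a simple stand-in for the studies' machine-learning methods; the learning rate and three-element patterns are illustrative.

```python
def online_update(template, observed, lr=0.1):
    """Move the stored neural pattern a small step toward the newly
    observed one. Repeated over many attempts, the template converges
    to the user's true pattern without any separate training period."""
    return [t + lr * (o - t) for t, o in zip(template, observed)]

# The user's true pattern for some word (unknown to the system at first).
true_pattern = [1.0, -2.0, 0.5]
template = [0.0, 0.0, 0.0]   # the decoder starts with no knowledge

for _ in range(50):          # each iteration = one confirmed attempt
    template = online_update(template, true_pattern)

worst_gap = max(abs(t - p) for t, p in zip(template, true_pattern))
print(worst_gap < 0.02)  # prints: True
```

Each step closes a fixed fraction of the remaining gap, so the system is partially useful almost immediately and keeps sharpening—exactly the property that matters for patients who tire easily.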
This matters for a simple reason: paralyzed patients are often sick. They tire easily. They may not have years to train a system. The faster a brain-computer interface becomes useful, the more patients can benefit.
What 97% Accuracy Actually Means
Harrell's 97% accuracy sounds nearly perfect. In practice, it means roughly one error every 30 words—a typo every other sentence. For comparison, the threshold for usable communication is generally considered to be 70% accuracy, or about 30% error rate. Anything worse than that requires so much correction that conversation becomes exhausting.
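The conversion from per-word accuracy to "words between errors" is just 1/(1 − accuracy); a few lines make the comparison concrete.

```python
def words_per_error(accuracy):
    """Average number of words between errors at a given per-word accuracy."""
    return 1 / (1 - accuracy)

print(round(words_per_error(0.97)))  # prints: 33  (a typo every ~33 words)
print(round(words_per_error(0.70)))  # prints: 3   (an error every ~3 words)
```

The gap between those two numbers is the gap between occasional typos and a conversation that is mostly correction.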
But context matters. When Harrell types "I want to go outside," the system might occasionally render one word wrong—"outside" becoming "outline," say. The meaning survives most errors. Autocorrect and predictive text help fill gaps. The system learns which words Harrell uses frequently and prioritizes them.
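That frequency prioritization can be sketched as rescoring: blend the decoder's raw confidence with a prior built from the user's own usage history, so frequent words win close calls. The scores, words, and blending weight below are hypothetical.

```python
from collections import Counter

# Hypothetical decoder output: the neural decoder alone finds two words
# almost equally plausible for one ambiguous attempt.
neural_scores = {"outside": 0.48, "outline": 0.52}

# Usage history learned over sessions: this user says "outside" often.
history = Counter(["outside"] * 30 + ["outline"] * 1)

def rescore(scores, history, weight=0.5):
    """Combine the decoder's score with a usage-frequency prior and
    return the highest-scoring candidate word."""
    total = sum(history.values()) or 1
    return max(scores, key=lambda w: (1 - weight) * scores[w]
                                     + weight * history[w] / total)

print(rescore(neural_scores, history))  # prints: outside
```

The neural evidence slightly favored the wrong word, but the prior tipped the decision—the same trick ordinary phone keyboards use.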
Still, 97% isn't 100%. For medical communication—"The pain is in my left leg, not my right"—small errors carry weight. For expressing love to family members or arguing about politics, the imperfection matters less. Users report that having imperfect communication beats having no voice at all.
Surgery, Stability, and the Long Game
All these systems require brain surgery. Surgeons must implant electrode arrays directly onto or into the cortex. UCSF's arrays rest on the brain's surface; the UC Davis and Stanford systems use arrays that penetrate slightly into the tissue to record from individual neurons. Each approach carries risks: infection, bleeding, scarring.
The question researchers track obsessively is stability. Do the electrodes keep working? Does the brain tissue around them stay healthy? Do the neural patterns remain consistent, or does the system need constant recalibration?
So far, the news is encouraging. Harrell's system maintained accuracy for eight months. The 2021 UCSF study tracked brain activity patterns for 81 weeks and found them stable. The electrodes don't appear to damage surrounding tissue in ways that degrade performance.
But eight months isn't ten years. Nobody knows yet whether these implants will function for decades or require replacement. Nobody knows if the brain will eventually route around the electrodes, changing its firing patterns in ways that break the decoder. These questions will take time to answer.
When Typing Isn't Enough
The patients in these studies can't move their arms or legs. They can't feed themselves or adjust their position in bed. A brain-computer interface that lets them type is life-changing, but it doesn't restore mobility. It gives them a voice, not a body.
That's why some researchers are working on different targets. Instead of decoding intended speech, they're trying to decode intended movement—signals that could control a robotic arm or wheelchair. Others are developing interfaces that bypass the brain entirely, using nerve signals from the spinal cord or residual muscle activity.
The speech-based systems have one advantage: they're trying to decode something the brain already knows how to do. Harrell's brain spent 45 years learning to talk. The neural patterns for speech are deeply ingrained, stable, and distinct. Teaching someone to control a robotic limb with their thoughts means creating entirely new neural patterns, which takes longer and may never feel natural.
The Voice You Remember
When Harrell's system speaks aloud, it uses his original voice—the one his wife and children remember from before ALS stole it. The researchers synthesized it from old recordings. This detail might seem cosmetic, but patients report it matters profoundly. The voice is part of identity. Hearing your own voice, even generated by a computer, feels different than hearing a generic synthetic speaker.
It's a reminder that these systems aren't just about data transmission. They're about restoring personhood. When you can't speak, people stop asking your opinion. They talk over you, about you, as if you're not there. A brain-computer interface that lets you interrupt, argue, joke, and participate doesn't just give you communication. It gives you back your place in the conversation.