The big picture: A team from the University of California, Davis, has announced a breakthrough brain-computer interface (BCI) capable of translating brain signals into readable text with an extremely high level of accuracy. The newly published findings describe a system that can predict the user's intended words with an error rate of less than 3 percent. Researchers reviewing the findings say the communication tech's error rate is on par with that of a non-disabled speaker reading a paragraph aloud.

The study's results, published on August 14 in the New England Journal of Medicine, outline the remarkable progress made by Casey Harrell, a 45-year-old father living with amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease. According to the study, Harrell's symptoms began five years before he enrolled.

The disease rendered Harrell unable to communicate effectively with his family, friends, and others around him. To put his challenges in perspective, the average person speaking conversational English communicates at roughly 160 words per minute. According to speech analysts working with Harrell, the disease left him communicating at a mean (±SD) rate of just 6.8±5.6 correct words per minute. But within one month of a procedure to implant new microelectrode arrays in his brain, Harrell's ability to communicate clearly and at will with his family, including his daughter, was restored with remarkable accuracy.

The signal-processing setup, built around Blackrock Neurotech's NeuroPort system, relayed brain signals from the 3.2 mm electrodes implanted in Harrell's brain to computers running publicly available research software, the Backend for Realtime Asynchronous Neural Decoding (BRAND) platform. BRAND then processed and decoded the neural signals in real time, allowing Harrell to communicate in clear, accurate sentences and phrases.
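To make that division of labor concrete, here is a deliberately simplified, hypothetical sketch of that kind of real-time flow in Python: binned neural features stream in from the hardware, a decoder turns each bin into candidate words, and a running transcript is assembled as the user attempts to speak. The class and function names below (FakeFeatureSource, StubDecoder, run_realtime_decoding) are illustrative stand-ins, not the actual NeuroPort or BRAND interfaces.

```python
from typing import List, Optional


class FakeFeatureSource:
    """Stand-in for the stream of per-electrode features produced by the implanted arrays."""

    def __init__(self, n_bins: int):
        self._bins = iter(range(n_bins))

    def next_bin(self) -> Optional[List[float]]:
        try:
            next(self._bins)
            return [0.0] * 256        # placeholder feature vector, one value per recording channel
        except StopIteration:
            return None               # no more data: the stream has ended


class StubDecoder:
    """Stand-in for the model that maps neural features to words."""

    def predict(self, features: List[float]) -> List[str]:
        return []                     # a real decoder would return the most likely words here


def run_realtime_decoding(source: FakeFeatureSource, decoder: StubDecoder) -> str:
    """Core loop: pull one bin of features at a time, decode it, and grow the transcript."""
    transcript: List[str] = []
    while True:
        features = source.next_bin()
        if features is None:
            break
        transcript.extend(decoder.predict(features))
    return " ".join(transcript)


if __name__ == "__main__":
    # With the stubs above this prints an empty transcript; the point is the shape of the loop.
    print(run_realtime_decoding(FakeFeatureSource(n_bins=50), StubDecoder()))
```

In the real system, of course, the decoding step is where the research lives; the loop around it is the easy part.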

The experiment was highly successful from the start. During the first speech session, the team asked Harrell to attempt pre-defined sentences drawn from a sample 50-word vocabulary. The implanted neuroprosthesis decoded Harrell's attempted speech with 99.6% accuracy. A second session using the same 50-word vocabulary resulted in every attempted sentence being decoded correctly, yielding an impressive word error rate of 0%.
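For context on what a "word error rate" means here: it is conventionally computed as the minimum number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the number of words in the intended sentence. The short sketch below shows that standard calculation; it is a generic illustration of the metric, not code from the study.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance between the intended and decoded sentences,
    normalized by the number of intended (reference) words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of edit distances between word prefixes.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)


# Example: one wrong word out of eight gives a 12.5% word error rate.
print(word_error_rate("i would like to see my daughter today",
                      "i would like to sue my daughter today"))  # 0.125
```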

Later sessions expanded the neuroprosthesis's test vocabulary from 50 words to more than 125,000 words, which covers the majority of spoken English. After several hours of additional training, the neuroprosthesis decoded the participant's attempted speech with 90.2% accuracy. Performance continued to improve over the following months, reaching a sustained translation accuracy of more than 97.5%.

The study comes from a team led by UC Davis neuroscientist Sergey Stavisky and neurosurgeon David Brandman. While Harrell is the first person to test the new neuroprosthesis and interface technology, the results look like an extremely promising step toward restoring communication for people with disabilities.

According to Stavisky, "The first time we tried the system, he [Harrell] cried with joy as the words he was trying to say correctly appeared on-screen. We all did."