Prosthetic Speech Implant Turns Your Thoughts to Words


Imagine waking up one morning and being unable to speak. Your mind still churns away, trying to form words, but no sounds will come out. It's like the bleak ending of Harlan Ellison's I Have No Mouth, and I Must Scream only, you know, real. This is a fact of life for many people with varying levels of paralysis, who have lost the ability to control their vocal cords, lips, and tongue. But an experimental brain implant promises to change their lives.


People who have lost their ability to speak still have active speech centers in their brain. Seeking to tap into the neurons firing in those centers, researchers at Boston University implanted a series of electrodes into the brain of Erik Ramsey, a man who has been in a locked-in state since a brain stem injury when he was 16.

Of course, the electrodes are only there to pick up neuronal activity, so the researchers have had to come up with complex software to decode the raw signals into speech - in other words, to translate Ramsey's thoughts about speaking into actual sounds:
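To make the decoding step concrete, here is a minimal sketch of the general idea: learning a mapping from neural firing rates to intended formant frequencies. Everything here is hypothetical and illustrative (the electrode count, the synthetic data, and the simple linear decoder are all assumptions, not the researchers' actual methods):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 100 time windows of firing rates from
# 41 recorded neural units, each paired with the (F1, F2) formant
# frequencies the subject was attempting to produce.
firing_rates = rng.random((100, 41))
true_weights = rng.random((41, 2)) * 1000  # synthetic ground truth
formants = firing_rates @ true_weights     # targets in Hz

# Fit a linear decoder by least squares: rates -> formant frequencies.
weights, *_ = np.linalg.lstsq(firing_rates, formants, rcond=None)

def decode(rates):
    """Predict (F1, F2) in Hz from one window of neural firing rates."""
    return rates @ weights

f1, f2 = decode(firing_rates[0])
```

The real system has to work from noisy spike trains in real time, so this linear map is only the skeleton of the approach; but the pipeline shape is the same: neural activity in, formant frequencies out, synthesizer last.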

The software is designed to translate neural activity into what are known as formant frequencies, the resonant frequencies of the vocal tract. For example, if your mouth is open wide and your tongue is pressed to the base of the mouth, a certain sound frequency is created as air flows through, based on the position of the vocal musculature. Different muscle positioning creates a different frequency. Guenther trained the computer to recognize patterns of neural signals linked to specific movements of the mouth, jaw, and lips. He then translated these signals into the correlating sound frequencies and programmed a sound synthesizer to project these frequencies back out through a speaker in audio form.
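The synthesis end of that pipeline can also be sketched in a few lines: excite resonators tuned to a vowel's formant frequencies with a glottal pulse train. The formant values below are rough textbook averages for English vowels, not figures from the study, and the whole thing is a toy, not the lab's synthesizer:

```python
import numpy as np

RATE = 16000  # sample rate in Hz

# Approximate first/second formant frequencies (Hz) for a few vowels;
# illustrative averages only.
VOWEL_FORMANTS = {
    "i": (270, 2290),  # as in "beet"
    "a": (730, 1090),  # as in "father"
    "u": (300, 870),   # as in "boot"
}

def resonator(signal, freq, bandwidth=80):
    """Run a signal through a two-pole resonator centered at `freq`."""
    r = np.exp(-np.pi * bandwidth / RATE)
    theta = 2 * np.pi * freq / RATE
    a1, a2 = 2 * r * np.cos(theta), -r * r
    out = np.zeros_like(signal)
    for n in range(len(signal)):
        out[n] = signal[n]
        if n >= 1:
            out[n] += a1 * out[n - 1]
        if n >= 2:
            out[n] += a2 * out[n - 2]
    return out

def synthesize_vowel(vowel, duration=0.3, pitch=120):
    """Excite the vowel's formant resonators with a glottal pulse train."""
    n = int(RATE * duration)
    source = np.zeros(n)
    source[:: RATE // pitch] = 1.0  # impulse train at the pitch period
    f1, f2 = VOWEL_FORMANTS[vowel]
    wave = resonator(source, f1) + resonator(source, f2)
    return wave / np.max(np.abs(wave))  # normalize to [-1, 1]

samples = synthesize_vowel("a")  # write to a WAV file to hear it
```

Different formant pairs are what make "ee" sound different from "ah", which is why decoding just two frequencies per moment is enough to get all the vowels out.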

So far, the technique's worked, albeit slowly. With a lot of concentration, Ramsey has been able to get the system to make all of the vowel sounds in the English language. But there are only a handful of those - next come the couple dozen consonants, which could take years and a new, more sophisticated implant that can better understand what it is Ramsey is trying to say.

Source: Technology Review

Image: University of Pennsylvania



It'll probably take quite a while for this system to get all of the phonemes down. My little meatbrain is boggled by the fact that all those subtle little movements my mouth makes to pronounce the differences between v, f, or th might be read by a machine. Wow.

Hey, when they perfect this can Mr. Ramsey choose any voice to do the synthesis? I'd opt for Gregory Peck or James Mason.