Scientists have for the first time produced clear, intelligible speech by computer processing of human brain activity.
Researchers at the Zuckerman Institute at Columbia University were able to reconstruct the words a person heard by monitoring their brain activity.
The breakthrough is an important step towards creating a brain-computer interface capable of reading the thoughts of people who are unable to communicate verbally.
“Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating,” said Professor Nima Mesgarani, a principal investigator at Columbia University who led the study.
He added: “We have a potential way to restore that power. We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”
Prof Mesgarani and his team used artificial intelligence to recognise the patterns of activity that appear in someone’s brain when they listen to someone speak.
By making use of a computer algorithm similar to those found in smart assistants like Amazon’s Alexa and Apple’s Siri, the neuroengineers were able to synthesise speech from these brain patterns using a robotic voice.
The algorithm, called a vocoder, was trained using recordings from epilepsy patients treated by Dr Ashesh Dinesh Mehta at the Northwell Health Physician Partners Neuroscience Institute.
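The core idea described above is decoding speech features from patterns of neural activity. The study's actual vocoder and recordings are not public here, but the principle can be illustrated with a toy sketch: simulated "electrode" signals are generated as a mixture of hypothetical speech-spectrogram features, and a simple linear decoder is fitted to recover those features. All names, sizes and the linear model are illustrative assumptions, not the researchers' method.

```python
import numpy as np

# Toy illustration only -- NOT the study's pipeline. We pretend neural
# activity is a noisy linear mixture of speech-spectrogram features,
# then fit a least-squares decoder to reconstruct those features.
rng = np.random.default_rng(0)

T, n_freq, n_elec = 500, 16, 64      # time steps, spectrogram bands, electrodes
S = rng.random((T, n_freq))          # hypothetical speech spectrogram features
W = rng.normal(size=(n_freq, n_elec))
X = S @ W + 0.1 * rng.normal(size=(T, n_elec))  # simulated neural recordings

# Least-squares decoder mapping neural activity back to spectrogram features
D, *_ = np.linalg.lstsq(X, S, rcond=None)
S_hat = X @ D

# With this much signal relative to noise, reconstruction correlates
# strongly with the original features
corr = np.corrcoef(S.ravel(), S_hat.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

In the real system, the decoded spectrogram-like features would then be passed to a vocoder, which turns them into an audible synthetic voice.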