Scientists have combined artificial intelligence language models with functional magnetic resonance imaging (fMRI) to create a decoder that can reproduce the stories a person listened to, or imagined telling, while in the scanner. The technology has implications for people who cannot speak or otherwise communicate outwardly, such as those who have had a stroke or are living with amyotrophic lateral sclerosis. Current brain-computer interfaces require implants in the brain, but neuroscientists hope that non-invasive techniques such as fMRI could one day decipher internal speech without surgery. The decoder is in its infancy and requires extensive training for each person who uses it. Nevertheless, the work shows that an AI language model can help make informed guesses about the words that evoked a pattern of brain activity, working from fMRI scans alone.
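To make the idea concrete, here is a minimal sketch of how a language model could "help make informed guesses" at words from brain scans. It is a toy illustration, not the study's actual code: the functions `propose_continuations` (standing in for a language model suggesting likely next words) and `predict_fmri` (standing in for a per-subject encoding model that maps text to predicted brain activity) are hypothetical placeholders. The core idea shown is a beam search that keeps whichever candidate word sequences yield predicted brain responses most similar to the observed scan.

```python
# Toy sketch of decoding-by-generation; NOT the researchers' pipeline.
# `propose_continuations` and `predict_fmri` are hypothetical stand-ins
# for a language model and a per-subject fMRI encoding model.

import numpy as np


def propose_continuations(prefix: str, k: int = 5) -> list[str]:
    """Hypothetical language model: return k plausible next words."""
    vocab = ["the", "dog", "ran", "home", "quickly"]  # placeholder vocabulary
    return vocab[:k]


def predict_fmri(text: str, n_voxels: int = 100) -> np.ndarray:
    """Hypothetical encoding model: map text to a predicted voxel pattern.
    (Seeded randomness stands in for a trained regression model.)"""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(n_voxels)


def score(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Similarity between predicted and observed activity (correlation)."""
    return float(np.corrcoef(predicted, observed)[0, 1])


def decode(observed_scan: np.ndarray, n_words: int = 8, beam_width: int = 3) -> str:
    """Beam search: extend candidate word sequences with the language
    model, and keep those whose predicted fMRI responses best match
    the observed scan."""
    beams = [("", 0.0)]
    for _ in range(n_words):
        candidates = []
        for prefix, _ in beams:
            for word in propose_continuations(prefix):
                text = (prefix + " " + word).strip()
                candidates.append((text, score(predict_fmri(text), observed_scan)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]


if __name__ == "__main__":
    observed = np.random.default_rng(0).standard_normal(100)
    print(decode(observed))
```

The design choice worth noting: rather than reading words directly out of the scan, the language model proposes text and the encoding model checks each proposal against the brain data. This is why per-person training matters, since the mapping from words to voxel activity must be fit to each individual before the decoder can score candidates reliably.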