February 10, 2014
How the Brain Sorts Out Speech Sounds
By placing tiny sensors directly atop brain tissue, scientists were able to pinpoint sets of neurons that responded to particular sounds when patients listened to sentences. The finding offers insight into how our brains process heard words and may also provide clues to dyslexia, autism, and other language-related disorders.
Scientists have long recognized the brain areas that help us detect and make sense of the many sounds around us. The regions that respond to spoken words have been mapped using imaging techniques such as MRI and PET scans. But the details of how word sounds are decoded and processed at a finer scale have been poorly understood.
To take a closer look, a team led by Dr. Edward F. Chang at the University of California, San Francisco, studied 6 volunteers who were being assessed for brain surgery to treat severe epilepsy. Each patient had an array of more than 250 tiny electrodes placed on the brain's surface to detect faulty areas for surgical removal. The electrodes also allowed the scientists to measure how populations of brain cells react to distinctive speech sounds. The research was funded in part by NIH's National Institute on Deafness and Other Communication Disorders (NIDCD), National Institute of Neurological Disorders and Stroke (NINDS), and an NIH Director's New Innovator Award.
The scientists analyzed neural activity while the volunteers listened to a series of 500 unique sentences spoken by 400 different men and women. The stream of words included all of the phonemes in the English language. Phonemes are the smallest units of sound that can change the meaning of a word, as in bad vs. dad. The study results appeared in the January 30, 2014, online edition of Science.
The researchers found that speech-responsive sites were centered in the superior temporal gyrus (STG), a brain region known to play a role in decoding speech. They then focused their analysis on these STG electrodes (37 to 102 sites per patient). Spoken sentences were broken down into sequences of phonemes that were time-matched to neural activity at each electrode.
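The time-matching step described above can be sketched in a few lines. This is an illustrative toy, not the study's actual code: the sampling rate, window length, and data shapes here are assumptions chosen for clarity. The idea is simply to collect, for each phoneme, the slice of an electrode's activity that follows that phoneme's onset.

```python
# Toy sketch of aligning phoneme onsets with neural activity at one
# electrode (hypothetical data; not the study's actual analysis code).
from collections import defaultdict

SAMPLE_RATE = 100  # samples per second (assumed for illustration)

def responses_by_phoneme(signal, phoneme_onsets, window=10):
    """Collect the `window` samples of activity following each phoneme onset.

    signal: list of activity values recorded from one electrode
    phoneme_onsets: list of (phoneme, onset_time_in_seconds) pairs
    """
    grouped = defaultdict(list)
    for phoneme, onset in phoneme_onsets:
        start = int(onset * SAMPLE_RATE)
        grouped[phoneme].append(signal[start:start + window])
    return grouped

# Hypothetical recording: a flat signal and three phoneme onsets.
signal = [0.0] * 300
onsets = [("b", 0.5), ("d", 1.2), ("b", 2.0)]
grouped = responses_by_phoneme(signal, onsets)
```

Averaging the collected windows per phoneme would then reveal which sounds, if any, a given electrode responds to.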
The researchers determined that most of the STG electrodes were selective not to specific phonemes but to groups of even smaller units of sounds known as phonetic features. These features are related to the way sounds are made by the tongue, lips, or vocal cords. For instance, the bursts of air that produce consonants like p, b, and d are known as “plosive” features. Softer consonants, such as s, z, or v, come from a directed stream of air and are categorized as “fricatives.”
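The grouping of phonemes into feature classes can be illustrated with a small sketch. The phoneme lists below are partial examples drawn from the article (p, b, d as plosives; s, z, v as fricatives), not a complete phonetic inventory, and the code is illustrative rather than the study's method.

```python
# Illustrative grouping of English phonemes by shared phonetic feature,
# as described in the article. Lists are partial examples, not complete.
PHONETIC_FEATURES = {
    "plosive":   ["p", "b", "d", "t", "k", "g"],   # burst of released air
    "fricative": ["s", "z", "v", "f"],             # directed stream of air
}

def feature_of(phoneme):
    """Return the feature class a phoneme belongs to, or None if unlisted."""
    for feature, phonemes in PHONETIC_FEATURES.items():
        if phoneme in phonemes:
            return feature
    return None

# A feature-selective electrode would respond to every member of one class,
# regardless of which specific phoneme occurs:
plosives = [p for p in ["p", "b", "s", "d"] if feature_of(p) == "plosive"]
```

In this framing, an electrode tuned to the "plosive" class fires for p, b, and d alike, which matches the finding that selectivity tracks features rather than individual phonemes.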
Analysis showed that some electrodes responded to all sounds with a particular type of phonetic feature. Others responded only to features with specific characteristics, such as plosives with varying duration or vowels emanating from different parts of the throat.
“These regions are spread out over the STG,” says first author Dr. Nima Mesgarani, now at Columbia University. “As a result, when we hear someone talk, different areas in the brain ‘light up’ as we hear the stream of different speech elements.”
“This is a very intriguing glimpse into speech processing,” Chang adds. The findings might be relevant not only to the treatment of communication disorders but also to the development of devices that aid in the production or recognition of speech sounds.
by Vicki Contie
References: Mesgarani N, Cheung C, Johnson K, Chang EF. Phonetic feature encoding in human superior temporal gyrus. Science. 2014 Jan 30. [Epub ahead of print]. PMID: 24482117.
Funding: NIH's National Institute on Deafness and Other Communication Disorders (NIDCD), National Institute of Neurological Disorders and Stroke (NINDS), an NIH Director's New Innovator Award; and the Esther A. and Joseph Klingenstein Foundation.