The brain sees no semantic difference between spoken and written language

The brain's semantic representation of a text does not depend on whether the text is read or heard. This was found by American scientists, who asked participants in an fMRI experiment to read and to listen to the same story and then compared their brain activity: regardless of the modality, the semantic map of the brain looks roughly the same. The results of the study are reported in The Journal of Neuroscience.

To understand language, a person can use two different primary channels of information processing: vision and hearing. These two channels naturally work differently simply because of the modality involved; the same information reaches the brain for further processing in different ways and engages different cognitive processes. One might therefore assume that such information is also perceived differently from a semantic point of view.

Most research on the activity of brain areas involved in the semantic processing of language, however, is devoted to only one modality, which is why little is known about how similar semantic processing is across the two channels. Scientists from the University of California, Berkeley, led by Fatma Deniz decided to examine this question in more detail. Their study involved nine people, each of whom first listened to and then read short (about ten-minute) excerpts from The Moth, a podcast devoted to storytelling. In the reading task, the story appeared on the screen word by word, at the same rate at which the words were pronounced in the audio: this was done so that the time available for processing each word was limited in the same way as in the listening condition, as sketched below. While the participants read and listened to the stories, their brain activity was recorded with fMRI.
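To give a concrete sense of this presentation scheme, the word-by-word pacing might be sketched as follows (a toy illustration only, not the authors' stimulus code; the word list and onset times are made up):

    import time

    # Hypothetical word onset times (seconds) taken from the audio track;
    # in a real experiment these would come from an alignment of the
    # podcast recording with its transcript.
    words = ["Once", "upon", "a", "time", "there", "was"]
    onsets = [0.0, 0.42, 0.75, 0.90, 1.35, 1.60]

    start = time.monotonic()
    for word, onset in zip(words, onsets):
        # Wait until the moment this word is spoken in the audio,
        # then display it, so reading is paced exactly like listening.
        delay = onset - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        print(word)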

From the brain activity recorded during the experiment, the scientists constructed a semantic map, using a methodology they had applied in earlier research. Such a map is built as follows. Brain activity is analyzed voxel by voxel, and the activity of each cubic millimeter of neocortex is correlated with the words that were heard or read at any given moment. To build the map, 985 words from a list of the 1000 most frequent English lemmas were used: each of them is matched to the times at which it appears, and to its frequency, in the texts used in the study. All the words fall into one of eleven semantic categories whose representation is reflected in the activity of the cerebral cortex; among the categories are words associated with numbers, time, places, mental states, movements, body parts and so on. Based on this analysis, the researchers built a regression model that predicts the activity of a given area of the cortex from the semantic category into which a particular word falls.
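Conceptually, this voxel-wise modeling step can be sketched as follows (a minimal illustration, assuming a matrix of per-timepoint semantic category features and a voxel-by-time fMRI matrix; the variable names, random stand-in data and ridge regularization are assumptions, not the authors' exact pipeline):

    import numpy as np
    from sklearn.linear_model import Ridge

    # Hypothetical toy dimensions: 600 fMRI time points (TRs),
    # 11 semantic category features, 5000 cortical voxels.
    n_trs, n_features, n_voxels = 600, 11, 5000

    rng = np.random.default_rng(0)
    # X: for each TR, how strongly each semantic category is present
    # in the words heard/read at that moment (stand-in random data).
    X = rng.standard_normal((n_trs, n_features))
    # Y: measured BOLD activity of every voxel at every TR (stand-in data).
    Y = rng.standard_normal((n_trs, n_voxels))

    # Fit one regularized linear model per voxel: its activity is
    # predicted as a weighted sum of the semantic features.
    model = Ridge(alpha=1.0)
    model.fit(X, Y)

    # model.coef_ has shape (n_voxels, n_features): each row is a voxel's
    # "semantic tuning" -- the basis of the semantic map.
    print(model.coef_.shape)

On held-out data, prediction accuracy would then be assessed per voxel, for example as the correlation between predicted and actual activity time courses.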

As expected, the resulting semantic maps covered large areas of the cerebral cortex in both hemispheres, beyond the speech centers: activity was observed in the lateral prefrontal cortex and parietal cortex, the ventral temporal cortex and the occipital cortex. Models built separately for each modality predicted the distribution of activity on the brain's semantic map well, both for each participant individually and overall. Interestingly, a model based on activity during reading also predicted activity during listening well, and vice versa. When activity in the two modalities was compared using principal component analysis, the correlation coefficient was 0.75.
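A rough sketch of this cross-modality check might look like the following (stand-in data and variable names; the projection onto shared principal components is only meant to illustrate the idea of comparing the two semantic maps, not to reproduce the published analysis):

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    n_trs, n_features, n_voxels = 600, 11, 5000

    # Stand-in semantic feature matrices and voxel responses
    # for the reading and listening conditions.
    X_read = rng.standard_normal((n_trs, n_features))
    Y_read = rng.standard_normal((n_trs, n_voxels))
    X_listen = rng.standard_normal((n_trs, n_features))
    Y_listen = rng.standard_normal((n_trs, n_voxels))

    # Fit a separate semantic model per modality.
    m_read = Ridge(alpha=1.0).fit(X_read, Y_read)
    m_listen = Ridge(alpha=1.0).fit(X_listen, Y_listen)

    # Cross-modality test: how well does the reading model predict
    # responses recorded during listening (and vice versa)?
    pred_listen = m_read.predict(X_listen)

    # Compare the two semantic maps: project each voxel's weight vector
    # onto shared principal components and correlate across modalities.
    pca = PCA(n_components=3).fit(np.vstack([m_read.coef_, m_listen.coef_]))
    pc_read = pca.transform(m_read.coef_)
    pc_listen = pca.transform(m_listen.coef_)
    r = np.corrcoef(pc_read[:, 0], pc_listen[:, 0])[0, 1]
    print(f"correlation of first-PC projections across modalities: {r:.2f}")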
