Artificial intelligence can map how a complex brain network works to understand sentences

It’s amazing how the brain works and how it understands sentences, since the same words in a sentence can take on a completely different meaning when rearranged. Using artificial intelligence (AI) and neuroimaging, researchers mapped the complex network that allows the brain to understand sentences.

The team from the University of Rochester Medical Center reported how people understand the sentences they read in their study titled “Deep artificial neural networks reveal a distributed cortical network encoding propositional sentence-level meaning.”

The study marks a major scientific breakthrough in understanding how, and in which parts of the brain, the meaning of a sentence is constructed.

(Image: Pixabay)
Using AI to Map how a Complex Brain Network works to understand sentences

More than words

Andrew Anderson, Ph.D., assistant professor of neuroscience at the University of Rochester’s Del Monte Institute for Neuroscience and lead author of the study, said it was not known whether the integration of a sentence’s meaning occurs in a specific part of the brain, such as the anterior temporal lobes, or whether a more complex neural network is involved.

The study, published in the Journal of Neuroscience, suggests that the meaning of a sentence is greater than the words spoken; in other words, a sentence is greater than the sum of its parts.

For example, Devdiscourse reported that the sentences “the car ran over the cat” and “the cat ran over the car” contain exactly the same set of words, yet mean very different things because the words are arranged differently.
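To make the point concrete, here is a minimal Python sketch (our own illustration, not part of the study): a simple bag-of-words comparison cannot tell the two sentences apart, even though their meanings differ entirely.

```python
from collections import Counter

# The article's example: the same words, rearranged.
sentence_a = "the car ran over the cat"
sentence_b = "the cat ran over the car"

# A bag-of-words view sees identical word counts, so it cannot
# distinguish the two sentences.
print(Counter(sentence_a.split()) == Counter(sentence_b.split()))  # True

# Yet the meanings are opposite, which is why any model of sentence-level
# meaning must be sensitive to word order and structure, not just word identity.
```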


Untangling a highly complex neural signal

The researchers used AI and neuroimaging to untangle the highly complex neural signals in the brain that underlie language processing, according to Science Daily.

They collected data from study participants who underwent fMRI scans while reading sentences. Their findings showed that brain activity spanned a network of several brain regions, including the anterior and posterior temporal lobes, the inferior parietal cortex, and the inferior frontal cortex.

InferSent, an AI model developed by Facebook that produces unified semantic representations of sentences, was used to predict patterns of fMRI activity across those brain regions as the participants read sentences.
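For readers curious how such an analysis is typically set up, the following is a minimal sketch of the general encoding-model approach, not the study’s actual pipeline: sentence embeddings (here random stand-ins with InferSent’s 4096-dimensional size) are mapped to voxel responses with a regularized linear regression, and prediction is scored on held-out sentences. All names and numbers below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the study's data:
#   embeddings: one sentence vector per sentence read in the scanner
#   voxels:     the fMRI response of one brain region to each sentence
n_sentences, embedding_dim, n_voxels = 240, 4096, 500
embeddings = rng.standard_normal((n_sentences, embedding_dim))
voxels = rng.standard_normal((n_sentences, n_voxels))

# Encoding-model idea: learn a linear map from sentence representations
# to brain activity, then test how well it predicts held-out sentences.
model = Ridge(alpha=1.0)
scores = cross_val_score(model, embeddings, voxels, cv=5, scoring="r2")

# With random stand-in data the score hovers around chance; with real
# embeddings and real fMRI data, above-chance prediction in a region is
# taken as evidence that the region encodes sentence-level meaning.
print("mean cross-validated R^2:", scores.mean())
```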

“This is the first time we have applied this model to predict brain activity within these regions, and it provides new evidence that contextual semantic representations are encoded across a distributed language network, rather than at a single site in the brain,” the researchers said.

The team believes their findings could help in understanding medical conditions such as Alzheimer’s disease. The researchers are developing similar methods aimed at understanding how patients’ ability to understand language breaks down in the early stages of Alzheimer’s disease.

In addition, the researchers said they are interested in extending their models in the future to predict brain activity as language is produced, not just read.

“The current study had people reading sentences; in the future we are interested in moving forward to predict brain activity as people speak sentences,” Anderson said.


Check out more Science and Brain news and information on Science Times.
