
Technion researchers have identified a structured code by which neurons in the human brain represent vowel sounds during speech

The researchers were able to decode syllables directly from the neural activity that drives their pronunciation - a finding that may allow patients paralyzed in every part of their bodies to "talk" with those around them through a direct interface between brain and computer

In the picture: the two language areas where cell responses were studied during speech. The graphs show a selective code for vowels in the frontal-lobe area and a non-selective code in the temporal-lobe area (each color represents a neuron). Copyright (C) Ariel Tankus, 2012.

The Technion researchers were able to decode syllables directly from the neural activity that drives their pronunciation - a finding that may allow patients paralyzed in every part of their bodies to "talk" with those around them through a direct interface between brain and computer. In a new study published in the scientific journal Nature Communications, Professor Shay Shoham and Dr. Ariel Tankus of the Faculty of Biomedical Engineering, in collaboration with Prof. Yitzhak Fried of the University of California, Los Angeles (UCLA), describe the way in which neurons in different regions of the human brain encode different speech segments (vowels) during their pronunciation. The discovery made it possible, indirectly, to decode the speech content of the subjects from their brain activity alone. One possible application of decoding speech from brain activity is a brain-computer interface that could restore the ability to speak to paralyzed people who have lost it.

"There are diseases in which the patient is paralyzed in all parts of his body, literally 'locked' in it (locked-in syndrome) and cannot communicate with the environment, but his brain is still functioning," explains Professor Shoham, head of the Laboratory for Neural Interfaces in the Faculty of Biomedical Engineering at the Technion. "Our long-term goal is to restore the ability to speak to these patients through systems that include implanting electrodes in their brains, decoding the neural activity that encodes speech, and artificially reproducing speech sounds. For this purpose, we first wanted to understand how the information about the driving syllable is encoded in the electrical activity of a single nerve cell in the brain and of a population of cells. In the experiments we conducted, we were able to discover cell populations that participate in representation. For example, cells we recorded in the area in the center of the frontal lobe, which includes the Anterior Cingulate Cortex, really surprised us in the way they represented certain movements and not others, even though the area is not recognized as having a central role in the process of creating speech."

The experiments were conducted at the Medical Center of the University of California, Los Angeles, with the participation of epilepsy patients in whose brains depth electrodes had been implanted by Prof. Fried and his team. The purpose of the implantation is to locate the epileptic focus, the area in the brain from which the seizures begin. After the operation, the patients are hospitalized for a week or two with the electrodes in their brains, waiting for spontaneous seizures that the electrodes will record. During this period, Dr. Tankus, who was then a postdoctoral fellow at UCLA and is now a researcher at the Technion, conducted experiments in which he asked the patients to pronounce vowels, as well as syllables consisting of a consonant and a vowel, while he recorded the activity of the nerve cells in their brains.

The researchers discovered two populations of neurons, each of which encodes the information about the pronounced vowel in a completely different way. In the first population, identified in the center of the frontal lobe, each cell encodes only one or two vowels by changing its firing rate, and does not change its activity when other vowels are pronounced. In the second population, located in the Superior Temporal Gyrus, each cell responds to all the vowels tested, but the strength of its response varies from vowel to vowel. Furthermore, the researchers were able to derive a mathematical arrangement of the way the vowels are represented in the brain, and showed that it corresponds to the vowel trapezoid, which is constructed according to the position of the highest point of the tongue during pronunciation. In this way they linked the cerebral representation to the anatomy and physiology of vowel pronunciation.
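
The contrast between the two codes can be illustrated with a small simulation. The sketch below is purely illustrative: the neuron counts, firing rates, and tuning are invented, not taken from the study. It only shows what a selective code (each neuron responds to one or two vowels) versus a non-selective code (each neuron responds to all vowels with graded strength) looks like as a rate matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
vowels = ["a", "e", "i", "o", "u"]

# Selective code (as described for the frontal-lobe population):
# each neuron raises its firing rate for only one or two vowels and
# stays near baseline for the rest.  All rates (Hz) are invented.
def selective_rates(n_neurons=20, baseline=2.0, peak=25.0):
    tuning = np.full((n_neurons, len(vowels)), baseline)
    for i in range(n_neurons):
        preferred = rng.choice(len(vowels), size=rng.integers(1, 3), replace=False)
        tuning[i, preferred] = peak
    return tuning

# Non-selective code (as described for the Superior Temporal Gyrus):
# every neuron responds to every vowel, but with a graded,
# neuron-specific strength.
def broad_rates(n_neurons=20, low=5.0, high=30.0):
    return rng.uniform(low, high, size=(n_neurons, len(vowels)))

frontal, temporal = selective_rates(), broad_rates()
# In the selective code most entries sit at baseline; in the broad
# code none do.
print("near-baseline responses, frontal: ", np.mean(frontal <= 2.0))
print("near-baseline responses, temporal:", np.mean(temporal <= 2.0))
```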

As mentioned, understanding the brain's representation of speech production is also a significant step toward decoding the cellular activity by computer, as Dr. Tankus explains: "We developed a new algorithm that greatly improved our ability to identify, from the brain activity, which syllable was pronounced, and with it we reached very high recognition rates. Based on the current findings, we are now conducting experiments toward a brain-machine interface that will restore the ability to speak."
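
The article does not describe the algorithm itself, but the general idea of classifying a pronounced vowel from per-neuron firing rates can be sketched with a simple nearest-centroid decoder. This is a hypothetical stand-in, not the authors' method, and all rates and dimensions below are synthetic.

```python
import numpy as np

# Toy vowel decoder: given a vector of per-neuron firing rates for one
# trial, pick the vowel whose average training pattern is closest.
def fit_centroids(rates, labels):
    # rates: (n_trials, n_neurons); labels: (n_trials,) vowel indices
    classes = np.unique(labels)
    centroids = np.stack([rates[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def decode(trial_rates, classes, centroids):
    # choose the class whose centroid is nearest in Euclidean distance
    return classes[np.argmin(np.linalg.norm(centroids - trial_rates, axis=1))]

# Synthetic demo: 5 vowels, 20 neurons, invented rate patterns plus noise.
rng = np.random.default_rng(1)
patterns = rng.uniform(5.0, 30.0, size=(5, 20))
labels = rng.integers(0, 5, size=200)
rates = patterns[labels] + rng.normal(0.0, 2.0, size=(200, 20))

classes, centroids = fit_centroids(rates, labels)
predictions = np.array([decode(r, classes, centroids) for r in rates])
print(f"accuracy on synthetic trials: {np.mean(predictions == labels):.2f}")
```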

6 comments

  1. And in the future children will be able to copy on a test without anyone noticing, and the teacher will have no idea... And more seriously: soldiers lying in ambush will be able to communicate with each other through an earpiece, without covering their mouths. It will be interesting, because (a) they will have to learn to "almost speak" without moving their mouths, and (b) they will have to know how to control the inner voice that almost everyone has (who doesn't talk in their imagination?)

  2. Wow, it's completely clear to me that it's only a matter of time until real mind reading becomes possible, and I'm talking about just a few decades. We've already seen so many advances in this field in recent years (even retrieving images directly from the brain, operating bionic arms by thought alone, and now this), so the direction is quite clear. I really hope there will be very strict restrictions on the use of this technology.

    Think, for example, of a scenario in the not-so-distant future in which every person passing through the terminal on the way to a plane (or a spaceship, perhaps heading to one of the colonies we will have established on the moon) is asked, as part of the security check, to wear a helmet full of sensors that scans their mind, all their thoughts and memories, to make sure they have no malicious intent.

    This will be the absolute end of personal privacy, and I very much hope that we will not reach such days.

    I have no doubt that the technology itself will be available in our lifetime.
