
Reading melodies from the brain with Pink Floyd

A new study, published in mid-August in the prestigious journal PLOS Biology, reveals a way to decipher the songs playing inside the brain. In doing so, it also sheds light on how different areas of the brain perceive and respond to acoustic elements

Prof. Stephen Hawking and the computer he used to speak. Illustration: depositphotos.com

Stephen Hawking is known as one of the most popular physicists of the twentieth century. He imparted his passion for understanding the universe to millions of teenagers and adults, wrote books and lectured at conferences - and all this despite the fact that in the last decades of his life, he was unable to move at all. In fact, he couldn't even speak.

One of Hawking's most memorable features was his mechanical 'voice'. Precise sensors detected the subtle movement of the single muscle Hawking could still control toward the end of his life - his cheek muscle - and translated those movements into letters and words. The computer then converted the written words into a human-like voice. That voice was clear and easy to understand, but far from pleasant to hear: it sounded like a robot from a mid-20th-century science fiction movie - metallic, toneless and devoid of emotion[1].

Stephen Hawking's voice: artificial and robotic.

Although Hawking passed away in 2018, many people still need similar "voice prostheses". In fact, due to the coronavirus and the many patients connected to ventilators, which can damage the throat and vocal cords, the number of people needing such prostheses can be expected to rise in the coming years. But they won't want a mechanical voice like Hawking's - such robotic voices belong to the last century. They will want a more natural, flowing, far more human voice.

Fortunately, brain research suggests that we are beginning to understand how such voices are perceived, processed and produced. But instead of focusing on the voice itself, the solution may actually come from studies that reveal how the brain processes melodies.


More than another brick in the wall

For many years we have known that it is possible to implant electrodes in the brain and use them to collect information about neural activity - the way neurons communicate and transmit information to one another. The information is fed into sophisticated algorithms that disassemble and reassemble it to understand how the brain forms words, intentions and even thoughts[2][3]. It is not surprising, then, that music can be extracted from the brain in exactly the same way.

A new study, published in mid-August in the prestigious journal PLOS Biology, reveals a way to decipher the songs playing inside the brain. In doing so, it also sheds light on how different areas of the brain perceive and respond to acoustic elements.

To conduct studies of this type, electrodes usually have to be inserted deep into the skull - a complex, dangerous procedure that few are willing to undergo unless they have no other choice. The researchers in this case found an unusual solution: they knew that in some treatments for epilepsy, electrodes are implanted on the surface of the cerebral cortex, where they collect information from the neurons inside the brain. The patient receives various stimuli from the environment, and his brain's response is measured and recorded. A researcher can, for example, gently prick his finger, or ask him to think of a blue butterfly, and observe how the readings from the brain change.

The researcher can also play songs to the patient and examine the brain's response - and that is exactly what they did.

Between 2009 and 2015, 29 such experiments were conducted on epilepsy patients, who were played the song "Another Brick in the Wall" by Pink Floyd. The song was chosen for a reason that will surely please the fans: both older and younger patients willingly agreed to hear it. Once the song had been played and the brain recordings captured and saved to disk, the data was available for future studies - like the one published recently.

The authors of the newly published study received this data and carefully analyzed it to understand how the brain reacted to Pink Floyd. They discovered that different parts of the brain responded differently to the musical elements. Activity in one region, for example, increased in response to certain sounds, such as the guitar. Another area was more active in response to human singing. A third area - in the right half of the brain - proved critical for deciphering music: without the information extracted from it, the researchers were unable to reconstruct the song[4].
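To give a feel for how such region-specific responses are typically measured, here is a minimal sketch of extracting the amplitude envelope of a high-frequency band from a single electrode trace - the kind of feature often used as a proxy for local neural activity in auditory decoding work. The 70-150 Hz band, the sampling rate and the synthetic data are all assumptions for illustration, not details taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope(signal, fs, low=70.0, high=150.0):
    """Band-pass an electrode trace and return its amplitude envelope."""
    nyq = fs / 2.0
    b, a = butter(4, [low / nyq, high / nyq], btype="band")
    filtered = filtfilt(b, a, signal)       # zero-phase band-pass filter
    return np.abs(hilbert(filtered))        # envelope via analytic signal

# Toy example: a 2-second trace sampled at 1000 Hz, with a burst of
# 100 Hz activity (inside the band) in the second half only.
fs = 1000
t = np.arange(0, 2, 1 / fs)
trace = np.random.randn(t.size) * 0.1
trace[fs:] += np.sin(2 * np.pi * 100 * t[fs:])

env = band_envelope(trace, fs)
# The envelope is larger during the burst than before it.
print(env[:fs].mean() < env[fs:].mean())
```

A decoder would compute such envelopes for every electrode and time window, producing the feature matrix the reconstruction model works from.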

Eventually, armed with the digital signals from the various brain regions, the researchers were able to develop a computer model that reassembles the sounds. When they ran the model on the recorded brain activity, they found that it produced a melody reminiscent of the original song - including sounds resembling real human singing. In principle, researchers could use such a model to extract melodies of all kinds from the brain.
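The core idea behind such a model is regression: learn a mapping from neural features at each moment to the energy of the sound's frequency components at that moment, then invert new brain recordings back into audio. Below is a rough sketch of that idea using synthetic data and ordinary ridge regression - a stand-in, not the authors' actual pipeline; all dimensions and names are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-in data: rows are time points, columns are electrode
# features (e.g. band-power values); targets are the energies of a few
# spectrogram frequency bins at the same time points.
rng = np.random.default_rng(0)
n_times, n_electrodes, n_freq_bins = 2000, 32, 8

X = rng.standard_normal((n_times, n_electrodes))
true_w = rng.standard_normal((n_electrodes, n_freq_bins))
Y = X @ true_w + 0.1 * rng.standard_normal((n_times, n_freq_bins))

# Fit on the first 80% of time points, reconstruct the remaining 20%.
split = int(0.8 * n_times)
model = Ridge(alpha=1.0).fit(X[:split], Y[:split])
Y_hat = model.predict(X[split:])

# Mean correlation between reconstructed and actual spectrogram bins -
# a common way to score reconstruction quality.
corr = np.mean([np.corrcoef(Y_hat[:, k], Y[split:, k])[0, 1]
                for k in range(n_freq_bins)])
print(corr > 0.9)
```

In a real system, the reconstructed spectrogram would then be converted back into an audible waveform - which is where the "recorded underwater" quality described below comes from.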

Unfortunately, there is a great distance between principle and practice.


How a (computerized) song was born

The first problem becomes clear as soon as you hear the song reconstructed from the brain. There is certainly a resemblance to the original, but this is far from a recording that could be played on the radio. I'll be kind and just say it sounds as if it was recorded underwater. In other words, we've taken one step here toward recording melodies from the brain, but the journey as a whole is far from over.

Why is the quality so low? There are probably several reasons. The model is certainly not good enough, and it relies only on information from the brain regions where electrodes happened to be implanted for the epilepsy treatment. Producing a better model will require placing electrodes in more locations in the brain. Smaller electrodes, placed at higher density to capture more information, would also help.

The second problem is that the researchers' computer model is based on a single case study: Pink Floyd's song. There is no other brick in the wall - no other songs it was tested on. Anyone familiar with building computer models using artificial intelligence knows that a model needs to be trained on as many samples as possible. For a model trained only on Pink Floyd, every new song will still sound like another brick in the wall. Clearly, the road is not easy, and the journey is far from over.
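This generalization problem can be illustrated with a toy experiment: a decoder fitted to one short "song" (few training samples, many features) reconstructs its training material almost perfectly, yet does far worse on new material drawn from the very same underlying mapping. Everything below - the ridge decoder, the dimensions, the synthetic data - is invented for illustration and is not the study's setup.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_features = 100          # many electrode features...
n_train, n_test = 30, 200  # ...but only one short "song" to train on

# The same true brain-to-sound mapping generates both datasets.
true_w = rng.standard_normal(n_features)
X_train = rng.standard_normal((n_train, n_features))
X_test = rng.standard_normal((n_test, n_features))
noise = 0.5
y_train = X_train @ true_w + noise * rng.standard_normal(n_train)
y_test = X_test @ true_w + noise * rng.standard_normal(n_test)

# With more features than samples, the decoder memorizes its one song.
model = Ridge(alpha=1e-3).fit(X_train, y_train)
score_train = model.score(X_train, y_train)  # near-perfect on the training song
score_test = model.score(X_test, y_test)     # substantially worse on new data
print(score_train > 0.9, score_test < score_train)
```

The cure is exactly what the text suggests: many more training samples - that is, many more songs.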

But even before we reach the end of the journey - to a full understanding of how songs are perceived in the brain - we can already begin to reap the fruits for humanity.  


After all you are a song

I already described the first reason such research can be useful at the beginning of this article: it can help us develop "voice prostheses" for people who need them. That is an important use in its own right, but it's hard to shake the feeling that much more could be done with insights into the way the brain processes melodies.

The more important reason for understanding the connection between the brain and notes, tunes and rhythms is that music is an inseparable part of human culture. Every human civilization - from the largest empires to the smallest family tribes - has songs and melodies that everyone knows. One could almost say that shared singing makes us a being greater than the sum of its parts. When Maori warriors sang and danced the "Haka" together - a dance that preceded every war between the tribes - they became a cohesive, unified force and left their fears behind. Even the priests in the cathedrals knew very well what they were doing when they commissioned organ music from the finest composers of the past centuries. The organ's melody, which penetrates skin and muscle and resonates in the eardrums and bones, turned a stay in church into a moving religious experience in which a person confronted his own insignificance - and, at the same time, united with his believing brothers and sisters.

Music can also evoke strong emotions in us. It's enough to think of the first romantic songs we heard with our loved ones to rekindle those feelings. Or of the songs of the Land of Israel, which carry many of us back to more innocent, happier years. In short, music is important. If we understand how the brain processes it, how it produces it in the winding corridors that connect the neurons, and how it links to emotions and spurs us to action, we can help many people in a variety of ways. Maybe we could improve psychological treatments. Maybe we could design tunes carefully tailored to each brain, helping its owner concentrate on the task at hand, or get a burst of energy or optimism. Maybe we could develop melodies that touch every person's soul.

As always with basic research, the road to implementation is long, but we have already started dancing along it. There are still many challenges on the way, but one day we will be able to realize the enormous potential inherent in the connection between music and the human mind.

Sources:

[1] https://www.wired.com/2015/01/intel-gave-stephen-hawking-voice/

[2] https://www.abstractsonline.com/pp8/#!/10619/presentation/64105

[3] https://www.nature.com/articles/s41593-023-01304-9.epdf

[4] https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002176
