A recent study has shown how artificial intelligence can interpret brain signals and reconstruct the music a person is listening to. In an article published in the scientific journal PLOS Biology, scientists reconstructed a version of a song by the British band Pink Floyd from the brain activity of study participants.
The researchers recreated a version of the song “Another Brick in the Wall, Part 1” by collecting brain-activity data from the auditory cortex while participants listened to the music. They report that the reconstruction sounds very similar to the original, but with “robotic overtones.”
According to Ludovic Bellier, one of the study’s lead authors and a neuroscientist at the University of California, Berkeley, the music was reconstructed from recordings of the brain’s cortical activity. It is important to note, however, that the scientists did not decode the audio directly: the reconstruction relied on an intermediate representation of the song, a spectrogram, which was then converted back into sound.
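To make the spectrogram idea concrete, here is a minimal, illustrative sketch of what such a representation is: a time-frequency map of audio built with a short-time Fourier transform. The window length, hop size, and test tone below are arbitrary choices for illustration, not parameters from the study.

```python
import numpy as np

def spectrogram(signal, win_len=256, hop=128):
    """Magnitude STFT: slide a Hann window over the signal and take
    the magnitude of the FFT of each frame."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop:i * hop + win_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (frames, freq bins)

# 1 second of a 440 Hz tone sampled at 8 kHz stands in for the song
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

spec = spectrogram(tone)
peak_bin = spec.mean(axis=0).argmax()
peak_hz = peak_bin * sr / 256  # frequency of the strongest bin
print(f"spectrogram shape: {spec.shape}, peak near {peak_hz:.0f} Hz")
```

The peak lands near 440 Hz, as expected for the test tone. In the study, a model predicted a spectrogram like this from neural signals, and a separate inversion step turned it back into audible sound, which is part of why the result carries the “robotic” quality the authors describe.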
“We’ve reconstructed the classic Pink Floyd song Another Brick in the Wall directly from human cortical recordings, providing insight into the neural basis of music perception and future brain decoding applications. Right now, technology is more like a keyboard for the mind. You can’t read your thoughts on a keyboard. And it sounds kind of robotic—there’s definitely less of what I call freedom of speech,” Bellier said.
Brainwaves and Pink Floyd
For the study, the researchers used data collected from 29 participants who had electrodes implanted in their brains to treat epilepsy; in all, the scientists analyzed recordings from 2,668 electrodes captured while the participants listened to the Pink Floyd song. From these data, a machine learning algorithm was trained to correlate the music with the participants’ brain activity.
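The decoding step can be sketched in miniature. The study’s pipeline used regularized linear models to map electrode activity onto the song’s spectrogram; the toy version below does the same with synthetic data and a closed-form ridge regression. All dimensions and names here are illustrative stand-ins, not the paper’s actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 500      # time points
n_electrodes = 64    # toy electrode count (the study analyzed 2,668)
n_freq_bins = 32     # spectrogram frequency bins to decode

# Synthetic "ground truth": a linear mapping from neural activity to
# the spectrogram, plus a little noise
true_weights = rng.normal(size=(n_electrodes, n_freq_bins))
neural = rng.normal(size=(n_samples, n_electrodes))
spectrogram = neural @ true_weights + 0.1 * rng.normal(size=(n_samples, n_freq_bins))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + aI)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Train on the first 80% of time points, decode the held-out 20%
split = int(0.8 * n_samples)
W = ridge_fit(neural[:split], spectrogram[:split])
predicted = neural[split:] @ W

# Per-frequency-bin correlation between decoded and actual spectrogram
corrs = [np.corrcoef(predicted[:, f], spectrogram[split:, f])[0, 1]
         for f in range(n_freq_bins)]
print(f"mean decoding correlation: {np.mean(corrs):.2f}")
```

On this clean synthetic data the correlation is very high; real neural recordings are far noisier, which is one reason the reconstructed song only approximates the original.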
According to the study, the scientists’ motivation is to understand how brain activity patterns behave while music is being heard and, in the future, to use these data to restore musical perception in people with brain damage.
“As this entire field of brain-machine interfaces progresses, this offers a way to add musicality to future brain implants for people in need. It gives you the ability to decipher not just the linguistic content, but some of the prosodic content of speech, some of the affectivity. I think this is really where you try to crack the code, and that’s where we started,” said University of California professor and study author Robert Knight.
Source: Tec Mundo

I’m Blaine Morgan, an experienced journalist and writer with over 8 years of experience in the tech industry. My expertise lies in writing about technology news and trends, covering everything from cutting-edge gadgets to emerging software developments. I’ve written for several leading publications including Gadget Onus where I am an author.