
Audio and the Brain


Music has been a major force in almost every culture on earth, both today and throughout much of our history as a species. An increasing number of studies have sought to shed light on how this universal constant of human society impacts the brain. This paper begins with a basic overview of how we hear sound and music through the auditory mechanism, followed by a look at current research into music’s effects on the brain: first on the auditory cortex, then on effects reaching beyond the auditory cortex into our emotional responses, and lastly on how music can affect the brain as a whole, helping to combat some of the negative side effects of aging. Analyzing music’s impact on us matters not just because of how powerful music can be, and how much it means to so many people, but also because of the positive consequences it can bring, such as helping elderly people.

Firstly, the auditory mechanism is what enables us to hear music. Sound enters through the external auditory canal in the outer ear, past the pinna, the grooved outer flap of the ear that aids in amplification, concentration on certain sounds, and localization. The sound then strikes the tympanic membrane, or eardrum, whose vibrations convert the sound energy into mechanical energy as they move the ossicles, the tiny bones of the middle ear that transmit the energy to the cochlea.

These vibrations then pass to the organ of Corti, in which hair cells translate the mechanical energy into signals transmitted by auditory neurons. These audio signals are sent to the auditory cortex to be processed and interpreted by the brain, during which the brain analyzes a song or instrument’s frequency, importance among other sounds, and location, among many other facets (Garrett).

The music that the auditory mechanism allows us to hear has wide-reaching impacts on the brain, which start with the auditory cortex. There is a debate as to how music impacts the auditory cortex, more specifically whether the brain processes lyrics and tune as one signal or two separate signals. Those who argue that the tune and lyrics are processed separately point to those with aphasia, “who can’t speak, can still hum a tune,” which might indicate separate lyrics and music processing (Hamzelou).

Those who argue that they are processed as one signal point to brain scans demonstrating that “music and language activate the same areas,” which might indicate their one signal theory is correct (Hamzelou). Researchers have found that “both arguments may be partially true” (Hamzelou). They used functional MRI scans of participants listening to music with lyrics to attempt to shed some light on this debate. The theory behind the study is that as neurons repeatedly fire, the intensity of the signal decreases, and “they become kind of lazy” (Hamzelou).

So to isolate where in the brain lyrics and melody are processed, the researchers played a series of six songs, each with the same melody as the previous one but different lyrics, reasoning that as the melody repeated, any region showing a decline in activity should be where melody is processed. They also played six songs with the same lyrics but different melodies, to find where the lyrics-processing neurons decreased their activity through repetition. Finally, as controls, they played six songs with no similarities and six songs that were identical (Hamzelou).
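The logic of this repetition-suppression design can be sketched as a small data structure. The condition names and the helper function below are hypothetical illustrations of the study’s reasoning, not the researchers’ actual stimulus code:

```python
# Sketch of the four experimental conditions described above.
# Each condition presents six songs; the flags record which
# component (melody, lyrics) repeats across the six presentations.
conditions = {
    # same melody, new lyrics -> melody-processing regions should
    # show a repetition-related decline in activity
    "repeat_melody": {"melody_repeats": True,  "lyrics_repeat": False},
    # same lyrics, new melody -> lyrics-processing regions decline
    "repeat_lyrics": {"melody_repeats": False, "lyrics_repeat": True},
    # controls: nothing shared, or everything shared
    "all_different": {"melody_repeats": False, "lyrics_repeat": False},
    "all_identical": {"melody_repeats": True,  "lyrics_repeat": True},
}

def expected_decline(cond):
    """Which neural populations should 'get lazy' under this condition?"""
    declining = []
    if cond["melody_repeats"]:
        declining.append("melody-processing neurons")
    if cond["lyrics_repeat"]:
        declining.append("lyrics-processing neurons")
    return declining or ["none"]

for name, cond in conditions.items():
    print(name, "->", expected_decline(cond))
```

Comparing activity declines across these four conditions is what lets the fMRI analysis attribute each brain region to melody, lyrics, or both.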

Based on the fMRI results, the researchers found that the superior temporal sulcus (STS) was responsible for processing the signal after it is processed in the auditory cortex. They found that “in the middle of the STS, the lyrics and tune were being processed as a single signal. But in the anterior STS, only the lyrics seemed to be processed” (Hamzelou). They did not find a specific area dedicated to processing the music. The researchers claimed that this “may be because no individual, complex processing occurs for melody,” before suggesting that it might exist for career musicians (Hamzelou). It would also be beneficial to further study people who speak tonal languages to see if the way they process lyrics and tune is different.

The researchers’ conclusion was that because the tune and lyrics both activated the middle of the STS, they are still a single signal at that point, but they are then split, and only the lyrics go on to the anterior STS (Hamzelou). If this is true, the one-signal-versus-two-signal debate may find both sides half right. Not all researchers are convinced, however. Martin Braun, a researcher at Neuroscience of Music in Sweden, argues that it is too much of a leap to say that because music and lyrics both activate the middle of the STS, they are acting as one signal. In his view, two stimuli activating the same part of the brain does not necessarily support the conclusion that the brain initially integrates lyrics and tune into one signal (Hamzelou).

However the brain processes songs, listening to music produces effects that extend beyond the auditory cortex to the brain’s emotional centers, and by extension to an individual’s emotions. A recent study (Sachs et al.) sought to examine how music affects emotional response by comparing the brains of individuals who get chills and intense reactions from listening to music with the brains of those who do not. Prior to the study, the subjects each submitted several pieces of music. They were divided into two groups: those who reliably get chills listening to music and those who do not.

Participants in the chills group submitted pieces that would frequently give them that sensation; those in the other group simply submitted pieces they enjoyed listening to. The researchers edited the tracks down to two minutes based on the sections of each song that the participants self-reported as their favorite. During the study, the subjects listened to three control songs and three of their submitted songs. While each song played, subjects rated their feelings on a 0-to-10 scale, 0 being no pleasure listening to the song and 10 being intense pleasure. If they felt a chill, they were told to hold down the space bar on a keyboard in front of them for as long as the chill lasted (Sachs et al.).
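The space-bar protocol means each chill arrives as a press/release pair of timestamps, from which a chill’s duration can be computed. This is a minimal sketch of that bookkeeping, with invented event data; the study’s actual recording software is not described in the source:

```python
# Hypothetical sketch of chill logging: the space bar is held for the
# duration of each chill, so the log is a chronological sequence of
# (timestamp_seconds, "press" | "release") events, and each
# press/release pair corresponds to one chill.
def chill_durations(events):
    """Return the duration in seconds of each chill in the event log."""
    durations, press_time = [], None
    for t, kind in events:
        if kind == "press" and press_time is None:
            press_time = t
        elif kind == "release" and press_time is not None:
            durations.append(t - press_time)
            press_time = None
    return durations

# Two chills during a two-minute track: one of 3 s, one of 1.5 s.
log = [(42.0, "press"), (45.0, "release"), (90.0, "press"), (91.5, "release")]
print(chill_durations(log))  # [3.0, 1.5]
```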

The self-reported emotional responses during the songs indicated that participants with more musical training and greater openness to new experiences tended to report higher scores. The study also distinguished two types of emotional response to a song: a physical sensation such as one’s “heart skipping a beat” or a “pit in the stomach,” and a more cognitive response such as “feelings of awe” or “losing sense of time” (Sachs et al.). However, the researchers noted that both kinds of emotional response could accompany chills, and even that chills might be both a visceral and an abstract response.

The researchers also reported that while subjects experienced chills, the inter-beat intervals (IBIs) of their hearts significantly decreased (in other words, heart rate increased), and the mean skin conductance response (SCR) saw a substantial increase. When comparing favorite and neutral pieces, there was some difference within the group of participants that did not get chills, but the difference between favorite and neutral pieces in subjects’ IBIs and SCRs was significantly more pronounced in the group that got chills (Sachs et al.).
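The link between inter-beat interval and heart rate is a simple reciprocal: beats per minute equal 60 divided by the IBI in seconds, so a shorter IBI means a faster heart rate. A quick sketch with made-up illustrative values:

```python
# Heart rate in beats per minute is the reciprocal of the inter-beat
# interval (IBI): bpm = 60 / IBI_seconds. A *decrease* in IBI is
# therefore an *increase* in heart rate.
def bpm_from_ibi(ibi_seconds):
    return 60.0 / ibi_seconds

baseline_ibi = 0.85  # 850 ms between beats (illustrative value)
chill_ibi = 0.75     # IBI shortens during a chill (illustrative value)

print(round(bpm_from_ibi(baseline_ibi), 1))  # 70.6 bpm
print(round(bpm_from_ibi(chill_ibi), 1))     # 80.0 bpm
```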

The researchers additionally used diffusion tensor imaging to determine that the participants who got chills had higher volume than those who did not in tracts running from “atlas-defined seed regions in the pSTG, towards the targets of the aIns and mPFC” (Sachs et al.). This difference was consistent across both hemispheres of the brain.

These white matter tracts were part of the “uncinate fasciculus and the arcuate fasciculus or superior longitudinal fasciculus” (Sachs et al.). The researchers ran preliminary correlation calculations between white matter volume and level of emotional response, and they found a significant positive correlation between right-hemisphere tract volume and how frequently a subject got chills while listening to music. These findings indicate that the more chills a subject got, the greater the volume of these specific right-hemisphere tracts appeared to be.
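The “preliminary correlation calculations” here are presumably standard Pearson correlations between tract volume and chill frequency. A self-contained sketch of that computation follows; the participant numbers are invented purely to illustrate what a strong positive correlation looks like:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: chills reported per session vs. right-hemisphere
# tract volume (arbitrary units) for six hypothetical participants.
chill_counts = [0, 1, 2, 4, 5, 7]
tract_volumes = [1.0, 1.1, 1.3, 1.5, 1.6, 1.9]

r = pearson_r(chill_counts, tract_volumes)
print(round(r, 3))  # 0.996: strongly positive, as the study reported
```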

There was also a significant correlation involving the left hemisphere, suggesting that the more a participant’s IBI decreased during the part of the song they enjoyed most, the greater the tract volume in the left hemisphere. This led the researchers to tentatively suggest that the pSTG, aIns, and mPFC are all “associated with the degree of physiological arousal that the participants experience while listening to music” (Sachs et al.).

The researchers conclude by establishing that previous research has indicated the mPFC and insula’s involvement in emotional reward and response, and offer a prediction that individuals who have high emotional reactions to music would have “increased structural connectivity between these regions and auditory processing regions in the superior temporal lobe” (Sachs et al.).

They also assert that their experimental data indicate marked individual differences in emotional reactions to music, and that there is a clear distinction between those who feel chills while listening to music and those who don’t. The correlation the study found, in which greater tract volume accompanies more intense reactions, may be explained by a number of factors.

These might include differences in myelination, increased branching, and the structural integrity of the white matter. Interestingly, the researchers note that this resembles findings that “people who are emotionally empathic have higher white matter integrity in the temporal and frontal lobe regions also traversed by the arcuate and uncinate fasciculi,” which may indicate a correlation between empathy and emotional response to music, and perhaps even to aesthetics as a whole (Sachs et al.). The main thrust of this study is that individuals with stronger connections between the auditory and emotional parts of the brain appear to have stronger emotional reactions to music, which supports the idea that music’s effects reach beyond the auditory cortex to shape our emotions.

Finally, music can ultimately impact far more than our emotional responses, and may be able to play a role in combating some of the negative effects of aging. Learning an instrument can help slow “memory loss, cognitive decline, and diminished ability to distinguish consonants and spoken words” in old age (Cole), and these benefits may still bear fruit even if an individual hasn’t played their instrument since they were younger. Brenda Hanna-Pladdy, a neuropsychologist at Emory University in Atlanta, explains why: learning to play an instrument can create “additional neural connections” that endure throughout an individual’s life and “compensate for cognitive declines later in life” (Cole).

A 2003 Harvard study by neurologist Gottfried Schlaug comparing adult professional musicians with adult nonmusicians found the professional musicians’ brains to have a “larger volume of grey matter” (Cole). The study additionally found “structural brain changes associated with motor and auditory improvements” in young children who studied an instrument for fifteen months (Cole). Other research has indicated that the longer one studies an instrument, the greater the beneficial effects.

A 2011 study by Hanna-Pladdy looked at “70 healthy adults between the ages of 60 and 83,” categorizing them as musicians (1–9 years of playing), intensive musicians (10 or more years of playing), and a control group of nonmusicians. On standard tests measuring “nonverbal and visuospatial memory, naming objects, and taking in and adapting new information,” the intensive musicians scored highest, the nonmusicians lowest, and the 1–9-year musicians in the middle. This suggests that the longer one plays an instrument, the more beneficial the effects. Among older individuals, “difficulty in hearing words against a noisy background is a common complaint,” a problem the study found to be less prevalent in musicians, and especially in intensive musicians (Cole).

Interestingly, a study conducted by Jennifer Bugos, an assistant professor of music education at the University of South Florida, Tampa, found that one can start playing later in life and still see beneficial impacts. She studied participants aged 60 to 85, some taking piano lessons for six months while a control group took none. Participants taking the lessons saw “more robust gains in memory, verbal fluency, the speed at which they processed information, planning ability, and other cognitive functions” (Cole). Music, it seems, can help combat many of the negative effects of aging, especially on elderly people’s cognition and hearing, and one can reap these benefits even when picking up an instrument late in life.

Music impacts our brains from the moment we hear it and process the tune and lyrics, through the emotional response we feel afterward, which may be stronger in individuals with thicker connections between their auditory cortex and emotional centers, and finally on a longer time-frame, in which studying an instrument can seemingly help slow the negative effects of aging. There are always more questions than answers, and there is still much to be learned about the way music impacts the brain.

Continued research is needed to solidify how the brain processes lyrics and tune, and there is ongoing study on how music can help sufferers of PTSD and other mental illnesses. Music can have a substantial impact on people; there is a reason it is so pervasive in culture everywhere. We are only just discovering the ways it impacts our brains, and there is an exciting future for further research in this field.

Cite this paper

Audio and the Brain. (2021, May 13). Retrieved from https://samploon.com/audio-and-the-brain/
