Consider the feeling you get while listening to your favorite song. Does this song “speak” to you? Both music and language have profound impacts on society and are fundamental parts of the way we connect and communicate with each other. Although not immediately obvious, these two modes of expression share many of the same characteristics in terms of how they are processed in the brain and how we interact with them on a daily basis. So how are music and language related? In order to fully understand the significance of this relationship, we must consider the evolution of music to language over time (and the underlying neurological relationship), the uses of music as language that currently exist, and the potential benefits of this relationship going forward.
Evolution of Music to Language
Forms of music have been found in societies throughout the world dating back to the earliest civilizations. This knowledge raises the question: was music our original language, and if so, how has it evolved? Nobuo Masataka, who holds a Ph.D. in the ethological study of vocal communication in New World primates from Osaka University, is quick to point out that the evolutionary significance of music is up for debate. Some scientists don’t believe that music played any role in the evolutionary journey to our present-day speech, while others are avid believers that it did. Research on human infants does, however, find that babies are born with an inherent capacity to understand and respond to music, and this capacity sheds light on the possible evolutionary significance of music in relation to our current language system. Similarly, non-human primates like gorillas show trends of ‘singing’ behavior in their own communication. Darwin proposed that there was possibly a stage before our current semantic language system that was more similar to music than to our current language, and evidence regarding prehistoric music hints at this theory. Darwin theorized that music as a form of communication served to demonstrate “fitness” to a mate. The ability to reproduce and survive was of fundamental importance, and the ability to communicate one’s strength and viability through song was crucial. In many species, males are typically the ones to “sing,” as they are also the ones vying to impress a female mate (Miller). Among humans, both females and males participate in song, and music was generally used during communal activities like ceremonies, hunting and rituals. In the context of primates, “music” and “singing” are defined as the use of rhythm, pattern and tone in verbal communication, which doesn’t exist in our modern, traditional linguistic syntax.
Music was often used as “call and response” in activities like hunting and games in order to convey mood or emotion. “Regarding survival, societies with a musical culture may be better able to survive because the music coordinates their emotions, helps important messages to be communicated within the group (in ritual), motivates them to identify with the group, and motivates them to support other group members” (Miller). Although the original form of music was most likely the human voice (clicking, coughing, whistling), instruments such as bone flutes and percussion instruments have also been found in prehistoric sites dating back as far as 60,000 years ago (Arensburg). The presence of such instruments points to a progression in music and to the significance of music in past human civilizations.
Mammoth Bone Flute (Image via The Archaeology News Network)
Music has the ability to portray emotion, and this ability to communicate meaning and mood is fundamental to its relationship with language. An example of using music and rhythm as a form of communication is motherese, the method of communication between mother and baby. Motherese is communication involving rhythm, melody, and movement patterns. The main functions of motherese are to strengthen the bond between mother and child and to facilitate the process of language acquisition. These functions increase the chances of survival by improving the bond between mother and baby and by “strengthening the fetus’s ability to perceive the mother’s emotional state by listening and responding to the internal sounds of her body (heartbeat, digestion, etc)” (Dissanayake). The fetus’s ability to interact with the rhythmic and melodic patterns of the mother is crucial to bonding and development. Babies’ innate capacity to understand and utilize rhythm and melody for communication purposes, along with ample evidence pointing to the pervasive use of music in prehistoric civilizations, suggests that music has played an important role in the development of language. Although whether our current language evolved directly from music is still up for debate, music has been widely used as a form of communication in our biology and in civilizations past.
The ways in which we process language and music in the brain also support the notion that music and language share a common purpose. Although there is no clear answer as to whether language is a derivative of music, there does exist a significant and clear relationship between the processing of the two in the brain. The similarity between music and language lies in their construction: both are built from discrete, hierarchically organized sequences that follow similar syntactic principles (Patel). Considering this similarity, how do we cognitively process music and language? Is there overlap in the ways we perceive the two, or are they processed by separate brain systems? First, we must establish how we define music. Although there are many aspects of music, the most extensive research on music processing has been done on the syntax of music. Syntax can be defined as “a set of principles governing the combination of discrete structural elements (such as words or musical tones) into sequences” (Tillmann). Linguistic syntax is the organization of words into meaningful sentences. Just as syntax in language enables us to create an infinite number of sentences from a finite set of words, musical syntax allows us to create an infinite number of melodies and harmonies from a finite set of notes. Our brain assembles both words and notes into logical structures, and we are thus able to comprehend the final product (sentences/songs). It is also important to note that when we refer to music in regard to cognitive processing, this music refers to Western European music of the tonal period (roughly 1600–1900). The majority of research on the cognitive processing of music has focused on Western tonal music largely because it encompasses an organized and systematic pattern of tones, pitches and chords.
Essentially, this type of music is the most structured and most closely follows syntactic rules (this is not to say other forms of music do not follow syntactic structure). For a more extensive explanation of Western tonal music: https://www.youtube.com/watch?v=dV0kompCzcY
Although music and language do share similarities in terms of structured syntax with respective “building block” components, there are fundamental differences that make it difficult to treat the two equally. Are the “neural operations underlying syntactic processing” different for music and language (Patel)? The evidence bearing on this question is contradictory. Studies based on neuropsychological evidence have found a dissociation between the two kinds of processing. A clear example of this is amusia (the inability to understand musical syntax) versus aphasia (the inability to understand linguistic syntax). Certain individuals who have suffered brain damage show signs of amusia but no signs of aphasia. Similarly, individuals have been found who lack the ability to comprehend language but retain the ability to process music (Patel). These studies suggest that music and language processing are largely separate.
However, equally convincing neuroimaging evidence points to an overlap between the way we process musical syntax and linguistic syntax. Aniruddh D. Patel, author of Music, Language, and the Brain and currently a psychology professor at Tufts University, investigated this relationship using electroencephalography to measure the P600 event-related potential (ERP). This potential is “a positive brain potential elicited by syntactic (rather than semantic) processing which starts soon after the onset of a word” (Patel). When individuals were presented with musical syntax and linguistic syntax, their P600 responses did not significantly differ between the two. Furthermore, neuroimaging research has shown that processing musical syntax activates “language areas” of the brain (Patel). Studies using both magnetoencephalography and functional magnetic resonance imaging point to an activation of both Broca’s and Wernicke’s areas of the brain during harmonic processing. Both Broca’s and Wernicke’s areas are fundamental parts of language processing, and strong evidence supports the notion that musical syntactic processing similarly uses these areas.
The tension between neuropsychological evidence and neuroimaging evidence is significant, and it highlights the reality that much research remains to be done to fill the gap between these two areas. One theory that might resolve this discrepancy is the distinction between syntactic representation and syntactic processing. If we look at the fundamental differences between the syntax of language and the syntax of music, it is clear that musical syntax lacks many components of linguistic syntax. For example, linguistic syntax encompasses grammar and tenses whereas musical syntax does not. A sequence of tones and chords does not require the same dependencies as the components of a sentence. This suggests that “words have more intricate syntactic features built into their representations than do musical chords” (Patel). Thus, one way to bridge the apparent contradiction in the evidence is to say that the representations of musical and linguistic syntax differ in the brain, but the way we make sense of and synthesize this syntax is largely the same.
Music and Language in Society
Although the science behind syntax processing and the evolutionary significance of music remains unsettled, the ways in which we interact with music as a form of language are significant and pervasive in our daily lives. Ask yourself: why do you listen to music? What is the functional purpose of music in pop culture, in the media, and in advertisements? Music has the capacity to evoke emotion and to ‘speak’ in its own respective way. I recently conducted an experiment among various students attending Duke University. The basic question of the experiment was: if I expose students to songs with varying melodies (slow, medium, fast) and ask them to describe each song in a sentence, how will they respond? Will their responses correlate with the syntax of the music (melodies, tones, etc.)? My findings supported the possibility that melodies do, in fact, speak their own language. When exposed to slower melodies such as this one (https://www.youtube.com/watch?v=Oa7UVkVwV7U), students tended to respond with sentences describing sadness, nostalgia or relaxation. In contrast, upbeat melodies tended to spark sentences about happiness, excitement or love. Listen to this song (https://www.youtube.com/watch?v=zbj14ANdkl4) and describe it in a sentence. One student described this song with the sentence, “I want to jump out of bed, step out into the hallway and dance all the way to class.” Was your sentence similar?
Even though none of these songs involved any linguistic syntax, individuals were able to process the musical syntax and transfer it into a linguistic form (music to sentence). Our ability to transfer information from musical syntax to linguistic syntax is a significant feat and highlights the relationship between music and language. It also sheds light on the role this relationship plays in present-day society. Our culture in the United States is largely based on pop culture, whether we acknowledge it or not. Movies, advertisements and media play a huge role in our daily lives, and the music-language relationship is pervasive throughout this culture. For example, the quality of a movie is judged in large part by how successfully the music-language relationship is utilized. Prestigious award shows have become a fundamental component of our pop culture and commend motion pictures that strengthen the theme and message of the film through the language of music. The Dark Knight won the Grammy Award for Best Soundtrack because of its success in “creating a mood” and providing the audience with an alternative form of language to supplement linguistic dialogue and action (https://www.youtube.com/watch?v=9Wpz0D09Uv0). Listen to the soundtrack and ask yourself: what emotions are evoked? If you were to put the musical syntax you have processed into words, what would the sentence be?
Similarly, advertisements commonly utilize this music-language relationship. In order to evoke an emotional response, advertisements strategically use various forms of music to convince buyers to consume something. Studies on advertising often point to classical conditioning as the means of persuading buyers: the colors, music and effects used in the ad directly influence the way you feel about the product. For classical conditioning to influence buyers, however, music must hold the fundamental ability to convey emotion and communicate a message that can be conditioned. For example, “music with an upbeat sound might stimulate the development of beliefs that a particular color of a pen is a fun color or that it is appropriate for an active lifestyle” (Gorn). Advertisements use music’s ability to “speak” to an audience in order to supplement the item or event being portrayed. Music is a language that we don’t consciously perceive, but it serves and affects us in the same way that language does.
Music is used as its own form of language; it evokes emotion and response in much the same way a sentence or dialogue does. Although music can be used on its own as a form of language, as explored in my Duke study, it is more commonly used to supplement and strengthen linguistic syntax. This music-language relationship is used throughout our culture and is a fundamental part of our daily lives.
Benefits Moving Forward
Considering both the evolutionary and neurological evidence supporting the overlap between music and language and the extensive use of music as a language in our culture, the next step is to consider the future benefits of this relationship and how we can utilize these benefits moving forward.
An important and growing field based on this music-language relationship is music therapy. “Music Therapy is an established health profession in which music is used within a therapeutic relationship to address physical, emotional, cognitive, and social needs of individuals.” More specifically, “music therapy provides avenues for communication that can be helpful to those who find it difficult to express themselves in words” (What Is Music Therapy). Music therapy is a relatively new field that uses music as a way to communicate when linguistic, verbal language is not an option. The inability to verbally communicate may stem from a tragic injury to the brain or may be present from birth, but regardless of the reason, music can often be used as a substitute for language in such cases. Felicity North, an experienced music therapist, explores her experience with music therapy and how music serves to facilitate communication and the formation of relationships. Although we typically think of bonding and relationships as vocal and explicit, North explains how her own experience was different. When in other countries, among individuals speaking different languages and practicing unfamiliar customs, it is often hard to communicate with traditional linguistic “language.” In such situations, we rely on sound, silence and music to bridge this gap and to connect us when our traditional idea of language cannot. Sound offers us a unique ability to communicate, to be involved and to build relationships through a means other than formal language. In North’s case, she works with individuals who don’t have the ability to communicate with formal language, but sound and music give them the ability to make meaning and to “contribute to a whole that is greater than the sum of its parts” (North).
Music Therapist Brian Jantz Using Music to Communicate With a Child. (Image via CNN News)
Music gives us the ability to communicate and work together in ways that language often cannot. Language involves taking turns and listening, whereas, as North points out, music has the ability to intertwine many different parts and sounds into something even greater. If we think about songbirds singing together, or even humans making music together, the symphony of these sounds creates something truly magical and often communicates a mood or a significant message. Typical linguistic communication doesn’t have this same effect. Music therapy seeks to use music as a means of evoking responses from those who aren’t able to converse with typical syntactic language, and different melodies, from a Beethoven ballad to an instrumental rock piece, each evoke their own distinct responses.
We often don’t consciously recognize how music affects us and how often we interact with it. Before my exploration of this topic, I was blind to the notion that we use music as language every single day. Throughout this study, I became more aware of the ways in which I interact with music, and I realized what a pivotal role music has played in my life. I began to be meticulous about the music I played and the music I chose to share with others, knowing that music is just as influential as language. Would I say this aloud to someone? Would I fill my head with these thoughts? Why would I listen to music with this emotion or message during this specific event? By being aware and conscious of music’s ability to convey a message, I was able to use it to my benefit and felt more in control of my mood and of how I conveyed things to others. I encourage everyone to pay more attention to music and its significant ability to communicate emotion and mood that language often cannot. Music is its own language, and using it well can supplement communication and interactions.
Moving forward, the capacity for music to serve as a form of language can strengthen and supplement our human relationships. If we are aware of this, the knowledge can benefit us in all walks of life. To support an argument, music can be included to portray a message or emotion. In relationships, a song or melody can convey a feeling better than language alone. We should aim to be more aware of the benefits the music-language relationship can bestow and utilize them to enrich our lives.
Arensburg, B., A. M. Tillier, B. Vandermeersch, H. Duday, L. A. Schepartz, and Y. Rak. 1989. “A Middle Palaeolithic Human Hyoid Bone.” Nature 338: 758–760.
Dissanayake, E. 2000. “Antecedents of the Temporal Arts in Early Mother-infant Interaction.” The Origins of Music: 389–410.
Feld, Steven, and Aaron Fox. 1994. “Music and Language.” Annual Review of Anthropology 23: 25–53.
Gorn, Gerald J. 1982. “The Effects of Music in Advertising on Choice Behavior: A Classical Conditioning Approach.” Journal of Marketing 46: 94–101.
Masataka, N. 2007. “Music, Evolution and Language.” Developmental Science 10: 35–39.
Miller, G. 2000. “Evolution of Human Music Through Sexual Selection.” The Origins of Music: 329–360.
North, Felicity. 2014. “Music, Communication, Relationship: A Dual Practitioner Perspective from Music Therapy/Speech and Language Therapy.” Psychology of Music 42.6: 776–90.
Patel, Aniruddh D. 2003. “Language, Music, Syntax and the Brain.” Nature Neuroscience 6.7: 674.
Rothenberg, David, and Marta Ulvaeus. 2001. The Book of Music and Nature. Middletown: Wesleyan University Press.
Tillmann, Barbara. 2012. “Music and Language Perception: Expectations, Structural Integration, and Cognitive Sequencing.” Topics in Cognitive Science 4: 568–584.
“What Is Music Therapy.” American Music Therapy Association. Accessed December 5, 2015. http://www.musictherapy.org/about/musictherapy/.