Psycholinguistics/Language and Music

Introduction

"Music is the universal language of mankind".  ~Henry Wadsworth Longfellow. [1]

At first glance, there are some clear connections between language and music. When we dig deeper, adopting the psycholinguistic perspective, we find the two are even more closely related than we might have initially thought. From their basic components, uses and generativity, to the brain systems involved and the research being conducted, there is a long-standing tie between the psychology of language and of music. This page serves as an overview of the fascinating world of both language and music, examining why the two are so often linked together, as well as where their differences become evident. Past and current research into the possible benefits of using music to help with language and memory will also be discussed. We will then look at some of the problems that arise when the brain's processing of music goes wrong, and finally present you with an opportunity to test what you have learned. Whether we consider ourselves aspiring musicians or simply love to listen, music is a huge part of our lives, and is quickly becoming one of the most fascinating areas of modern psycholinguistics.



Breaking It Down: Comparing the Basics of Language and Music

Similarities and differences between music and language can be seen even when each is broken down into its simplest components. From the basic sounds and meanings to the overall sentence, story or song, music and language are closely related.

♦ Basic Sound Units: Language, in its most basic form, can be broken down into phonemes [2]. Music is made up of different notes [2].
♦ Vocabulary: Languages often use letters or symbols (English uses a 26-letter alphabet) to form words. Music has twelve different notes in an octave, which then repeat at a higher or lower pitch [3] (see the short pitch sketch following this comparison).
♦ Logic: Both language and music use successions of sounds that can be judged as either "right" or "wrong". Certain words or sentences in a language make sense, while others do not. The same is true for music [4]: some note sequences sound good together, while others do not (though this can also vary across cultures) [5].
♦ Production: There is a range of production abilities in language. We all fluently use some form of language (spoken, signed, written, etc.) to communicate, though some of us are more limited in our communication abilities and others very adept. Music shows a similar continuum: while we can all enjoy listening to it, not everyone can play, sing or write music the way we can use language, though some people are masters of musical production [3].
♦ Interpretation: Interpretation is essential to both language and music, though its definition differs slightly for each. Interpreting a language means understanding it, so that a spoken word or sentence means the same thing to many people. In music, interpretation need not mean understanding; it can refer simply to the production or performance of a piece. Listeners may not all share the same interpretation of a piece of music, yet performers would still play the same notes [4].
♦ Function: Language is used as a means of communication and is essential for creating social bonds. Music can be used for communication as well, but is primarily a source of entertainment or a means of personal expression [2].
♦ Building Blocks: Language is composed of phonemes, which form morphemes, which syntax combines into sentences that are built into stories and language itself. Music begins with notes, which can make chords or sequences, which are combined into musical phrases and complete songs; affect gives feeling, emotion and meaning to the piece [2].
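To make the "twelve notes in an octave" row above a little more concrete, here is a minimal sketch in Python. It assumes twelve-tone equal temperament and the common A4 = 440 Hz tuning reference (conventions not specified by the sources cited here), and simply steps through the twelve notes to show that the thirteenth step lands on the same note name at double the frequency, i.e. one octave higher.

```python
# Minimal illustration (assumes 12-tone equal temperament, A4 = 440 Hz).
# Each semitone step multiplies the frequency by 2**(1/12), so twelve steps
# double the frequency and return to the same note name one octave up.
A4 = 440.0
note_names = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

for step in range(13):                      # 0..12 semitones above A4
    freq = A4 * 2 ** (step / 12)
    print(f"{note_names[step % 12]:<2} {freq:7.2f} Hz")

# The final line prints "A   880.00 Hz": the same note name, one octave higher.
```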


Building on the Basics: Generativity

The term generativity is often used when talking about aspects of language composition. Generativity (similar to the concept of creativity) is defined as the ability to generate, or the power to produce, something [6]. Using knowledge of the basic units and components that make up language and speech, or music, it is possible to create a vast variety of combinations from these simple building blocks.

Language

 
A basic English syntax tree showing the generative abilities of grammar

Language is made up of phonemes (sounds) and morphemes (the smallest units of meaning), which create words that are combined using syntax and semantics, giving words order, meaning and grammar [7]. Using these words, the generativity of language is infinite [3]: it is possible to create an endless number of different sentences simply by forming different combinations of these phonemes and morphemes. Some of these sentences are common and make sense to us, while others seem absurd, but are still "sentences" as far as language is concerned. This is where knowledge of grammar comes into play. Grammatical rules can be applied across situations, allowing us to identify sentences as correct or incorrect [3]. Using these rules, we are also able to interpret the meanings of sentences that we have never heard before, a necessary skill when learning to converse with others. One of the most common examples of generativity in language comes from the linguist Noam Chomsky, who put forward the sentence "colourless green ideas sleep furiously" as an example of a sentence that is grammatically correct but has no understandable meaning [8]. This is a perfect example of a sentence that we have likely never heard before, and know to be semantically anomalous, but can still identify as a legitimate sentence, having learned the rules of grammar and language. Using our knowledge of how language works, it is possible to create an infinite number of sentences.
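To see this generativity in action, here is a minimal sketch in Python using a toy, invented sentence frame (adjective + adjective + noun + verb + adverb; the word lists and the frame are illustrative assumptions, not taken from the sources above). Even this tiny vocabulary licenses over a hundred distinct sentences, including Chomsky's famous grammatical-but-meaningless example.

```python
import itertools

# A toy sentence frame: Adjective + Adjective + Noun + Verb + Adverb.
# The word lists are invented purely for illustration.
adjectives_1 = ["colourless", "warm"]
adjectives_2 = ["green", "new"]
nouns = ["ideas", "dogs", "melodies"]
verbs = ["sleep", "run", "sing"]
adverbs = ["furiously", "quietly", "happily"]

# Enumerate every sentence the toy frame allows.
sentences = [
    " ".join(words)
    for words in itertools.product(adjectives_1, adjectives_2, nouns, verbs, adverbs)
]

print(len(sentences))   # 2 * 2 * 3 * 3 * 3 = 108 distinct sentences
print(sentences[0])     # "colourless green ideas sleep furiously"
```

Adding a single word to any list multiplies the number of possible sentences, which is the combinatorial heart of generativity; real grammars go further still, because recursion (clauses inside clauses) removes any upper bound on sentence length.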

Music

Music is similar to language in its hierarchical structure. Notes can be combined into chords or sequences, which can create the melody or harmony line of a song. Songs then have affect, which adds emotion and meaning to the music being played. Again, using these notes and musical "rules", the generativity of music is infinite [3]. Endless numbers of melodies can be created from various combinations of musical notes, and it is possible for everyone to understand and perceive them as music. Like language, some combinations are understood to be correct, while others sound incorrect. Often, this perception of "right" or "wrong" combinations of notes is based on culture, musical upbringing and influence [5]. It has been shown that music and musicality are products of our cultural and social interactions. Though the ability to appreciate and understand music is universal, what music is and does differs between cultures, and the boundaries of "music" may overlap only minimally between two different groups [9]. That said, each culture is able to create a diverse collection of melodies, and everyone is able to perceive these as musical or "un-musical" depending on their cultural understandings.
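A parallel sketch for music, again in Python: assuming nothing more than the seven note names of one major scale (a deliberate simplification; the sources do not restrict melodies this way), it counts the short melodies available at each length. The exponential growth mirrors what happens with sentences above.

```python
from itertools import product

# Seven note names of a C major scale (an illustrative simplification).
scale = ["C", "D", "E", "F", "G", "A", "B"]

# Each position in a melody can take any of the 7 notes,
# so the number of melodies of a given length is 7 ** length.
for length in range(1, 9):
    print(f"{length}-note melodies: {7 ** length}")

# A few concrete 3-note sequences from the full enumeration:
print(["-".join(m) for m in product(scale, repeat=3)][:5])
# ['C-C-C', 'C-C-D', 'C-C-E', 'C-C-F', 'C-C-G']
```

Rhythm, dynamics, octave choice and the cultural "rules" discussed above multiply these numbers further, which is why, like sentences, genuinely new melodies keep appearing.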

Overall, language seems more universal, whereas music is more culturally based. Both music and language show evidence of a hierarchical, syntactic structure, whether in the phrasing of sentences or in the shaping of melody lines. Both also have vast generative powers: we can create an infinite number of sentences in language, or melodies in music, and we will always be coming across new sentences or sound sequences that we have never heard before.


Brain Systems: Beyond the Surface

Language

 
Graphical depiction of the brain, highlighting Broca's Area (blue), Wernicke's Area (green) and the Primary Auditory Cortex (pink)

The ability to speak human languages and to process and understand complex sentences has long been considered the distinguishing feature that sets humans apart from all other species intellectually [10]. What evolutionary advances have allowed for this process in the human brain?


♦ Several specific areas of the brain have been identified as crucial for language processing. Broca's Area (first identified by Paul Broca in 1861) is located in the left inferior frontal cortex, and was the first "language center" of the brain to be singled out [11]. Broca observed that damage to this area affected an individual's ability to express verbal or written language, even in the absence of deficits in the person's understanding of language or of any physical deficits that could otherwise have explained the impairment.
♦ Shortly after, a second language center was identified. Carl Wernicke discovered an area in the posterior left temporal lobe, which he named Wernicke's Area. He realized that damage to this area caused individuals to produce incoherent or nonsensical speech, implying a deficit in understanding language rather than in producing it [11].
♦ Viewed most simply, the left hemisphere of the brain (with respect to language) can be seen as a neural loop, with Broca's Area at the frontal end controlling language output, and Wernicke's Area at the posterior end controlling the processing of language input [11]. A large bundle of nerve fibres, known as the arcuate fasciculus, completes the "loop" by joining these two areas [2]. This configuration is not exclusive to people who rely on spoken communication, as the same layout and functional distribution is seen in individuals who use sign language [11].
♦ One other area of the brain that is crucial for language was described much more recently. Brought to attention by Norman Geschwind, the inferior parietal lobule (also known as Geschwind's territory) is connected to both Broca's and Wernicke's areas by nerve bundles, and is a key addition to our understanding of the neuroscience of language [11]. Information can therefore travel between the two classic language areas not only by way of the arcuate fasciculus, but also through this inferior parietal lobule. Its location is important: it lies at the junction of the auditory, visual and somatosensory cortices, and is connected to all three [11]. The neurons in this region are therefore capable of processing the auditory, visual and functional aspects of words, a necessity for understanding language. Though very simple versions of the inferior parietal lobule exist in other species, it seems to have been one of the last evolutionary additions to the human brain, which helps explain why our language abilities outstrip those of related species [11].
Though the majority of language processing occurs in the left hemisphere for most people, the right hemisphere plays an important role as well. Without it, it is almost impossible to distinguish between the literal meaning of language (what words are being used) and the figurative meaning (how the words are being used). People with damage to the right hemisphere are unable to distinguish between denotation and connotation; they cannot understand aspects of language such as emotional connotation or sarcasm [11].

Music

There are many different ways of engaging with music (hearing it, imagining it, playing it), and it can be argued that each kind of music processing relies on different areas of the brain.
♦ If you were simply to hear music, you would be relying on the auditory cortex, in which different cells respond to different frequencies of sound. The core regions of the auditory cortex are responsible for analyzing pitch and volume, whereas the surrounding regions analyze timbre, rhythm and melody [12].
♦ If you were to imagine, or think about, a song in your head, the areas of the brain used would differ slightly. Again, the auditory cortex would be engaged, but only in a few isolated areas and to a lesser degree than when you listen to music. Imagining a song in your head also requires you to first recall the lyrics and melody from long-term memory stores, and then hold the song in your short-term working memory. This means also using the inferior frontal gyrus (memory recall) and the dorsolateral frontal cortex (working memory) [12].
♦ One of the most widespread activities, in terms of brain areas used, is playing music. The auditory cortex is needed to hear the music, and different feedback processes occur here to let us know whether we have achieved the correct pitch and melody. The visual cortex is also active, as we read (or even simply imagine) a piece of sheet music. The parietal lobe is needed for several of the complex tasks such as estimating the positioning of our fingers to play a certain note. The motor cortex helps us perfect the fine motor skills needed to play. Our sensory cortex is activated with each touch of the instrument as we play. The frontal lobe is used in the planning and overall coordination of our movements and activities, and the cerebellum works with the motor cortex to allow the production of the smooth, flowing movements often required in playing an instrument. The premotor area is also involved in the timing and sequence of our movements, but the exact role of this area of the brain in processing has still not been concretely defined [12].
♦ Have you ever had the sensation of getting goosebumps when listening to a piece of music? This emotional reaction to music uses other areas of the brain as well. When you get chills from a piece of music that you like, the reaction is driven by structures deep within the brain, such as the ventral tegmental area. These inner structures are responsible for the pleasure of "reward", such as receiving food when we are really hungry. If you hear a song you really like, the amygdala (which is responsible for negative emotions, like fear) is also inhibited. This emotional reaction can vary depending on our feelings about the song [12].

Clearly, the various aspects of music production and processing draw on a wide range of skills and abilities, and therefore on a large number of brain areas and structures. When all of these areas of the brain work together, we are able to develop a full appreciation of every aspect of music.

Similar Systems

Music and language are processed using very complex brain systems, but it turns out that processing both music and language requires many of the same areas of the brain.

One very interesting similarity appears when comparing the brain activity involved in language processing with that of musicians in particular. It is widely accepted that, for the majority of people, much of language processing occurs in the left hemisphere of the brain. Recently, it has been found that musicians use these left-hemisphere areas for music processing as well, whereas non-musicians typically rely on the right hemisphere. This has led to the belief that musicians may actually process music more analytically than non-musicians, as they engage more of the left hemisphere, similar to the areas used for language processing [13].

Many areas of the brain that are used in processing language and music are almost identical. With the help of positron emission tomography (PET), it is now possible to visualize the areas of the brain that are active while processing language or music. Some of the areas showing the most overlap include the motor cortex (the primary motor cortex and surrounding areas), Broca's area, the primary and secondary auditory cortices, the cerebellum, the basal ganglia and thalamus, and the temporal pole [14].

Also, though there is a general belief that language primarily uses the left hemisphere while music primarily uses the right, several structures in both hemispheres are active for both language and music. In one PET study, when music was played, the front of the brain and the right hemisphere were primarily engaged, though the majority of visible areas showed some activity. When language was processed, the front of the brain and the left hemisphere were primarily used, though again structures in many different areas were activated. When both language and music were processed at the same time, brain activity was concentrated in the frontal structures of both hemispheres [15].

Problems: What Happens When Things Go Wrong?

As with many areas of neuroscience, there is always the question of what effects may be seen when imperfections or abnormalities occur in the brain. Brain damage can occur as a result of injury (acquired damage that alters brain function) or of congenital effects (abnormalities present from birth). [16]

 
Amusia (or "musical deafness") can affect an individual's ability to correctly detect pitch

It is believed that humans are born musically inclined, and that damage to the related brain areas causes problems in the ability to process music properly. One of the common results of brain damage, as it pertains to music processing, is amusia. Amusia is sometimes called "musical deafness", because it affects an individual's ability to properly detect pitch (to recognize out-of-tune notes, for example), as well as to recognize familiar songs. [16] In terms of music production, amusia can affect an individual's ability to sing, whistle or hum, to write music (musical agraphia), or to play an instrument (musical apraxia). [16] It becomes difficult to discriminate between melodies, and songs that were well known before the brain damage occurred may become unfamiliar. Individuals with congenital amusia may also experience difficulty discriminating and identifying changes in intonation (the rise in pitch at the end of a question, for example), which additionally affects their language processing abilities. [16]

Music and Memory: What Have We Learned?

The idea that music may help improve memory has been well researched over the years. With new information about the brain and new techniques for neuroimaging, it has become possible to extend this research further each year, targeting more specific areas of the brain and observing different groups of people. For example, we are now able to observe the role of music in the brains of infants through early childhood. Research has shown that babies who actively engage in music education or musical participation show increased brain development [17]. As well, many of these musical abilities (though they differ across cultures) appear to be essentially innate and universal.

Musical training early in life may also help improve one's memory. MRI scans allow researchers to view and measure specific areas of the brain, and they have shown that, on average, the left temporal regions are larger in musicians than in non-musicians [18]. This area of the brain is involved in verbal memory tasks, and should therefore allow musicians to perform more successfully on tests of verbal memory. Chan, Ho and Cheung collected evidence showing that individuals who had been musically trained before the age of 12 did indeed perform better on tests of verbal memory than individuals who had not received this early training [18]. Environmental factors certainly come into play, but the finding still supports the idea that music could be highly beneficial to memory and brain development, especially when training occurs early in life.

Try it yourself!

As we have seen, music and language are very closely related. They use similar areas of the brain in processing, can be paired together to assist in memory formation, and are very similar in their structure and generativity. Clearly, music is an important addition to the study of linguistics and psychology, and enhances our understanding of how these fields have developed. Now, using everything we have learned about language and music, test how much you know about these topics below!

Interactive Learning Exercises

Apply your knowledge of psycholinguistics, language and music here

References

  1. Quote Garden (2006). Quotations about music and language.
  2. Newman, A. (2011). Psycholinguistics 3190 lecture slides, Dalhousie University.
  3. Newman, A. (2010). Psycholinguistics 3190 lecture slides, Dalhousie University.
  4. Adorno, T. W. (1956). Music and language: A fragment. In Quasi una Fantasia: Essays on Modern Music. Accessed online February 2, 2011.
  5. Radice, M. (2011). Anthropology 1100 lecture slides, Dalhousie University.
  6. Dictionary.com (2011). Generativity: define generativity. Accessed online February 7, 2011.
  7. Nevid, J. S. (2009). Psychology: Concepts and Applications. Boston, MA: Cengage Learning. 592 pages.
  8. Wikipedia (2011). "Colorless green ideas sleep furiously". Accessed online February 2, 2011.
  9. Cross, I. (2001). Music, cognition, culture, and evolution. Annals of the New York Academy of Sciences, 930, 28-42. doi:10.1111/j.1749-6632.2001.tb05723.x
  10. Patel, A. D. (2007). Music, Language, and the Brain. New York, NY: Oxford University Press. 528 pages.
  11. Dubuc, B. (2002). The Brain from Top to Bottom. Canadian Institutes of Health Research: Institute of Neurosciences, Mental Health and Addiction. Accessed online February 7, 2011.
  12. Fick, S., & Shilts, E. (2006). This is your brain on music. Canadian Geographic Magazine. Accessed online February 3, 2011.
  13. Gordon, H. (2006). Brain waves. Canadian Geographic Magazine. Accessed online February 3, 2011.
  14. Brown, S., Martinez, M. J., & Parsons, L. M. (2006). Music and language side by side in the brain: A PET study of the generation of melodies and sentences. European Journal of Neuroscience, 23, 2791-2803.
  15. Chudler, E. (2011). The Brain and Nervous System. Human Diseases and Conditions. Accessed online February 18, 2011.
  16. Wikipedia (2011). "Amusia". Accessed online March 1, 2011.
  17. Hodges, D. A. (2000). Implications of music and brain research. Music Educators Journal, 87(2), 17-22.
  18. Chan, A. S., Ho, Y. C., & Cheung, M. C. (1998). Music training improves verbal memory. Nature, 396, 128. doi:10.1038/24075

Images

  1. http://en.wikiversity.org/wiki/File:Alternative.jpg
  2. http://en.wikiversity.org/wiki/File:Brain_image1.jpg
  3. http://en.wikiversity.org/wiki/File:Hdc_0000_0001_0_img0004.jpg
  4. http://en.wikiversity.org/wiki/File:A_C_D_notes.svg