Listening, as opposed to hearing, is an activity which requires focus, attention, and concentration.
Hearing is the physiological process of perceiving sound through our ears: what happens when our auditory system translates acoustic energy into neural signals. Listening, on the other hand, happens when our brain focuses on a perceived sound to derive meaning from it. Luckily, our brain can actively listen to only a limited number of sounds at any given time, sparing us from being constantly overwhelmed by the enormous amount of unnecessary noise and background sound produced around us.
This is what is called the Cocktail Party Effect: the automatic filtering out of extraneous sounds so that we can focus on the important ones moment by moment, directing our attention to the critical sounds we need to be aware of at that particular instant (be it a conversation in a pub or a bear in a forest).
Here is an interesting video by Mark Mitton that shows the Cocktail Party Effect in action:
For music composers and sound designers, this is great news: we can take advantage of this phenomenon to direct the attention of our audience towards the specific sounds and events fundamental to the narrative of the scene (moment or act).
So how can we categorize various sounds and music to coherently direct the audience's attention?
In a recent article on narrative sound and music, we talked about the relationship between music, sound, and our perception of the natural world, and briefly mentioned Michel Chion's modes of listening as an important background framework for understanding how we can superimpose qualitative features onto sound and music.
So what exactly are these modes of listening? And why are they important (and useful) to content creators, sound designers, and composers?
Chion is a French music theorist and experimental composer. In his Audio-Vision: Sound on Screen (first published in 1990 and translated into English in 1994), widely considered in the industry to be the definitive book on the relationship between sound and image, he elaborates on the idea that we experience the activity of listening in three distinct modes:
1 - Causal Listening
Happens when we hear a sound and try to locate its source. Causal listening reinforces the concept of cause and effect and, as Beauchamp writes in Designing Sound for Animation (2013), its practical role can be encapsulated in the phrase "we don't have to see everything we hear, but we need to hear most of what we see".
2 - Semantic Listening
Refers to a type of listening connected to a code or language. In this mode of listening, sounds are embedded with logical and/or grammatical rules that give them a direct meaning. Spoken language, for example, is a type of semantic sound (whether we understand it or not), as is Morse code. When a sound is semantic, it carries a message to be interpreted (a small code sketch after this list makes the idea concrete).
3 - Reduced Listening
Is the act of focusing on the intrinsic traits of a sound itself, such as pitch, rhythm, tempo, and timbre, independently of its cause or meaning. As mentioned in this article about sound and narrative, reduced listening becomes extremely important when the creation of a new meaning is desired: qualities of a particular sound can be paired with objects not necessarily related to that sound to generate these new meanings.
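To make the "code" idea of semantic listening concrete: Morse code is a purely semantic sound, since a rhythmic pattern maps to a message through explicit rules. Here is a minimal Python sketch of that rule-based decoding (a toy illustration, not anything from Chion's text):

```python
# Semantic listening in miniature: Morse code gives rhythmic sound
# patterns a direct meaning through an explicit rule table.
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode(signal: str) -> str:
    """Decode a Morse signal: letters separated by spaces, words by ' / '."""
    words = signal.strip().split(" / ")
    return " ".join("".join(MORSE[letter] for letter in word.split())
                    for word in words)

print(decode("... --- ..."))  # "listening" with the right rules -> SOS
```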
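Reduced listening, in turn, has a loose computational analogue in audio feature extraction. The sketch below uses the librosa Python library to pull out common proxies for pitch, tempo, and timbre; the file name is a placeholder, and the chosen features are illustrative stand-ins rather than Chion's own categories:

```python
# A rough computational analogue of reduced listening: describing a sound
# by its intrinsic traits (pitch, rhythm/tempo, timbre) rather than by its
# source or meaning. "clip.wav" is a hypothetical file name.
import numpy as np
import librosa

y, sr = librosa.load("clip.wav")

# Pitch: per-frame fundamental-frequency estimate (pYIN algorithm).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7")
)
pitch = np.nanmean(f0)  # average detected pitch in Hz (NaN frames = unvoiced)

# Rhythm/tempo: global beats-per-minute estimate.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

# Timbre proxy: spectral centroid, usually heard as "brightness".
brightness = librosa.feature.spectral_centroid(y=y, sr=sr).mean()

print(f"pitch ~ {pitch:.0f} Hz, tempo ~ {float(tempo):.0f} BPM, "
      f"brightness ~ {brightness:.0f} Hz")
```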
A great example of how the modes of listening can be applied in practice can be found in animation.
Animation is an interesting area of filmmaking because everything you see and hear on screen is crafted by a team of animators and sound designers, and the feeling of immersion is often provided by sound and music, which add a sense of depth and realism to the images shown.
Often in animation, music directly follows and scores the movements and events happening on screen, a practice known as Mickey Mousing, since early cartoons and animated films used music to provide the sound effects of the production.
For example, when a quirky character is walking, a cliché has always been to follow the character's steps with a tuba playing intervals of fifths.
This is a great example of music providing both a causal sense (the character appears to be producing the tuba sounds) and a reduced one (the tuba, a musical instrument from the brass family, is used purely for its sonic qualities to provide the sound of the steps).
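For the curious, a (just) perfect fifth is simply a 3:2 frequency ratio, so the cliché is easy to mock up. Below is a minimal, purely illustrative Python sketch; the note choice, envelope, and crude "brassy" timbre are all assumptions, not a recipe from any actual score:

```python
# A toy mock-up of the "tuba fifths" cliché: alternating low notes a just
# perfect fifth apart (3:2 frequency ratio), one note per footstep.
import numpy as np
from scipy.io import wavfile

SR = 44100
ROOT = 87.31            # F2, a plausible tuba register (assumption)
FIFTH = ROOT * 3 / 2    # a just fifth above the root, close to C3

def step_note(freq, dur=0.25):
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    envelope = np.exp(-6 * t)  # fast decay: one short "oom" per step
    # A few odd harmonics for a crude, vaguely brassy timbre.
    tone = sum(np.sin(2 * np.pi * freq * k * t) / k for k in (1, 3, 5))
    return envelope * tone

# Alternate root and fifth for eight footsteps, then write to disk.
steps = np.concatenate([step_note(ROOT if i % 2 == 0 else FIFTH)
                        for i in range(8)])
steps = (steps / np.abs(steps).max() * 32767).astype(np.int16)
wavfile.write("tuba_steps.wav", SR, steps)
```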
Here's a short introduction to Mickey Mousing by The Musicologist:
An amazing example of semantic music can be found in the video game Don't Starve, where all the characters are voiced by musical instruments playing strange phrases meant to emulate a language.
Even though we are not actually supposed to be able to decode the language per se, we perceive that it has meaning: the characters speak according to a set of rules that define their language, even if we do not know those rules. An oboe phrase, for instance, has a semantic charge; when we listen to it, we understand that it means something, even though we still need subtitles to decode that meaning.
Of course, theorists have developed other modes of listening and other categories. Nevertheless, these examples show how Chion's modes of listening provide a useful framework in which to categorize and define how sound and music affect the visual experience of the target audience. When conceptualizing the soundtrack for any audio-visual media, weighing the importance of each aural event in the narrative through these modes of listening can help us craft a structured soundtrack that doesn't clutter the audience's attention, but instead helps direct their experience and focus.