“With so much drama in the LBC
It's kinda hard bein' Snoop D-O-double-G.”
You’re in the audience for the Super Bowl halftime show. Dr. Dre, Eminem, Mary J. Blige, Snoop Dogg and Kendrick Lamar are performing, and you’re grooving to the music. On the big screen you can see Snoop rapping as the music hits your ears. The crowd around you is roaring and cheering. You are lost in the moment, so you’re not really asking yourself: how do I know? How do I know that it’s Snoop Dogg’s voice I’m hearing? How am I certain that the crowd noise around me is being made by my fellow fans? You don’t have to answer, because your brain is putting it together for you. Snoop’s lips are moving on the big screen, and they seem to match the lyrics you’re hearing. The crowd noise is coming from behind, in front, and all around you, and your brain sorts it out so you know it’s thousands of people shouting in thousands of different ways, and you need not pay attention to any one voice in the crowd.
How your brain performs these tasks is the realm of cognitive science, studied by researchers like Dr. Jonathan Wilbiks, an associate professor of psychology at the University of New Brunswick Saint John. Dr. Wilbiks is the Chair of the Brain and Cognitive Science Section at the CPA, and studies audio-visual integration and multisensory processing – how our brain takes various sensory inputs and makes an implicit decision as to whether they came from the same source. (Your lips are moving, and I hear singing – it is likely your voice I hear.)
Recently, some of Dr. Wilbiks’ work has focused on perception and musical ability. His lab found that musicians perform much better than non-musicians at tasks where the audio and visual stimuli are congruent – that is, the image matches the sound and appears at the same time. But musicians perform worse than non-musicians when the audio and visual stimuli are incongruent. People who have been trained in music rely, more than those who have not, on auditory information to help them make sense of what they see.
When Angela Hewitt sits down at a piano and places the sheet music for the Goldberg Variations in front of her, it all makes perfect sense. Each note on the paper indicates a note she must play on the piano, and the two things are congruent – this is something she has done her entire life, and it’s easier for her than it is for…well, everyone else in the world. But if, instead of sheet music, she were presented with a cuneiform tablet bearing musical notation, she might be completely lost – in fact, someone with no ability to read music at all might perform the task better than she could. (The first musical notation ever discovered, on a cuneiform tablet from what is now Iraq, dates to about 1400 BC.)
In the years following the Second World War, the big trend in psychology was behaviourism: you put a rat in a box and expose it to a stimulus, then you measure how it reacts and how those reactions change over time with repeated exposure to that same stimulus. Researchers didn’t pay much attention to what was actually happening in the brain. Dr. Wilbiks says this was because science is all about observable fact, and at the time there was no way to observe what was happening in the brain. This began to change in the late 60s, when German-American psychologist Ulric Neisser published Cognitive Psychology, which argued that a person’s mental processes could, in fact, be measured and analyzed. Dr. Wilbiks says,
“The cognitive researchers who showed up in the 60s said ‘there’s a whole lot happening in the brain’. Sure, we don’t know exactly what’s happening there – but something is, so to ignore it completely is a mistake. So they would hypothesize about what might be happening, and try to measure it using secondary observations. A simple example would be something like a reaction time task. I give you a button and tell you to push it every time a light flashes on the screen in front of you. People are generally pretty fast at that – maybe 200 milliseconds. But if I give you two buttons and tell you to press the left one when a green light flashes and the right one when a red light flashes, it will take longer. Maybe 400 milliseconds. You can’t see what’s happening in the brain, but you can surmise that if task one takes 200 milliseconds, and task two takes 400 milliseconds, that tells us very basically that processing information in the brain takes time.”
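To make the arithmetic in that example concrete, here is a toy sketch of the two tasks – a simple reaction-time trial and a two-choice trial. It is purely illustrative: a few lines of Python with printed words standing in for lights and key presses standing in for buttons, not anything from Dr. Wilbiks’ lab, where timing would be done with calibrated equipment.

import random
import time

def simple_trial():
    """Simple reaction time: respond as soon as the signal appears."""
    time.sleep(random.uniform(1.0, 3.0))   # unpredictable delay before the "light"
    start = time.monotonic()
    input("GO! (press Enter) ")
    return time.monotonic() - start

def choice_trial():
    """Choice reaction time: pick the response that matches the signal."""
    time.sleep(random.uniform(1.0, 3.0))
    colour = random.choice(["GREEN", "RED"])
    start = time.monotonic()
    answer = input(colour + "! (type g or r, then Enter) ").strip().lower()
    return time.monotonic() - start, answer == colour[0].lower()

simple_s = simple_trial()
choice_s, correct = choice_trial()
# The choice trial reliably takes longer; that extra time is the crude
# behavioural evidence that processing information in the brain takes time.
print(f"simple: {simple_s*1000:.0f} ms   choice: {choice_s*1000:.0f} ms "
      f"({'correct' if correct else 'wrong'})")

Run it a few times and the pattern from the quote shows up: the single-response trial comes back consistently faster than the two-choice one (keyboard timing is sloppy, but the gap survives).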
Over time, of course, things moved way past red-light-green-light as experiments became more sophisticated and technology allowed us to actually see what is happening inside our heads. Electroencephalography (EEG) lets us record the electrical voltage created by neural impulses in the brain. fMRI (functional magnetic resonance imaging) machines allow scientists to observe and measure changes in blood oxygenation, which indicate differing levels of brain activity. We can now tell which areas of the brain are active and when, and then correlate that with the behavioural information we see.
The field of the brain and cognitive sciences is vast, and its effects show up in our everyday lives in myriad ways. Right now, there is a ton of stuff going on where you are. You might be sitting in a chair. There is other furniture in the room, maybe decorations on the wall. There is traffic going by outside, maybe the dishwasher is running in the background and someone is watching TV in another room, all while you’re reading this article. You can’t possibly pay attention to all of it at once, so your brain filters those external stimuli before you even perceive them, keeping you from being overloaded.
Your brain decides, for you, what is consequential. That traffic you hear outside? It can be ignored because it doesn’t present a threat. As Dr. Wilbiks says, cars rarely crash into houses, so while sitting in your house you are free to be unconcerned about the sound of cars. But when you’re walking down the street, the sound of cars is very important to you – if you hear a car accelerating as you enter a crosswalk, you should certainly turn your head toward that sound to see what’s happening, because now there is a real possibility of danger.
The brain creates these shortcuts to get maximum efficiency with minimum effort, something called “cognitive economy”. Because we use these shortcuts – processing only the most pertinent information and taking only the most important cues – we are susceptible to “nudges”: small features of our surroundings, designed with cognitive science in mind, that steer us toward behaving a certain way. One such nudge makes us safer drivers. Dr. Wilbiks gives an example:
“You’re on the road in your car, and you know you’re driving on a small residential street. You know the speed limit is 40 km/h, so you’re driving rather slowly, and you’re paying attention – looking for kids running out into the street, a dog running loose, a basketball rolling toward the road. If you’re driving on the highway, you’re paying attention to many other things, but you’re not looking for an errant basketball. Where we can use cognitive understanding and nudges is in a situation where, for example, people tend to be driving too fast on a residential street. We can do things to slow them down. You’ll sometimes see little islands, with a tree on them or something, that make the road a tiny bit narrower. It doesn’t get in the way of traffic, but the average person wouldn’t feel comfortable driving through that narrow gap at a speed greater than 40 km/h.”
The one thing my brain focuses on, whether I’m in my car or working on my computer, is music. Wherever it’s playing – in the background, or from my neighbour’s garage way down the street – I can’t ignore it. Dr. Wilbiks says one of his favourite classes to teach is The Psychology of Music, because it encompasses so many aspects of psychology in general, not just cognitive science. He says,
“I usually scare students in the first few lectures because we start out with physics. The physics of sound, and sound waves, and how that works. Then it’s about how the vibrations get into your ear, how that translates up into your brain and how you perceive it. Then we get into developmental psychology – how do we develop our musical ability, the close parallels between musical development and linguistic development. We talk about treatment – there’s some really cool work being done using music (Melodic Intonation Therapy) to help people who have experienced a stroke recover some linguistic ability.”
We all have two hemispheres in our brain, and there is a region in the right hemisphere where we process music. A region in the left hemisphere mirrors it almost exactly, and that is where we process language. When people who have had a stroke sing the words they are trying to speak, they can, over time, train the other hemisphere of their brains to process language again, eventually regaining up to 70-80% of their speaking ability. People living with dementia are often able to connect with music in a way they can’t with language, as music can trigger memories that might otherwise be inaccessible.
Perhaps you’ve seen the meme going around that says, ‘my ability to remember song lyrics from the 80s far exceeds my ability to remember why I walked into the kitchen’. This might actually be true: music from our youth remains a powerful trigger for memories, which is why dementia support groups often sing songs from members’ younger years – the Beatles, the Beach Boys, and traditional folk songs from the countries where they were raised. In a few decades, things might be a little different – in 2054, those support groups might be singing ‘Gin and Juice’ by Snoop Dogg. Will rap songs trigger the same kinds of memories and enhance language abilities in the same way? Only time will tell – but when we do have the answer to that question, it will come from cognitive psychologists and others in the field.