The field of music psychology got a huge shot in the arm in the 1990s with the introduction of brain-scanning technology. Psychologists had been writing seriously about music since the 1950s, but now, with machines such as MRI (magnetic resonance imaging) scanners - which show when different parts of the brain are being used - neuroscience was dragging music psychology along with it.
And yet, Stewart says, the two disciplines (cognitive psychology and neuroscience) remain very separate when it comes to music. Cognitive psychology is more interested in what happens, as opposed to where in the brain it happens. "I go to some music cognition conferences and I go to the music neuroscience ones and they're quite different groups of people to be honest. But, of course, I think the best stuff happens when you're testing a framework or approach from music cognition and then working out how it's implemented in the brain.
"I think you need a good reason to be doing brain imaging research. I'm always interested to know what question it's really addressing. I think we've gone past the time when it was good enough to just show that different brain areas were involved when listening to music because of course they are and everybody knows that really."
Stewart's involvement in music psychology was quite accidental. Following a master's in neuroscience at Oxford, she took a research position at UCL (University College London) working for Uta Frith, a German developmental psychologist made a Dame in 2012 for her work on autism and dyslexia. Stewart wanted to study how literacy changes the brain for her PhD, but finding test subjects proved difficult. So she used another form of symbolic notation - music.
She taught participants to play the piano to grade one level, scanning them (whilst they looked at musical notation) before and after the learning. As a result, Stewart was among the first researchers to demonstrate the plasticity of the brain - meaning that the brain changes its structure depending on how it is used - a landmark finding which would resonate throughout the field and beyond. Here was physical evidence that you could make yourself a smarter, better human; that sometimes we have more control over our biology than we realise.
"All this happened", says Stewart, "at a time when, internationally, music and the brain was a new hot topic. Traditional areas in cognitive neuroscience are language, attention, movement, numeracy, but no one had really been looking at music. People thought it was really quite an interesting domain that had been overlooked. It's not quite like language, it's not quite like vision, it's something else. And it started to grow."
In 2000, the Mariani Foundation was set up with the explicit purpose of bringing together scientists to further the study of music psychology. Stewart was invited to New York for the first conference, a meeting of minds which now takes place every three years in various locations.
After completing her PhD she began working on congenital amusia (commonly known as tone deafness) with an expert in auditory perception, Prof Tim Griffiths. This, in turn, led to a fellowship at Goldsmiths, University of London, in the area of neuroscience and music, for which she helped develop the master's programme, "Music, Mind and Brain".
One of the current students is Iris Mencke, who is investigating what happens when we listen to more experimental, abstract music.
"I want to find out what sort of emotions are evoked by this? Do people who enjoy this kind of music develop something like a pleasure for the abstract? Also, are we able to learn, or get used to, the dissonant and a-rhythmic style, just as we get used to classical tonal musical languages?"
Her ultimate aim is a better understanding of how we perceive the art and music of the 20th century.
Meeting of Minds
The ability to measure activity in the brain has not only led to medical advances but, as the technology becomes cheaper and more portable, composers have begun to incorporate it into the art itself. Stanford professors Chris Chafe and Josef Parvizi have converted the electrical spikes from the EEG of a seizure patient into sound, while 'Raster Plot', composed by Eduardo Miranda, professor of Computer Music at Plymouth University, UK, uses rhythms generated by a computer simulation of a network of neurones (each neurone corresponding to an instrument) to mimic the way the brain encodes information.
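The mapping Miranda describes is easy to sketch in miniature. The following is a toy illustration, not Miranda's actual system: it simulates a handful of leaky integrate-and-fire neurones and treats each spike as a note onset on an instrument assigned to that neurone, so the network's firing pattern becomes a rhythm. The instrument list, time constants and thresholds here are all invented for the example.

```python
import numpy as np

# Toy sketch (not Miranda's system): each simulated neurone is
# assigned one instrument, and every spike becomes a note onset.
rng = np.random.default_rng(seed=1)

N_NEURONES = 4
INSTRUMENTS = ["kick", "snare", "hi-hat", "woodblock"]  # hypothetical mapping

DT = 0.001        # simulation step (seconds)
T_TOTAL = 2.0     # simulated duration (seconds)
TAU = 0.1         # membrane time constant (seconds)
V_THRESH = 1.0    # firing threshold
V_RESET = 0.0     # potential after a spike

v = np.zeros(N_NEURONES)                    # membrane potentials
drive = rng.uniform(9.0, 13.0, N_NEURONES)  # constant input per neurone

events = []  # (time, instrument) note onsets
for step in range(int(T_TOTAL / DT)):
    t = step * DT
    noise = rng.normal(0.0, 5.0, N_NEURONES)
    # Leaky integration: the potential decays toward zero and is
    # pushed up by the input; a spike fires at the threshold.
    v += DT * (-v / TAU + drive + noise)
    fired = v >= V_THRESH
    for i in np.flatnonzero(fired):
        events.append((t, INSTRUMENTS[i]))
    v[fired] = V_RESET

for t, instrument in events[:12]:
    print(f"{t:6.3f}s  {instrument}")
```

In a real piece the onsets would be routed to a synthesiser or sampler rather than printed, but the principle is the same: the temporal structure of the music comes entirely from the network's spiking behaviour.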
Another of his compositions, 'Corpus Callosum', combined material from Beethoven's Seventh Symphony with fMRI data recorded from his own brain while he listened to the same symphony, replicating the relationship between the two sides of the human brain.
But it doesn’t stop there. In 2015 he became the first person to create a biocomputer system to make music by growing mould on a circuit board. As music is played, the mould responds by moving, thereby creating electrical activity which Miranda’s system then converts back into sound.