What happens in your brain when you listen to music?
- In any particular item of music, certain things happen, and certain other things don't happen.
- In your brain, corresponding to your perception of the things that happen, there are areas of neural activity.
- And, corresponding to the perception of things that don't happen, there are areas of neural inactivity.
- In the border zones between the areas of activity and inactivity, special musicality-detecting brain cells respond to the contrasts between activity and inactivity. (These "musicality detectors" may or may not be neurons; for more details, see my article Musical Genomics and the Musical Astrocyte Theory.)
- The musicality-detecting cells respond to the contrasts between activity and inactivity by releasing some particular musical neurotransmitter. (Some neurotransmitters are signalled directly from one neuron to another; others are propagated more indiscriminately, which is probably the case for the "musical" neurotransmitter.)
- Receptors for the musical neurotransmitter are found on neurons in those areas of the brain which process emotional feelings. This accounts for the effects that music has on emotion.
- The pleasurable effect of music is most likely an indirect consequence of the emotional effect, i.e. any circumstance that causes intense emotional feelings known not to be "real" also causes pleasure.
Particular examples of things that "happen" and "don't happen"
Pitch values in and not in a scale
Consider a melody which is constructed from notes in a particular scale. The scale consists of certain pitch values, and it is these pitch values which "happen". Other pitch values, i.e. those values in between the pitch values of the scale, "don't happen".
A simple theory of how pitch values are represented in a neural map (a map is just some area of the brain that represents one type of information in an organised fashion) is that they are represented by neural activity in a tonotopic map, where a map is tonotopic if there is a simple linear relationship between perceived pitch and the position of active neurons in the map. Thus if the note C# is in between C and D on the scale, its representation in a tonotopic map will be in between the representations of C and D in the same map.
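The idea of a tonotopic map can be sketched numerically. The toy model below is my own simplification, not a claim about actual neural circuitry: a pitch's position on the map is taken to be proportional to its log frequency, so that equal musical intervals correspond to equal distances.

```python
import math

def tonotopic_position(freq_hz, ref_hz=440.0):
    # Position on the map is linear in log-frequency, so equal
    # musical intervals map to equal distances on the map.
    return math.log2(freq_hz / ref_hz)

# Equal-temperament frequencies for C5, C#5 and D5
c5, cs5, d5 = 523.25, 554.37, 587.33

# C# lies between C and D on the map, just as it does in pitch
print(tonotopic_position(c5) < tonotopic_position(cs5) < tonotopic_position(d5))
# prints True
```

Because log2 is monotonic, the ordering of positions on the map always matches the ordering of pitches, which is all the tonotopic assumption requires.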
There is good evidence that there are one or more tonotopic maps representing pitch values in the brain. However, in relation to the perception of music, there are two reasons to believe that this simple representation of pitch does not account for musicality within the "things that happen"/"things that don't happen" theory:
- Firstly, perception of musicality is independent of absolute pitch, whereas each musicality-detecting cell in a tonotopic map would always be responding to contrasts between activity and inactivity at a position representing particular neighbouring pitch values.
- Secondly, the theory fails to explain tonality, i.e. the requirement that musical scales should be irregular and that there should be no repeating patterns within each octave.
In principle the first objection can be dealt with by assuming that musicality-detecting cells are very evenly distributed over the tonotopic map. However the second objection is more serious, since the theory fails to explain why music constructed from the perfectly regular 12-note chromatic scale isn't musical. (You might have heard of "twelve tone" and "atonal" music, and you might think that it is possible to construct music from such a scale. In that case I suggest you go down to your local lending library, borrow some CDs of atonal music, and discover for yourself that such "music" is not very musical.)
The explanation of a musical scale as pitch values which "happen" versus pitch values which "don't happen" can be rescued from these difficulties if we assume that there exists some neural map which assigns each pitch value in a scale to a position in accordance with its relationships to the other pitch values in the same scale. Within such a neural map, an irregular scale containing N notes will cause N distinct areas of activity, with N corresponding borders between those active areas and the inactive areas in the same map. A perfectly regular scale, by contrast, would give rise to only one distinct area of activity, since each note within the scale has the same set of relations to the other notes as every other note does, and as a result there would be just one corresponding border region.
(The more thoughtful reader will notice that I have not gone into any detail as to how such a representation is achieved, which is partly because I am not yet able to determine the full details. However it appears to be related to the process which determines the home chord for a scale, which in turn appears to be determined by the occurrence or non-occurrence of consonant relationships between each pair of notes in the scale.)
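The contrast between irregular and regular scales can be illustrated with a small calculation. In the sketch below (the representation is my own simplification, not a claim about how the neural map actually works), each note is characterised by its set of intervals to all notes in the scale; distinct interval sets stand in for distinct areas of activity in the hypothetical map.

```python
def interval_profile(note, scale):
    # The set of intervals (in semitones, mod 12) from this note
    # to every note in the scale, including itself.
    return frozenset((q - note) % 12 for q in scale)

def distinct_profiles(scale):
    # Number of distinct interval profiles, standing in for the
    # number of distinct areas of activity in the hypothetical map.
    return len({interval_profile(p, scale) for p in scale})

major = [0, 2, 4, 5, 7, 9, 11]   # C major scale (irregular)
chromatic = list(range(12))      # 12-note chromatic scale (perfectly regular)

print(distinct_profiles(major))      # prints 7: every note relates differently
print(distinct_profiles(chromatic))  # prints 1: every note is interchangeable
```

The irregular 7-note scale yields seven distinct profiles, the regular chromatic scale only one, matching the N-areas-versus-one-area argument above.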
It is often suggested that a musical scale represents a categorisation of pitch space. (Unfortunately I'm not too sure who can be originally credited with this suggestion.) For example, a seven note scale categorises all possible pitch values into seven categories. The "things that happen"/"things that don't happen" theory also implies that a scale categorises pitch values, but it tells us that a scale categorises all pitch values into just two categories: those pitch values which occur in a tune, and those pitch values which don't occur.
Beat frequencies in a rhythm
Consider a typical tune which is played in 4/4 time, i.e. 4 quarter notes to a bar, where individual quarter notes may also be divided into halves or quarters. Within the rhythm of such a tune, we can identify five separate frequencies of regular beat, i.e.:
- 1 beat per bar
- 2 beats per bar
- 4 beats per bar
- 8 beats per bar
- 16 beats per bar
Other regular beat frequencies, such as 7 beats per bar, or 2.39 beats per bar, do not happen. Thus we have another example of things which happen and things which don't happen. To account for how this gives rise to contrasts between activity and inactivity within a neural map, it is only necessary to assume that there exists a map which represents regular beat frequency according to a linear relationship between beat frequency and position (or more precisely, between log frequency and position).
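This can be sketched with a few lines of code. In the toy model below (the map and its units are my own assumptions, chosen only for illustration), position in the hypothetical beat-frequency map is linear in log frequency, so the five beat frequencies that "happen" land at evenly spaced positions, and frequencies that "don't happen" land in the gaps between them.

```python
import math

# Beat frequencies that "happen" in a typical 4/4 tune
active_beats = [1, 2, 4, 8, 16]   # beats per bar

def map_position(beats_per_bar):
    # Position in the hypothetical map is linear in log frequency,
    # so each doubling of the beat lies one unit further along.
    return math.log2(beats_per_bar)

active_positions = [map_position(b) for b in active_beats]
print(active_positions)   # prints [0.0, 1.0, 2.0, 3.0, 4.0]

# A beat frequency that "doesn't happen" lands between active positions:
print(map_position(7))    # about 2.81, between the 4- and 8-beat positions
```

On such a map, 7 beats per bar would activate a position strictly between two active zones, i.e. inside an otherwise inactive region.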
Notes in a chord
Within a given chord, the things that happen are the notes in the chord, and the things that don't happen are all the other notes (or all other possible pitch values). Unfortunately this simple explanation does not account for why chords tend to consist of notes which are related to each other by consonant pitch intervals. For example, the chord of C major contains the notes C, E and G, and each note is related to the others according to (approximately) simple frequency ratios, since the frequencies of C : E : G correspond to the ratios 4 : 5 : 6.
This observation suggests an alternative description of what is "happening" in a chord, namely the occurrence of notes consonantly related to each other. For example, each of the three notes in the chord CEG is consonantly related to each of the other two notes in the chord (and to itself, if we count a note as being consonantly related to itself), and no other note in the scale is related to all three, because every other note in the scale is consonantly related to at most two of the notes in the chord.
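This claim about the C major chord can be checked directly. In the sketch below I take the consonant intervals to be the unison, thirds, fourth, fifth and sixths (0, 3, 4, 5, 7, 8, 9 semitones); this particular set is my own assumption, since the discussion above doesn't enumerate them.

```python
# Assumed set of consonant intervals, in semitones mod 12:
# unison, minor/major third, perfect fourth, perfect fifth, minor/major sixth
CONSONANT = {0, 3, 4, 5, 7, 8, 9}

def consonance_count(note, chord):
    # How many notes of the chord is this note consonantly related to,
    # counting a note as consonantly related to itself?
    return sum((note - c) % 12 in CONSONANT for c in chord)

scale = [0, 2, 4, 5, 7, 9, 11]   # C major scale: C D E F G A B
chord = [0, 4, 7]                # C major triad: C E G

counts = {n: consonance_count(n, chord) for n in scale}
print(counts)
# Only C, E and G (0, 4, 7) score 3; every other scale note scores at most 2.
```

Under this assumed interval set, the chord notes are precisely the scale notes consonantly related to all three chord members, as claimed above.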
This explanation of chords is a little different from the explanation of scales and regular beat, because chords do change, even in simple musical items. However, chords change relatively slowly in most popular Western music: they do not usually change more than once per bar, and they often last several bars before changing.
Other aspects of music
There is more to music than just regular beats, scales and chords. Yet the ability of the "things that happen"/"things that don't happen" theory to account for these three aspects of music suggests that it might be the much-sought-after musical universal.
If this is the case, then it implies that all aspects of music correspond to things that happen and things that don't happen, and that to fully understand what makes music musical, we must understand how all musically relevant information is represented within neural maps. In particular, for each neural map we must understand what type of information is represented by activity in that map, and we must understand the correspondence between positions of neural activity and the information values perceived and processed by the map.
The purpose of musicality perception: music versus speech
Of the three musical aspects discussed in detail above, two have obvious analogues in speech perception: scales relate to melody, and regular beat relates to rhythm. Speech has both melody (AKA prosody) and rhythm, and the perception of speech melody and speech rhythm plays an important role in the perception of speech. However, the "things that happen"/"things that don't happen" contrast does not occur in normal speech. Speech melody is not constructed from scales, and pitch values are not held constant; instead, they tend to move continuously up and down. Speech rhythm does not contain any fixed set of regular beats. So the neural maps which respond with corresponding active and inactive zones when perceiving music will not produce these fixed (or relatively fixed) patterns of activity and inactivity in response to normal speech.
If musicality doesn't occur in speech, then what is the purpose of the perception of musicality? Is it just to perceive music? One possibility is that musicality does appear in normal speech, but at a much lower level. On this view, the real purpose of musicality perception is not to perceive the musicality of music, but to perceive the much more subtle musicality of normal speech. When a person listens to speech, contrasts between more active and less active areas will still occur, although the contrasts will be both smaller and less persistent than those which occur when listening to music.
What kind of information would this perception of the musicality of speech represent? Because it seems to relate to brain activity in general, independently of which particular kind of information is represented within different brain regions, we might guess that it has to do with the perception of some aspect of mental state. But why would the listener need to know something about their own mental state? Surely it would be much more useful to learn something about the mental state of the speaker. And if musicality perception is a perception of some aspect of speech, it must surely be perception of some aspect of the speaker's current state (mental or otherwise). We can bridge this gap between what appears to be perception of the listener's brain state and a more biologically useful perception of the speaker's brain state if we assume that patterns of activity in the listener's brain "echo" patterns of neural activity in the speaker's brain. Even if this "echoing" is very faint and noisy, it might be sufficient to give the listener information about the internal mental state of the speaker which assists the listener in interpreting the speech.
The emotional effect of music can be explained as a consequence of this "assisted interpretation": whatever it is that perceived musicality tells the listener about the mental state of the speaker, the implication is that the listener's emotional response to what the speaker says should be amplified if the listener is currently perceiving a high level of musicality in the speaker's speech.
Of course when I say "high", I do not mean as high a level of musicality as is found in music itself. As I have already stated, perceived musicality of speech as a measure of someone else's brain state is likely to be a very subtle percept (which must be combined from measurement across as many neural maps as possible), and the corresponding emotional effect on a listener's perception of normal speech will be correspondingly subtle.
The "harmonic" perception of speech
The third aspect of music analysed above (after scales and regular beat) was chords, and it is less obvious how the perception of chords relates to the perception of speech. In particular, speech melody consists of just one pitch value at any particular moment in time, whereas chords consist of simultaneous pitch values. Chords are generally regarded as an aspect of harmony, and "harmony" is defined to be the occurrence of simultaneous pitch values in music.
One clue which can help us to solve this mystery is that harmony can be perceived sequentially. For example, we can play the notes of a chord such as C major in sequence, i.e. C then E then G, and we can still perceive the chord. This suggests that the "harmonic" neural map responsible for perceiving the relationships between the constituent pitch values of a chord is not too fussy about whether those pitch values occur at the same time. This failure to distinguish sequential from simultaneous can be explained by assuming that the relevant neurons in the map are somewhat persistent in their response to a pitch value. The response to notes in a chord can then be explained by assuming that neurons in the harmonic map responding to consonantly related pitch values mutually reinforce each other, while neurons responding to pitch values which are not consonantly related inhibit each other. Finally, to account for the observation that chords tend to change at the beginning of a bar, we can assume that the persistent response of these neurons is prone to being extinguished by a strong beat.
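The persistence idea can be sketched as a toy model. The decay rate and threshold below are arbitrary assumptions chosen only to make the behaviour visible, and the sketch models only persistence and the strong-beat reset, not the mutual reinforcement and inhibition.

```python
class HarmonicMap:
    """Toy model of a "harmonic" neural map with persistent responses."""
    DECAY = 0.7   # fraction of activity surviving each new note (assumption)

    def __init__(self):
        self.activity = {}   # pitch class -> current activity level

    def hear(self, pitch_class):
        # Earlier responses persist, but decay a little with each new note
        for p in self.activity:
            self.activity[p] *= self.DECAY
        self.activity[pitch_class] = 1.0

    def strong_beat(self):
        # Persistent responses are extinguished by a strong beat
        self.activity.clear()

    def active(self, threshold=0.3):
        return {p for p, a in self.activity.items() if a >= threshold}

m = HarmonicMap()
for note in [0, 4, 7]:       # C, E, G played one after another
    m.hear(note)
print(sorted(m.active()))    # prints [0, 4, 7]: the chord is heard sequentially

m.strong_beat()
print(sorted(m.active()))    # prints []: a strong beat resets the map
```

Because each response decays rather than vanishing, the three sequential notes are all still above threshold by the time the last one sounds, which is exactly the "not too fussy about simultaneity" behaviour described above.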
Thus the real purpose of harmonic perception is to perceive the relationship between pitch values occurring at different times, which presumably helps the brain to perceive and identify the components of speech melody (and in a manner which is independent of absolute pitch). That the relevant neural map should happen to respond more strongly to simultaneous pitch values is just an accident of how that map operates, and is not a result of any design (whether deliberate or evolutionary) for the purpose of perceiving simultaneous pitch values.