Music Perception vs Speech Perception
Previously I have surmised that music perception is actually a hidden component of speech perception, and that music itself is a super-stimulus for this component of speech perception.
But what if music perception isn't a component of speech perception, but rather something that has evolved from a component of speech perception?
If music perception is not actually a component of speech perception, then music perception can only be the perception of music.
But we already know (or at least suspect) that music has no intrinsic meaning. In which case music perception is the perception of something that has no meaning. Which is kind of like perceiving something that isn't anything.
I have also previously surmised that music perception results in an altered state of mind, where the listener has increased emotional responses to hypothetical or fictional information.
If music perception is the perception of something that isn't actually anything that matters, then maybe the only purpose of music perception is to induce the altered state of mind that occurs as a result.
It may seem pointless to have a system which perceives a completely contrived construct in order to achieve a particular altered state of mind; however, there are certain consequences of requiring music as a prerequisite for achieving this state of mind, ie:
- The altered state can only be achieved temporarily.
- A certain amount of effort is required to achieve the altered state.
- The listener will always be aware that the altered state occurs as a result of listening to the music.
The Benefits of an Altered State of Mind
Why would the human mind benefit from this altered state of mind, ie one where emotional responses to hypothetical information are heightened?
And if it's such a good thing, why not be in that state of mind all the time?
Processing Hypothetical Information
One of the things that distinguishes humans from other animals is the extent to which we process hypothetical information, ie information about things that lie beyond our immediate experience.
This hypothetical information derives from various sources:
- Things that are reported to us, which we might or might not believe to be true, depending on our opinion of the reliability of the reporter.
- Our own internal thoughts.
- In some cases, information which is presented to us with the caveat that it is not true, ie fiction.
One thing that has been discovered about the representation of hypothetical information is that the brain represents it with activity of exactly the same neurons as it uses to represent real and immediate information.
For example, if I wish to imagine something being yellow, "yellow" neurons will be active, exactly the same neurons which are active when I actually see yellow. (To be precise, colour is encoded in those parts of the brain that perceive colour using population encoding, so there are no specific "yellow" neurons, as such, only neurons which are more likely or less likely to be active if yellow is being perceived. But the principle applies in the sense that the same set of neurons effectively represents either imaginary or "real" yellow.)
The only difference is that the neural activity for imagined yellowness is less than the neural activity for real yellowness.
Somehow, the brain manages to perform computations about imagined percepts, with those imagined percepts represented as weaker versions of the activity that would represent the real percepts, and somehow it still gets the right answer when performing those computations with weaker "imagined" neural inputs and outputs.
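The "same neurons, weaker activity" idea can be illustrated with a toy population code. This is purely a sketch, not a claim about actual cortical circuitry: it assumes (hypothetically) Gaussian tuning curves over hue, and models imagination as the same response pattern scaled by a gain below one. A simple population-vector decoder recovers the same hue from both the strong "real" pattern and the weak "imagined" pattern.

```python
import math

# Purely illustrative population code for hue. No single neuron codes
# "yellow"; the stimulus is represented by graded activity across the
# whole population, each unit having a preferred hue.

PREFERRED_HUES = [i * 30.0 for i in range(12)]  # preferred hue per unit, degrees
TUNING_WIDTH = 40.0                             # width of each tuning curve

def population_response(hue, gain=1.0):
    """Activity of each unit for a given hue, scaled by an overall gain.

    gain=1.0 stands for actually seeing the colour; a smaller gain
    stands for merely imagining it (same pattern, weaker activity).
    """
    responses = []
    for p in PREFERRED_HUES:
        d = abs(hue - p)
        d = min(d, 360.0 - d)  # circular distance in hue space
        responses.append(gain * math.exp(-d * d / (2 * TUNING_WIDTH ** 2)))
    return responses

def decode(activity):
    """Population-vector decode: activity-weighted circular mean of preferred hues."""
    x = sum(a * math.cos(math.radians(p)) for a, p in zip(activity, PREFERRED_HUES))
    y = sum(a * math.sin(math.radians(p)) for a, p in zip(activity, PREFERRED_HUES))
    return math.degrees(math.atan2(y, x)) % 360

real = population_response(60.0, gain=1.0)      # actually seeing a hue of 60 degrees
imagined = population_response(60.0, gain=0.3)  # imagining it: same pattern, weaker

print(decode(real), decode(imagined))  # both decode to the same hue (~60 degrees)
```

The point of the sketch is that downstream computation (here, the decoder) works identically on the real and imagined patterns; only the overall level of activity distinguishes them.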
So, what is different about emotion? Why is the default level of imagined emotion not enough?
My guess is that there is something about the emotional consequences of hypothetical information such that the emotion can only be processed correctly if it is at the same level as real emotion.
So on the one hand, there can be a benefit in increasing the level of neural activity which represents emotional response to hypothetical information.
On the other hand, the brain's representation of hypothetical information is distinguished from real information by the relative weakness of the corresponding neural activity.
So, if the representation of hypothetical emotions were increased to a higher level, for the sake of more correct processing, there would be a risk that the brain would become confused, and might act as if real emotions had actually occurred.
Music represents a possible compromise. Music allows emotional responses to hypothetical information to be increased, for the sake of better processing of those responses, but the increase is only temporary, and furthermore the listener is aware that those increased emotions occur only within the context of listening to music, so the possibility of confusion is reduced.
A Plausible Evolutionary History
Music perception, as I have just described it, appears to be a complex system of information processing.
How could such a system evolve?
The most plausible evolutionary account would be one where this system evolved, initially via a single mutation, from some other existing complex system, one which previously did something else. (Depending on the details of the evolutionary scenario, the precursor system might or might not still co-exist with the current system of music perception.)
To find such a plausible scenario, we have to look in more detail at how music, and particularly musicality, is perceived.
Based on my previous analyses (as published elsewhere on this website, and in my book), the primary criterion for musicality is constant patterns of activity and inactivity in cortical maps involved in speech perception.
This can be summarised as follows:
- If constant patterns of activity and inactivity occur in cortical maps involved in speech perception, then,
- Emotional responses to hypothetical information are increased.
Only one change would be required to such a system to convert it into an aspect of speech perception, which is to replace the "constant patterns of activity and inactivity" with "constant patterns of activity", or just plain "activity".
So, a hypothesised precursor to music perception is:
- If activity occurs in cortical maps involved in speech perception, then,
- Emotional responses to hypothetical information are increased.
So the intended "meaning" of such a system would be:
- If you are listening to someone speak,
- Increase emotional responses to hypothetical information.
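The difference between the two hypothesised systems comes down to their trigger conditions. A minimal, purely illustrative sketch, which models a cortical map's state over time as a list of boolean activity snapshots (one boolean per map unit), makes the single-change relationship explicit:

```python
# Illustrative only: a cortical map's state over time is modelled as a
# list of snapshots, each snapshot a tuple of booleans (one per map unit).

def precursor_trigger(snapshots):
    """Hypothesised precursor: fires whenever there is any activity at all,
    ie whenever someone is (in effect) speaking."""
    return any(any(frame) for frame in snapshots)

def musicality_trigger(snapshots):
    """Music perception: fires on a constant pattern of activity AND
    inactivity, ie the same units stay active and the same units stay
    inactive across all snapshots."""
    if not snapshots:
        return False
    first = snapshots[0]
    constant = all(frame == first for frame in snapshots)
    # The pattern must genuinely mix active and inactive units.
    return constant and any(first) and not all(first)

speech_like = [(True, False, True), (False, True, True), (True, True, False)]
music_like = [(True, False, True)] * 3

print(precursor_trigger(speech_like))   # True: any activity is enough
print(musicality_trigger(speech_like))  # False: the pattern keeps changing
print(musicality_trigger(music_like))   # True: constant activity/inactivity
```

In both cases the output of the system is the same (increased emotional responses to hypothetical information); only the condition that switches it on differs, which is what makes a single-mutation transition between the two at least conceivable.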
In such a case, the hypothetical information might be derived from the speech itself, but it might also include information from the internal thought processes of the listener (as they listen to the speaker speaking).
The assumption underlying this proposed evolutionary history is that, at some point during the evolution of human ancestors, it was useful for an individual to have heightened emotional responses to hypothetical information when listening to speech, and then, at some later time, it was useful for an individual to sometimes have heightened emotional responses to hypothetical information, independently of whether or not someone was speaking to that individual.
(One can also speculate further that a precursor to the precursor may have existed even before anything like modern human speech existed: a responsiveness to measured "activity" in cortical maps relating to the perception of conspecific communications, with an output which may or may not have been the heightening of emotional responses, but which involved the same system of neuron-to-astrocyte and astrocyte-to-neuron signalling.)
Astrocytes, Neurons and "Gliotransmission"
I have previously hypothesised that the perception of "constant patterns of activity and inactivity" might actually occur within glial cells, and in particular astrocytes, with the results of such perception being re-transmitted (somehow) into the neural paths of computation in the brain.
One reason to suppose the involvement of astrocytes is that one of their most important roles is "housekeeping": maintaining an effective operational environment for neurons. This necessarily involves responding to the presence of by-products of neural activity, and by one means or another taking up or removing those by-products, which would otherwise interfere with the neural operating environment if they were allowed to accumulate excessively. So, as a result of their housekeeping role, astrocytes are always "aware" of whether nearby neurons are active or inactive.
One of the recent developments in neuroscience is the discovery that glial cells are not just "housekeepers", but they also communicate information to and from neurons, with the word gliotransmission being coined to refer to communications from glial cells to neurons. The glial cells are not just receiving information from neurons such as "housekeeping needs to be done"; they are also sending information to neurons, altering how those neurons operate.
So the idea that music perception might involve glia-to-neuron communication (and in particular astrocyte-to-neuron communication) is not such an implausible idea.