The Super-Stimulus Theory
The super-stimulus theory of music is a theory I have developed to explain what music is, how we respond to it and why we respond to it.
Very briefly, it states the following:
- Music is a super-stimulus for the perception of musicality.
- Musicality is a property of normal speech; it is caused by subtle alterations to the rhythm and melody of speech, and these alterations arise as a function of the current level of consciousness of the speaker.
- The listener's brain interprets the perceived musicality of speech by attributing greater significance to what the speaker says, and this alters the listener's emotional responses.
The super-stimulus theory contains a specific hypothesis about how "musicality" is perceived in the listener's brain, which is:
- Perceived musicality is a function of the occurrence of contrast between inactive and active regions within constant activity patterns occurring in cortical maps which respond to speech.
Given this hypothesis, music can be explained as a contrived form of speech which maximises perceived musicality, i.e. it causes constant activity patterns to occur within cortical maps responding to speech, and it maximises the contrast between active and inactive regions.
The theory does more than just explain music; it also explains the individual features of music: different cortical regions represent different components of information about speech in different ways, and for each such region, the requirement that perceived musicality be maximised corresponds to a specific feature of music.
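To make the hypothesis slightly more concrete, here is a toy numerical sketch (mine, for illustration only, and not part of the theory's formal apparatus) that scores an invented "cortical map" activity pattern by how constant it is over time and by how much contrast it shows between active and inactive units:

```python
# A toy numerical illustration of the constant-activity-pattern hypothesis.
# The map, the numbers and the scoring formula are all invented for
# illustration; they are not part of the theory's formal apparatus.

import numpy as np

def toy_musicality_score(activity):
    """activity[t, u] = activity of unit u of a hypothetical cortical map at
    time step t. The score is high when each unit's activity is roughly
    constant over time (a "constant activity pattern") and when the contrast
    between active and inactive units is large."""
    per_unit_mean = activity.mean(axis=0)       # time-averaged activity of each unit
    per_unit_var = activity.std(axis=0).mean()  # how much units fluctuate over time
    constancy = 1.0 / (1.0 + per_unit_var)      # 1.0 when perfectly constant
    contrast = per_unit_mean.max() - per_unit_mean.min()  # active vs inactive spread
    return constancy * contrast

# "Music-like" input: a few units strongly active, the rest silent, held steadily.
music_like = np.tile([1.0, 0.0, 1.0, 0.0, 0.0, 1.0], (20, 1))

# "Speech-like" input: every unit weakly and erratically active.
rng = np.random.default_rng(0)
speech_like = 0.4 + 0.2 * rng.random((20, 6))

print(toy_musicality_score(music_like))   # high: constant pattern, strong contrast
print(toy_musicality_score(speech_like))  # low: fluctuating, weak contrast
```

In this toy version, a pattern in which a few units are strongly and steadily active scores much higher than one in which all units are weakly and erratically active, which is the qualitative behaviour the hypothesis requires.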
Practical Application to Music Composition
In principle it would seem straightforward to apply this theory to the composition of new music:
1. List the cortical regions included in that portion of the cortex where musicality is perceived.
2. For each region, determine how it represents information about speech.
3. For each representation of information, determine how the perceived musicality can be maximised, i.e. what form speech needs to take in order to cause the occurrence of constant activity patterns with maximum contrast between active and inactive regions. Formulate each maximisation constraint as a musical rule.
4. Compose music that maximises the satisfaction of these rules (a toy version of this last step is sketched below).
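To make the last step concrete, here is a minimal sketch which assumes that the rules from step 3 can be written as functions that each score a candidate melody between 0 and 1. The two rules used here (scale membership and small melodic steps) are simple placeholders of my own, not the rules the theory actually calls for:

```python
# A minimal sketch of step 4, assuming the rules from step 3 can be written
# as scoring functions. The two rules below are simple placeholders, not the
# real (largely unknown) rules the theory calls for.

import random

MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def rule_in_scale(melody):
    """Fraction of notes whose pitch class lies on the scale."""
    return sum(1 for p in melody if p % 12 in MAJOR_SCALE) / len(melody)

def rule_small_steps(melody):
    """Fraction of melodic intervals no larger than two semitones."""
    steps = [abs(b - a) for a, b in zip(melody, melody[1:])]
    return sum(1 for s in steps if s <= 2) / len(steps)

RULES = [rule_in_scale, rule_small_steps]

def score(melody):
    return sum(rule(melody) for rule in RULES) / len(RULES)

def random_melody(length=8, low=60, high=72):
    return [random.randint(low, high) for _ in range(length)]

# Crude search: generate many candidates, keep the one that best satisfies the rules.
best = max((random_melody() for _ in range(5000)), key=score)
print(best, round(score(best), 3))
```

A random search like this is crude, but it shows the shape of the process: the rules do the judging, and composition becomes a search for patterns that satisfy them as fully as possible.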
It should be straightforward, but the difficulty lies with steps 1 and 2, i.e. the construction of mathematical models of how different cortical regions represent information about speech. All the models that I developed to support my theory were deduced heuristically from existing known features of music. For example, I explained the use of scales in music by developing a model for the "scale" cortical map, a cortical map which responds to the occurrence of melodies with pitch values taken from a scale. To be plausible, the model had to perform useful processing of non-musical melodies from normal speech. So the "scale" map had to perform processing that was relevant to the perception of normal speech melodies (which are not constructed from scales).
I constructed models like this for scales, chords, nested regular beat and repetition. These models provided a very strong plausibility argument for the theory. (One could argue that this plausibility argument is circular; however, I developed the models before I developed the constant activity pattern hypothesis.)
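As an illustration of the kind of processing such a "scale" map is required to do, here is a toy stand-in (a deliberate simplification of my own, not the actual model developed for the theory) which responds according to how well a melody's pitch values fit each of the twelve transpositions of the major scale, and which accepts continuous speech-like pitch values as readily as quantised musical ones:

```python
# A toy stand-in for the kind of processing a "scale" cortical map might do:
# it responds according to how well a melody's pitch values fit each candidate
# scale, and it accepts continuous (speech-like) pitches as well as quantised
# musical ones. This is a simplification for illustration, not the model
# developed for the theory itself.

MAJOR_DEGREES = [0, 2, 4, 5, 7, 9, 11]

def scale_map_response(melody_semitones):
    """Return one activation value per candidate scale (the 12 transpositions
    of the major scale). Pitches may be fractional, as in speech melody."""
    responses = []
    for tonic in range(12):
        degrees = [(tonic + d) % 12 for d in MAJOR_DEGREES]
        total = 0.0
        for pitch in melody_semitones:
            pc = pitch % 12
            # distance (in semitones, wrapping around the octave) to the nearest degree
            dist = min(min(abs(pc - d), 12 - abs(pc - d)) for d in degrees)
            total += max(0.0, 1.0 - dist)  # 1 when exactly on a degree, 0 beyond 1 semitone
        responses.append(total / len(melody_semitones))
    return responses

# A melody drawn from C major produces a strong peak (close to 1.0)...
print(max(scale_map_response([60, 62, 64, 65, 67, 69, 71, 72])))
# ...while a speech-like melody with continuously varying pitch peaks noticeably lower.
print(max(scale_map_response([60.3, 61.7, 63.1, 62.4, 64.8, 63.9])))
```

A melody drawn from a scale produces a sharply peaked response across the candidate scales, whereas a speech-like melody produces a lower, flatter one; a model of the real cortical map would also have to explain what use the brain makes of that difference when perceiving ordinary speech.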
The Three Categories of Musical Rule
However, the models I constructed do not help anyone to discover completely new features of music, because for any new feature the corresponding model would have to be constructed before music with that feature could be composed. To understand the importance of these cortical information representation models to the discovery of new music and new types of music, we need to consider the types of musical rules which (according to the super-stimulus theory) correspond to those models. In fact, if the super-stimulus theory is correct, then there are three major categories of musical rules:
1. Known rules which describe known music
2. Unknown rules which describe known music
3. Unknown rules which describe unknown music
Traditional "music theory" contains rules in category 1 only. Category 2 rules could in principle be deduced from the body of existing music, and they represent the "gap" between what the category 1 rules tell us and what we need to know to generate new music similar to existing music and of the same quality. To believe in the existence of category 2 rules it is not necessary to believe in the super-stimulus theory; it is only necessary to believe that the universe we live in is has a rational and comprehensible nature.
The existence of category 3 rules is somewhat more speculative. But if we can identify the historical discovery of even one aspect or feature of music which adds a new rule to the rules of music, then it becomes plausible that there could be even more waiting to be discovered.
Coming back to the cortical information representation models that I constructed while developing the super-stimulus theory, we can say that all of those models relate to category 1 rules. For the theory to make hard predictions which can be directly verified against music (as opposed to, for instance, verified against intrusive measurement of brain activity in someone listening to music), it is necessary to incorporate either category 2 or category 3 rules, together with the cortical information representation models corresponding to those rules.
Which, unfortunately, I have not managed to do so far.
Given this limitation in the current state of the theory, one might wonder if the super-stimulus theory can be of any practical use at all.
What the Super-Stimulus Theory Does Tell Us
I would argue that, despite its limitations, the theory does tell us some useful things, including the following:
- It is music perception that has evolved, not music. Therefore, all music is discovered by someone whose brain is already wired to respond to that music.
- Musicality derives from a general principle applied to different specific cortical representations of information about speech. One of the mysteries of music is the question of which feature of music is "essential": is it pitch, for example, or is it rhythm? The super-stimulus theory solves the problem by proposing a more generic "meta-feature" which generates all the specific features. The practical implication of this is that completely new features of music may be awaiting discovery, representing cortical regions not currently "exploited" by existing forms of music.
There are other more technical reasons why we should believe that there are radical new forms of music still waiting to be discovered:
- Musical "rules" are determined by the constraint to maximise perceived musicality within specific regions. In many cases different rules will contradict each other, i.e. it is not possible to satisfy all the rules, and each musical composition must "pick and choose" which rules it is going to satisfy. Even if the body of existing music is already exploiting all possible rules, it may still be possible to discover new ways of compromising, i.e. satisfying one rule slightly less in order to satisfy another rule much better.
- The super-stimulus theory suggests that our enjoyment of watching dance is partly due to the visual perception of musicality, corresponding to the occurrence of constant activity patterns in those cortical regions responsible for the visual perception of speech, i.e. when the listener watches facial expressions, gestures and body language. If this is true, then it is an example of a feature of "music" radically different from all other features of music, since it doesn't even involve sound.
- Playing melodies based on a scale is easy when you have a musical instrument constructed to play notes from the scale. You just play some notes up and down the scale, and you almost have a melody. But there may be more complex ways of satisfying pitch-based rules (which would relate to cortical regions close to the region that responds to scales), which have not been discovered yet due to their complexity and due to the fact there is no simple way to implement them on a musical instrument. There may be similar more complex variants of other musical features waiting to be discovered.
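To illustrate the first of these points, here is a toy sketch of two invented rules (placeholders of my own, not rules predicted by the theory) that cannot both be fully satisfied by a short melody, so that any candidate melody has to settle for a weighted compromise between them:

```python
# Two invented "rules" that cannot both be fully satisfied by a five-note
# melody, so any composition has to choose a compromise between them.
# The rules and weights are placeholders, not rules predicted by the theory.

def prefers_small_steps(melody):
    """1.0 when every interval is at most 2 semitones, smaller otherwise."""
    steps = [abs(b - a) for a, b in zip(melody, melody[1:])]
    return sum(1 for s in steps if s <= 2) / len(steps)

def prefers_wide_range(melody):
    """1.0 when the melody spans at least an octave, proportionally less otherwise."""
    return min(1.0, (max(melody) - min(melody)) / 12)

smooth = [60, 62, 64, 66, 68]   # all small steps, but spans only 8 semitones
leapy  = [60, 72, 60, 72, 60]   # spans an octave, but every interval is a leap

for weight_steps in (0.8, 0.2):
    weight_range = 1.0 - weight_steps
    for name, melody in (("smooth", smooth), ("leapy", leapy)):
        score = (weight_steps * prefers_small_steps(melody)
                 + weight_range * prefers_wide_range(melody))
        print(f"steps-weight={weight_steps}: {name} scores {score:.2f}")
# With a five-note melody, no candidate can score 1.0 on both rules at once:
# different weightings favour different compromises.
```

With one weighting the smooth melody wins, and with another the leapy one does; the point is simply that once rules conflict, "new music" can come from finding new and better compromises, not only from finding new rules.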
A final reason to believe that completely new types of music can be discovered comes from looking at the history of music and realising how many innovations have occurred very recently, i.e. "recently" when compared to the time-scale of human evolution.
For example, the electric guitar has given rise to a whole new genre of music which cannot be effectively played on any other instrument, because it depends on the ease of bending the notes and on all the amplification and distortion effects that can be applied to the output of an electric guitar. Whatever it is in our brains that responds to the more radical forms of electric guitar music, it was there all along, but it was never activated until that kind of music was invented and played.
How to Search: Program and Interact
What I hope I have shown so far is that there are good reasons to believe that there are new and significant methods of creating musicality which have not yet been discovered.
The super-stimulus theory suggests several possible approaches to finding new musical ideas:
- Try making new patterns of sound (this suggestion doesn't depend much on the details of the theory, but the theory tells us it is worth trying because there may exist types of music completely different to what we already know about); a minimal search loop of this kind is sketched after this list
- Try to invent new musical rules, and use them to derive new patterns of sound (most "rules" won't be musical, but the theory suggests that it is worth looking because there may exist musical rules either slightly different or a whole lot different to the ones we already know about)
- Create speculative models of how the brain represents aspects of information about sound and speech, from the models derive new musical rules, and from the rules derive new patterns of sound that should be musical
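As a minimal sketch of the first approach (and of the trial-and-error loop it implies), the following program proposes random candidate patterns and leaves the judgement of musicality to the human listener; the pattern format, rating scale and file name are all arbitrary choices of mine, and real sound playback would go where the print statement is:

```python
# A minimal human-in-the-loop search: the computer proposes candidate
# patterns and the human supplies the judgement of musicality that the
# computer cannot. Pattern format, rating scale and file name are arbitrary
# choices for illustration; real playback would replace the print statement.

import json
import random

def random_pattern():
    """An arbitrary toy pattern: 8 events, each a (pitch, duration) pair."""
    return [(random.randint(48, 84), random.choice([0.25, 0.5, 1.0]))
            for _ in range(8)]

def run_experiments(n, log_path="experiment_log.json"):
    results = []
    for i in range(n):
        pattern = random_pattern()
        print(f"experiment {i}: {pattern}")   # replace with actual playback
        rating = int(input("rate 0-9 (9 = sounds musical): "))
        results.append({"pattern": pattern, "rating": rating})
    with open(log_path, "w") as f:
        json.dump(results, f, indent=2)
    # keep anything promising for follow-up experiments
    return [r for r in results if r["rating"] >= 7]

if __name__ == "__main__":
    promising = run_experiments(5)
    print(f"{len(promising)} patterns worth exploring further")
```

Keeping a log of every attempt and its rating makes it easy to return later to anything that showed even a trace of promise.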
Whichever approach is taken, a lot of trial and error seems to be involved, which promises a lot of work with possibly only a small chance of success.
At this point I would like to suggest an analogy with a gold prospector, in particular a gold prospector who doesn't even know much geology, other than to know that there is gold out there somewhere. If I were giving advice to such a prospector, I would suggest the following:
- Find ways to efficiently search many areas very quickly for signs of gold.
- If you find small traces of gold in one place, try a few other places nearby.
- Don't spend too long looking in one place where there isn't any gold at all.
- Don't kid yourself you've found gold when really you haven't (i.e. beware of fool's gold).
To translate this gold prospecting metaphor back into practical advice on music experimentation, I would start by suggesting the use of a music development environment that makes it as easy and as quick as possible to try out many different ideas about sounds and patterns of sound.
The Music Development Environment
There may have been a time when a piano was the best choice for experimenting with music. But nowadays the computer is the most powerful tool available to the experimental musician. However, although computers can do a lot of things, there are still some things a person can do which a computer can't. One of the things computers can't do yet is make judgements about what is musical and what is not musical (indeed, finding out how computers could make those judgements would be equivalent to solving the mystery of music).
So the best system for processing music and for searching the large space of possible new music is one that makes the best use of the capabilities of both person and machine. The system must maximise the speed and flexibility of interaction between person and computer. An experimental system should make use of as many technologies for person-computer communication as possible. For example, in the direction from person to computer:
- Programming
- Physical controllers
- MIDI keyboard
- Connecting musical instruments electronically
- Recording sounds (from any source, including voice, instrument or other sounds)
And in the opposite direction:
- Sound output through speakers
- Screen display
Of all these interaction technologies, programming is the most important one to consider, because programming is the only way to communicate your own ideas to a computer: if you only use other people's programs, then mostly you will only be communicating those people's ideas to the computer.
There is a good list of music and audio programming languages on Wikipedia. But don't restrict yourself to special music programming languages. So-called "high-level" languages are those that are specialised for the expression of abstract concepts, and if you want a programming language that you can almost "think" in, then that's what high-level languages are for. Some of the most widely used high-level languages that support different styles of programming are Python, Ruby, OCaml, Haskell, Mozart, Prolog and Lisp. There remains the problem of connecting programs written in a general-purpose language to your hardware, but these days there are plenty of ways to connect different programming languages to each other and to hardware (especially if you use open-source languages and open-source operating systems).
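As a concrete (and deliberately minimal) example of the programming direction, here is a short Python sketch that synthesises an arbitrary pattern of notes straight to a WAV file using only the standard library, so it needs no special music software at all; the waveform, tuning and note pattern are just starting points for experimentation:

```python
# Synthesising a pattern of notes straight to a WAV file using only Python's
# standard library, so no special music software is needed. The waveform,
# tuning and note pattern are arbitrary choices for experimentation.

import math
import struct
import wave

SAMPLE_RATE = 44100

def note_samples(midi_pitch, seconds, volume=0.3):
    """Plain sine wave at the equal-tempered frequency of the given MIDI pitch."""
    freq = 440.0 * 2 ** ((midi_pitch - 69) / 12)
    n = int(SAMPLE_RATE * seconds)
    return [volume * math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def write_wav(path, samples):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        w.writeframes(frames)

# An arbitrary experimental pattern: (MIDI pitch, duration in seconds) pairs.
pattern = [(60, 0.25), (64, 0.25), (67, 0.25), (72, 0.5), (67, 0.25), (64, 0.5)]
samples = []
for pitch, duration in pattern:
    samples.extend(note_samples(pitch, duration))
write_wav("experiment.wav", samples)
print("wrote experiment.wav")
```

From a starting point like this, every aspect of the sound (the waveform, the tuning, the timing, the pattern itself) is something you can vary programmatically, which is exactly the kind of flexibility that ready-made music software tends to hide.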
Other Advice
Don't spend too much time perfecting the production of one piece of experimental music. With most new music, you can tell if it's any good as soon as you hear it. And even if you can't, you don't want to spend too long evaluating the result of each experiment. If 99.99% of musical experiments fail to produce a positive result (and this may be an optimistic estimate), then you need to do 10,000 experiments to find one genuine new musical idea. At a rate of 30 experiments a day, that would take 11 months.
A corollary to this is that you should not spend a lot of money on high production values. Your sound system needs to be good enough to reproduce normal music, and the sound card that comes with any home computer is probably more than good enough. When your experiments do produce a positive result, the musical benefit will not be at all subtle – it will be totally obvious that you have created something worth creating, and you won't need $5000 studio monitors to tell you that.
If you have money to spend, spend it on different ways to interact with your development system. If your task is to perform 10,000 different experiments, you don't need one really high-quality input keyboard; what you need is as many different input devices as possible, just to play around with.
And finally, when you are half-way through creating 10,000 experimental music items, you may get tired and you may be tempted to decide that you are going to like your own new music, even if it isn't really any good. If this is a problem, then you need the help of another person to provide secondary judgement. For example, find a friend who is willing to listen to one musical experiment a day, and each day send them the item that you think is best out of the musical experiments you did on that day. Your friend can help you in two different ways: firstly to tell you that you haven't yet found what you are looking for, and secondly to tell you to keep looking anyway.