The Big Question

The big question is: "What is Music?".

It seems an easy question to answer, because we are all so familiar with music. But being familiar with something is not the same thing as knowing what it is.

When I ask myself this question about music, I ask it as a scientific question. So I want an answer which relates music to our scientific understanding of the world. And I want an answer that makes predictions about music (and for bonus points, it should make predictions about other things besides music).

I don't know for sure that I've got any closer to answering this question yet. Over the years (and the decades), I've had some promising ideas. Some of those ideas still seem promising, and others now seem less promising.

I'm fairly sure that no one else has yet answered the question. The production of new music that people want to listen to continues to be more of an art than a science – which means that even the people making music don't really know what it is.

So, let me introduce myself. My name is Philip Dorrell, and I am a Music Science addict.

You can follow me on my Periscope channel, user @pjdorrell. On Periscope you can watch live broadcasts of me discussing my ideas about what music is, and as part of the audience you can write comments or ask questions in real time.

Current Hypotheses, Assumptions and Ideas about Music

Music Induces an Altered State of Mind.

The primary effect of music, and the primary biological function of music, is to induce an altered state of mind.

The Musical State of Mind Alters Our Responses to Daydreams, Especially Our Emotional Responses.

Listening to music creates a desire to partially disconnect from immediate reality (the "here and now"), and to think about things that are not in our immediate reality, i.e. to daydream.

When we daydream while listening to music, we experience the emotions in our daydreams more fully. This effect is most noticeable when we listen to the music that we most like.

Supporting Evidence: Many Daydreaming "Addicts" are Co-Addicted to Music.

So-called "Maladaptive Daydreamers" are a group of people who are self-reportedly addicted to daydreaming. For many of these MD-ers, their addiction is either triggered by music, or they are addicted to the combination of listening to music and daydreaming.

The Content of Music Has No Intrinsic Meaning. It is Only a Signal.

Music induces an altered state of mind, and the content of music is the signal which induces that state of mind. The content of music has no other purpose, and no other meaning.

Music is Very Similar to Language, and This Needs To Be Explained, But ...

The relationship between Music and Language extends across many individual aspects of both things, and it is implausible that the two phenomena are not related somehow.

However, we should not too eagerly jump to the conclusion that Music and Language necessarily share any particular purpose or function, such as "communication".

The Complexity of the Musical "Signal" Implies an Evolutionary Precursor.

If the content of music is a signal, and the effect of the signal is the only thing that matters, biologically, then the sheer complexity of musical content seems difficult to explain. I propose that the complexity of the musical "signal" can be explained by the hypothesis that the human response to music evolved from something else.

Given the observed similarity of music and language, that "something else" was most likely some component of the human response to language.

The Evolutionary Precursor: A Response to "Language-Like" Sounds.

Given the similarity of music to language, and the similarity of individual aspects of music to corresponding aspects of language, the response to music appears to be an altered version of an instinctive human response to the simultaneous occurrence of different aspects of "language-like" sounds.

The Evolutionary Precursor: Initial Language Acquisition.

One time in life when a person would most benefit from a pre-programmed instinctive response to "language-like" sounds is at the very beginning of language acquisition.

When it's just starting to learn language, an infant needs to somehow "know", or perhaps to "discover", that there actually is this thing which is language, which needs to be learned and processed and understood as a thing in itself separate from all the other non-language stimuli in the world.

Also, eventually, the infant needs to learn that the contents of language can refer to things not in immediate reality.

The Evolutionary Precursor: Language-Like Sounds Induce an Altered State of Mind.

We can account for the mind-altering effect of music, and the evolution of music from a component of the language acquisition instinct, if we can show that language acquisition involves a similar altered state of mind.

For example, the requirement to process and understand language as something which is both important, and somewhat distinct from all other aspects of reality, can be aided if the occurrence of "language-like" sounds triggers an altered state of mind where there is a strong "connection" to those sounds and at the same time a partial disconnection from other aspects of reality.

And the requirement to learn that language can refer to things beyond immediate reality can be aided by a tendency to think about things beyond immediate reality.

(Whether or not any of this actually happens during the early stages of language acquisition is something that hopefully can be investigated scientifically by scientists studying infant language acquisition.)

Compared to Language, Music is More Constrained, and the Production of Music is Economically and Socially More Competitive.

Human languages have identifiable syntax, and every fluent speaker of a language learns to freely generate and "perform" original and valid utterances according to that syntax.

By contrast, most people do not compose (i.e. "generate") music, or if they do, the results of their efforts are unlikely to be of interest to other people. Musical performance is also very difficult, and typically years of conscious practice are required to achieve a satisfactory result.

The intrinsic difficulties of musical composition and performance prevent music from being used as an effective means of communication, i.e. music is not a language.

In Music, Certain Things Don't Happen.

Spoken languages have various forms of identifiable "melody", but these melodies do not lie precisely on scales in the way that musical melodies do. In music, pitch values in between the notes of the scale do not occur.
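As a toy illustration of this constraint (my own sketch, not part of the original theory), the notes of a scale can be represented as a discrete set of pitch classes, so that any pitch falling between them is simply not a member of the set:

# A minimal sketch (my example, not from the text): the C major scale as a set
# of pitch classes (semitones above C). Musical pitches land on these values
# (in some octave); speech pitch varies continuously and mostly falls between them.

C_MAJOR_PITCH_CLASSES = {0, 2, 4, 5, 7, 9, 11}

def on_scale(pitch_semitones):
    """True if a pitch (in semitones above C, in any octave) lies on the scale."""
    return (pitch_semitones % 12) in C_MAJOR_PITCH_CLASSES

print(on_scale(16))    # True:  E, one octave up
print(on_scale(16.5))  # False: a pitch "between the notes" of the scale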

Similarly, the distribution of beat frequencies in speech "rhythm" is continuous, whereas in music the distribution is discrete, corresponding to the bar tempo, the note tempo and a finite number of tempos related to those two tempos by simple integer multiples. For example, in the case of a 4/4 time signature with 16th-note subdivision, the tempos correspond to period lengths of one bar, half a bar, one beat, half a beat and a quarter of a beat (i.e. whole-, half-, quarter-, eighth- and sixteenth-note periods), and no other tempos appear in the beat tempo spectrum.
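To make the discreteness of the beat tempo spectrum concrete, here is a minimal sketch (my own, assuming an illustrative tempo of 120 quarter-note beats per minute) that lists the finite set of beat periods expected for a 4/4 bar subdivided down to 16th notes:

# A sketch of the discrete beat-period spectrum described above, assuming
# 4/4 time, 16th-note subdivision, and 120 quarter-note beats per minute.

def beat_periods_seconds(beats_per_minute=120, beats_per_bar=4, subdivisions_per_beat=4):
    """Return the finite set of beat periods (in seconds) expected in the
    beat tempo spectrum: one bar, half a bar, one beat, half a beat, ..."""
    beat_period = 60.0 / beats_per_minute      # one quarter-note beat
    bar_period = beat_period * beats_per_bar   # one full bar
    periods = []
    period = bar_period
    # Halve the period repeatedly down to the smallest subdivision (16th note).
    while period >= beat_period / subdivisions_per_beat - 1e-9:
        periods.append(round(period, 4))
        period /= 2.0
    return periods

print(beat_periods_seconds())   # [2.0, 1.0, 0.5, 0.25, 0.125]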

Music Perception Requires Patterns of Inactivity In Cortical Maps that Respond to "Language-Like" Sounds.

Normal speech will generate activity in cortical maps in the listener's brain, and some of these maps are substantially specialised for perceiving speech. The similarity of music to language means that music will cause activity in the same cortical maps; however, the things that "don't happen" in music will result in constant regions of inactivity within those cortical maps.

This additional requirement for patterns of inactivity is one of the things that makes music more constrained than language.

The Constrainedness of Music Disables Music as a Language

Music has evolved from an aspect of the human response to language, but it serves a purpose distinct from that of language.

Because music is so much like language, there is a risk that the human infant will confuse music with language. To prevent this, it is necessary for music to be linguistically disabled.

The additional constraints which apply to music help to disable music as a language, and this disablement leaves the infant mind free to fully concentrate its language-learning efforts on actual language.

The Constrainedness of Music Limits Its Mind-Altering Effects

Music induces an altered state of mind. Presumably the altered state of mind is beneficial. But at the same time, presumably it is not beneficial to be in that altered state of mind all the time (otherwise it would just be the default state of mind, and there would be no need to have a special signal for entering that state).

The "difficulty" of composing and performing music effectively limits how much time any one listener can spend listening to music and being in the altered state. (Of course modern technology overcomes some of this difficulty, and, as a result, it may be that these days we all spend "too much" time listening to too much music, compared to what is biologically optimal.)

To learn more: These theories and hypotheses are all under active development. The best place to read about my latest ideas and hypotheses is my What is Music blog.

More Miscellaneous Ideas And Hypotheses about Music

The study of music is part of biology.

Music exists because people create it, perform it and listen to it. People are living organisms, and biology is the study of living organisms.

A complete scientific theory of music should pass the "Billion-Dollar Yacht Test".

A scientific theory of anything should make predictions, and to be convincing these predictions should be quite specific and detailed, and not accidentally correct for some reason unrelated to the theory in question. For a candidate theory of music, this is most easily achieved by using the theory to generate unlimited quantities of commercially successful music (and, optionally, using the associated profits to purchase an expensive yacht).

The human brain is an information processing system.

An information processing system has four basic components: input, output, calculation and storage. Applying this framework to the analysis of music, the music itself appears to be the input. What kind of information is the output, and what does it mean? How is it calculated?

(If, as outlined above, music is a somewhat arbitrary signal, then we cannot determine that the content of music has any intrinsic meaning, beyond the fact that it constitutes a signal.)

Dance is an aspect of music.

We usually think of dance as something which accompanies music, but here I propose something stronger: dance actually is music.

Music perception is analogous in some way to speech perception, but speech perception is not just the perception of sounds: it also includes visual perception of the speaker's movements such as facial expressions, body language and hand gestures. Dance, and especially our response to watching other people dance, can be identified as being analogous in the same way to this visual component of speech perception.

There are at least five and possibly six symmetries of music.

These are:

  • Pitch translation invariance
  • Time translation invariance
  • Time scaling invariance
  • Amplitude scaling invariance
  • Octave translation invariance
  • Pitch reflection invariance

Each of these symmetries represents an invariance of some aspect of the perceived quality of music under the corresponding set of transformations.

For each symmetry we can ask "Why?" and "How?".

The first four symmetries are functional symmetries in that they satisfy a requirement for invariance of perception, i.e. for each symmetry in this group our perception of speech should be invariant under the set of transformations that define the symmetry. For example, perception of speech melody is invariant under pitch translation so that people with different frequency ranges can speak the same speech melodies, and have those melodies perceived as being the same.
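As a concrete (and entirely schematic) illustration of the first four symmetries, the sketch below applies each set of transformations to a toy melody; the (pitch, onset, duration, amplitude) representation is my own assumption, chosen only to make the transformations explicit:

# Each note is a tuple (pitch_semitones, onset_seconds, duration_seconds, amplitude).
# The claim above is that perceived musical quality is unchanged under each
# of these four families of transformations.

def pitch_translate(notes, semitones):
    return [(p + semitones, t, d, a) for (p, t, d, a) in notes]

def time_translate(notes, seconds):
    return [(p, t + seconds, d, a) for (p, t, d, a) in notes]

def time_scale(notes, factor):
    return [(p, t * factor, d * factor, a) for (p, t, d, a) in notes]

def amplitude_scale(notes, factor):
    return [(p, t, d, a * factor) for (p, t, d, a) in notes]

melody = [(60, 0.0, 0.5, 1.0), (62, 0.5, 0.5, 1.0), (64, 1.0, 1.0, 1.0)]
transposed = pitch_translate(melody, 5)     # same melody, a fourth higher
later = time_translate(melody, 10.0)        # same melody, ten seconds later
slower = time_scale(melody, 2.0)            # same melody at half the tempo
quieter = amplitude_scale(melody, 0.5)      # same melody, played more softly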

The last two are implementation symmetries which play an internal role in the perception of music. (For example see the next item on octave translation invariance.)

For some of the symmetries the "how" question has an answer less trivial than one might initially assume. In particular it is not that straightforward to explain how the human brain achieves pitch-translation invariance and time-scaling invariance.
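One way to see what an answer to the "how" question has to deliver (this is a toy computational illustration, not a claim about the brain's actual mechanism): describe a melody by its successive pitch intervals and its successive duration ratios, and those descriptions are automatically pitch-translation invariant and time-scaling invariant respectively:

# My own illustration: interval and ratio representations are unchanged by
# transposition and by tempo change.

def intervals(pitches):
    """Successive pitch differences: invariant under pitch translation."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def duration_ratios(durations):
    """Successive duration ratios: invariant under time scaling."""
    return [b / a for a, b in zip(durations, durations[1:])]

pitches = [60, 62, 64, 62]
durations = [0.5, 0.5, 1.0, 2.0]

assert intervals(pitches) == intervals([p + 7 for p in pitches])
assert duration_ratios(durations) == duration_ratios([d * 1.5 for d in durations])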

Octave translation invariance is an implementation symmetry which facilitates the efficient subtraction of pitch values.

Octave translation invariance is the result of splitting the representation of pitch into a precise value modulo octaves and an imprecise absolute value. This split enables the more efficient representation and processing of pitch values, particularly when one pitch value must be "subtracted" from another to calculate interval size.
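A toy numeric sketch of this split (the semitone units and the specific encoding are my own assumptions, used only for illustration): keep a precise pitch class modulo the octave alongside a coarse octave number, and compute intervals by modular subtraction of the precise parts:

# Represent a pitch as (coarse octave, precise value modulo the octave),
# and subtract pitches modulo 12 semitones to get the interval class.

def split_pitch(semitones):
    """Split an absolute pitch into (coarse octave number, precise chroma)."""
    return (semitones // 12, semitones % 12)

def interval_class(pitch_a, pitch_b):
    """Interval between two pitches, modulo octaves, using only the precise parts."""
    _, chroma_a = split_pitch(pitch_a)
    _, chroma_b = split_pitch(pitch_b)
    return (chroma_b - chroma_a) % 12

print(interval_class(60, 67))   # 7: a perfect fifth
print(interval_class(60, 79))   # 7: a fifth plus an octave gives the same class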

Our perception of relative pitch must be calibrated somehow.

The requirement to calibrate relative pitch perception explains the importance of consonant intervals in music perception. Consonant intervals correspond to the intervals between the harmonic components of voiced sounds in human speech, and they provide a natural standard for calibrating the comparison of pitch intervals between different pairs of pitch values. Our ability to accurately calculate and compare pitch intervals enables the pitch translation invariant perception of speech melody.
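To illustrate the connection between harmonics and consonant intervals (my own arithmetic, using the standard conversion of 12 × log2 of a frequency ratio to get interval size in semitones):

# Intervals between successive harmonics of a voiced sound (1f, 2f, 3f, ...),
# expressed in semitones. They land close to the familiar consonant intervals.

import math

def semitones(ratio):
    return 12 * math.log2(ratio)

for n in range(1, 6):
    print(f"{n + 1}/{n}: {semitones((n + 1) / n):.2f} semitones")

# 2/1: 12.00 (octave), 3/2: 7.02 (perfect fifth), 4/3: 4.98 (perfect fourth),
# 5/4: 3.86 (major third), 6/5: 3.16 (minor third)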

The question, "What is Music?", and its many answers ...

Historical Interest Only: The Superstimulus Hypothesis

My initial "big idea" about music was the Superstimulus Hypothesis, that music is a super-stimulus for some aspect of speech perception.

This theory drove much of my original investigation, and resulted in the development of many specific hypotheses about specific aspects of music.

I wrote a whole book about it.

Yet, I no longer believe that hypothesis to be the correct answer, and it is effectively replaced by the hypothesis outlined above, that music is a signal which induces an altered state of mind, where, possibly among other things, the emotional response to daydreams is intensified.

But, in the course of attempting to fully develop the Superstimulus Hypothesis, I discovered and developed many associated ideas and hypotheses about music, some of which I consider to remain relevant.

[Cover images: What is Music? front and back covers – paperback, 324 pages, 6" by 9"]

(Note: this historical section has not been edited – it contains the original blurb for my book about the super-stimulus theory.)

What is Music?: Solving a Scientific Mystery is a book by Philip Dorrell which explains a new scientific theory about music: the super-stimulus theory.

The main idea of the theory is that music is a super-stimulus for the perception of musicality, where "musicality" is actually a perceived property of speech. "Musicality" refers to the property of music that determines how "good" it is, how strong an emotional effect it has, and how much we enjoy listening to it.

The theory implies that ordinary speech also has this property, in a manner which may vary as a person speaks. The musicality of speech is much more subtle than that of music, but it provides important information which the listener's brain processes (without conscious awareness of the processing), in order to derive some information about the internal mental state of the speaker. This information is applied to modulate the listener's emotional response to speech, and this accounts for the emotional effect of music.

What distinguishes the super-stimulus theory from all other serious attempts to explain music scientifically is that it starts from a simple assumption that music perception must be an information processing function, and this assumption results in quite specific explanations of how major aspects of music such as scales, regular beat and harmony are processed in the brain. It is the first theory to explain the perception of musical scales without a priori assuming the existence of musical scales. (The theory has to do this, because it is a theory of music perception as an aspect of speech perception, and musical scales do not occur in normal speech.)

The book is now available as a free download.

 Copyright © 2006-2015 Philip Dorrell