# Algorithmic Music in the View of Information Theory

From studying English text, Shannon introduced a new concept, “H”: the entropy of a message, a measure of its information. This makes me think of music, which could be regarded as another kind of language. In this short article, I am going to talk about “algorithmic music”, an emerging field in which music is produced algorithmically on a computer.

1. Composing

The basic operating principle is easy to follow.

1.1 Deconstruction

Imagine we want to compose a piece of “Beethovenish” piano music. The first thing we need to do is break Beethoven’s 32 piano sonatas into small fragments, tag them, and let the computer “study” these works.
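The “deconstruction” step above can be sketched as building a table of note transitions from a corpus. This is a minimal illustration, not any particular system’s actual method; the toy note sequences stand in for the tagged Beethoven fragments:

```python
from collections import defaultdict

def deconstruct(melodies):
    """Break each melody into overlapping two-note fragments ("digrams")
    and count how often each transition occurs across the corpus."""
    transitions = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for current, following in zip(melody, melody[1:]):
            transitions[current][following] += 1
    return transitions

# A toy corpus standing in for the tagged fragments.
corpus = [["C", "E", "G", "C"], ["C", "E", "G", "E", "C"]]
table = deconstruct(corpus)
# In this corpus, C is always followed by E, while G may lead to C or E.
```

The resulting table is what the computer has “studied”: it records, for each note, which notes the corpus allows to follow it and how often.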

1.2 Recombinancy

As with digitized speech, the procedure by which a computer produces a new melody from its database is stochastic: neither fully deterministic nor purely random. Each note in the melody is determined not only by the genre of the work but also by the note right before it. Just as the digram “th” appears frequently in English text, in the language of music, notes that form consonant intervals often show up together, and a perfect cadence is expected at the end of a phrase. In this way, the task of recombining notes into a new melody is reduced to a series of “yes” or “no” questions. Given a note A, the computer first checks note B as a candidate for the second position. If the answer is “yes”, it moves on to choosing the third note; otherwise, it successively checks notes E, F, and G until it gets a “yes”. If note D is accepted for the second position but every candidate for the third position (say H and I) is rejected, the computer backtracks past D and tries a different note for the second position.
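The recombination step can be sketched as a simple Markov-style walk over a transition table like the one built during deconstruction. Rather than literal backtracking, this sketch only ever proposes notes the corpus has answered “yes” to, weighted by how often each transition occurred; the note names and counts are invented for illustration:

```python
import random

def generate(transitions, start, length, seed=0):
    """Walk the transition table: at each step, choose among notes that
    actually followed the current note in the corpus (a "yes" answer),
    weighted by how often each transition occurred."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # no "yes" answer exists: stop early
            break
        notes = list(options)
        weights = [options[n] for n in notes]
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

# Toy transition counts, as produced by the deconstruction step.
transitions = {"C": {"E": 2}, "E": {"G": 2, "C": 1}, "G": {"C": 1, "E": 1}}
print(generate(transitions, "C", 5, seed=1))
```

Because every step is drawn from observed transitions, the output is new yet statistically “Beethovenish”: deterministic rules constrain the choices, but chance decides among them.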

2. Listening to Music from a CD Player

There are two ways for us to listen to music: live and recorded. The communication system involved in listening to music from a CD player is more complex than that of a concert: a laser diode shines a laser onto the disc while a photodiode receives the reflected light and records the result as either “0” or “1”; the player then decodes these “0”s and “1”s and converts the resulting electrical signals into sound; and, hearing the sound, each of us may feel something different. According to Paolo Rocchi, “information always has two parts – sign and referent. Meaning is the association between the two”. A common sign-referent pairing in music is that works written in major keys carry an emotion of “bright / grand”. But when it comes to understanding a whole piece, different people have different sign-referent systems, and so different feelings result.
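The decoding stage of the CD player can be shown in miniature. A CD stores each audio sample as a 16-bit signed integer (44,100 samples per second per channel); the snippet below is a toy sketch of turning a few raw bytes back into amplitudes, not an account of a real player’s firmware:

```python
import struct

# Four toy samples packed as 16-bit signed little-endian integers,
# standing in for the "0"s and "1"s read off the disc.
raw = struct.pack("<4h", 0, 16384, -16384, 32767)

# The player's job, in miniature: decode the bit patterns back into
# integers, then scale them to the -1.0..1.0 range a DAC would output.
samples = struct.unpack("<4h", raw)
amplitudes = [s / 32768 for s in samples]
print(amplitudes)
```

The same bytes always decode to the same amplitudes; it is only at the final, human stage of the chain that the sign acquires a listener-dependent referent.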

Like the English language, music contains redundancy. In his paper, Weaver noted that redundancy is determined not by the free choice of the sender, but by the accepted statistical rules governing the use of the symbols in question. Leaving out some of the unimportant notes in a work will not change the meaning of the whole piece.
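Shannon’s H makes this notion of redundancy measurable. The sketch below computes the entropy of a note sequence under a deliberately simple model (each note drawn independently); the melodies are invented for illustration:

```python
from collections import Counter
from math import log2

def entropy(sequence):
    """Shannon's H: average information per symbol, in bits, assuming
    each symbol is drawn independently with its observed frequency."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A repetitive (redundant) melody carries less information per note
# than a varied one, so more of it can be left out without loss.
repetitive = ["C", "C", "C", "G", "C", "C", "C", "G"]
varied = ["C", "D", "E", "F", "G", "A", "B", "C"]
print(entropy(repetitive) < entropy(varied))  # True
```

A real melody also has sequential structure (each note depends on its predecessors), which lowers the true entropy further; the per-note count here is only an upper bound.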

Question:

When doing background research, I found that scholars from different fields give different answers to the question “what is information?”. This reminds me of what Saint Augustine said: “What then is time? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know.” Is information, like time, something that cannot be explained?

References

James Gleick (2011). The Information: A History, a Theory, a Flood. New York, NY: Pantheon.

Peter Denning and Tim Bell (2012). The Information Paradox. American Scientist.

Martin Irvine. Introduction to the Technical Theory of Information.

Warren Weaver (1953). Recent Contributions to the Mathematical Theory of Communication. A Review of General Semantics.