In this blog post we will look at generative music: what it is, and the different ways it has been used in interactive media such as video games and musical toys. We will then take a closer look at three musical toys that use different approaches to interactive generative music: Patatap, Plink and Tonfall Sequencer.
The term generative music can often be a little confusing, as different people define it differently. In broad terms, generative music is an algorithmic approach to the generation of music, which does not really clarify much. Wooller et al. (2005) illustrate the confusion around the term generative music, and survey some of the different meanings it has been given:
Linguistic/Structural: Music created using analytic theoretical constructs that are explicit enough to generate music (Loy and Abbott 1985; Cope 1991); inspired by generative grammars in language and music, where generative instead refers to mathematical recursion (Chomsky 1956; Lerdahl and Jackendoff 1983).
Interactive/Behavioral: Music resulting from a process with no discernable musical inputs, i.e., not transformational (Rowe 1991; Lippe 1997, p 34; Winkler 1998).
Creative/Procedural: Music resulting from processes set in motion by the composer, such as “In C” by Terry Riley and “It’s Gonna Rain” by Steve Reich (Eno 1996).
Biological/Emergent: “Non-repeatable music” (Biles 2002a) or non-deterministic music, such as wind chimes (Dorin 2001), as a sub-set of “Generative Art”.
(Wooller et al., 2005, p. 1)
Wooller et al. further argue that the function of algorithmic music can be seen on a continuum ranging from analytical, through transformational, to generative. These terms describe the “effect of the process upon the data applied to it” (2005, p. 8). They hold that an algorithm is generative when “the resulting data representation has more general musical predisposition than the input, and the actual size of the data is increased” (Wooller et al., 2005, p. 9). In other words, a generative algorithm outputs more information (music) than the initial information it was given as input.
Brian Eno, sometimes referred to as the person who coined the term ‘generative music’, defines it as a specific set of rules that will generate unpredictable music in real time (Eno, 1996). Eno places the creation of music on a continuum from what he calls classical music to generative music. In classical music an “entity” is specified in advance, then it is built (Eno, 1996). For example, a score can specify aspects such as pitch, dynamics, orchestration and playing technique, and musicians then follow these specifications in creating the music. Generative music is the opposite: the specified rules generate the music. That is, there is no set score to follow, but rather a set of rules that defines what can possibly happen within the music. This means one can never be entirely certain of what will happen, unless the rules are very specific – in which case you might say it is not truly generative.
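Eno’s distinction can be made concrete with a small sketch: the “composition” below is nothing but a rule set, and every run of the generator yields a different phrase that still obeys those rules. The rule values and data layout here are invented for illustration, not taken from Eno.

```python
import random

# A minimal sketch of Eno's idea: the composer specifies rules, not a
# score. All the concrete values below are illustrative assumptions.
RULES = {
    "pitches": ["C", "D", "E", "G", "A"],  # allowed pitch classes
    "durations": [0.5, 1.0, 2.0],          # allowed note lengths, in beats
    "rest_chance": 0.2,                    # probability of silence
    "length": 16,                          # events per phrase
}

def generate_phrase(rules, rng=random):
    """Each run yields a different phrase, but every event obeys the rules."""
    phrase = []
    for _ in range(rules["length"]):
        duration = rng.choice(rules["durations"])
        if rng.random() < rules["rest_chance"]:
            phrase.append(("rest", duration))
        else:
            phrase.append((rng.choice(rules["pitches"]), duration))
    return phrase

phrase = generate_phrase(RULES)
```

Two runs of `generate_phrase` will almost never produce the same phrase, yet both are fully determined by the same fixed rule set – the rules are pre-determined, the outcome is not.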
Applying the term classical to one end of the continuum may lead people to misunderstand what it actually means. “Classical music” will likely connote either an umbrella term for art music, the classical music of the late 18th and early 19th centuries, or classical forms (i.e. sonata form). This is unfortunate, as Eno does not seem to mean only classical music, but rather music where the outcome is more or less pre-determined by the set of instructions given. As such, terms like pre-determined or fixed could be better suited, but these might also give the wrong connotations: even though the outcome of generative music is not pre-determined, the rules that govern it are; and even though the instructions (i.e. the score) for a performance are fixed, you can never truly know the exact outcome unless it is a recording or completely computer-based music. In all, it might not be particularly fruitful to see music as more or less generative or the opposite, but if we are to do so, the best way of describing the opposite of generative might simply be non-generative music.
Generative, or procedural, music was common in early arcade games. Because there was not enough memory to store and play back pre-recorded sounds, both the sound effects and the music were usually synthesised in real time (Hamilton, 2015). Other early examples of generative music in games include Ballblazer (LucasFilm Games, 1984), whose composer, Peter Langston, used what he refers to as a riffology algorithm (Collins, 2009). The algorithm chooses among 32 eight-note riffs, or melody fragments, and makes decisions such as which riff to play next, whether to omit any notes from the riff, and, if so, whether to do this by prolonging another note or by inserting a rest. Otocky (ASCII Corp, 1987), with music by Toshio Iwai, is another example of generative music in early games (Collins, 2009). Iwai later worked on other games, or musical toys, that use generative music, such as SimTunes (EA, 1996) and later Electroplankton (Nintendo, 2005). Collins (2009) argues that these cannot really be seen as games, as they have no set objectives, rewards or in-game narratives; it is therefore more fitting to call them musical toys. In SimTunes the player paints a picture in which each colour represents a musical tone, then places Bugz, which represent instruments, on the picture; as the Bugz move over the different colours, music is played. In Electroplankton, the player interacts with different plankton on the screen, each of which outputs different music. Many musical toys have been developed, and in the following we will look at three that use different approaches to interactive generative music.
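The decision process Collins describes for the riffology algorithm – pick a riff, then decide note by note whether to keep, prolong or rest – can be sketched roughly as follows. The riff contents and probabilities here are invented; Langston’s actual implementation is not reproduced.

```python
import random

# Hypothetical sketch of the riffology idea described by Collins (2009).
# 32 eight-note riffs as MIDI note numbers; the contents are invented.
RIFFS = [[60 + (i + j) % 12 for j in range(8)] for i in range(32)]

def play_next_riff(rng=random):
    """Pick a riff, then decide for each note whether to keep it,
    prolong the previous note, or insert a rest (None)."""
    riff = rng.choice(RIFFS)
    events = []
    for note in riff:
        if rng.random() < 0.15:              # omit this note...
            if events and rng.random() < 0.5:
                events.append(events[-1])    # ...by prolonging the previous note
            else:
                events.append(None)          # ...or by inserting a rest
        else:
            events.append(note)
    return events
```

Chaining calls to `play_next_riff` yields an endless stream of melody that never repeats exactly, which is precisely what made the approach attractive when memory was too scarce for recorded music.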
Patatap is a musical toy available on Android and iOS as well as through an interactive website. Its creator, Jono Brandel, describes it as a “portable animation and sound kit”. In the website version, the user presses keys on the keyboard, which output sounds accompanied by visual shapes on the screen. The keys A to Z output sounds and shapes, while the spacebar changes the visual layout and the sounds. As there are quite a few keys to choose from, each with its own sound, it takes some time to learn the mapping well enough to create music that does not sound like a selection of random sounds. It might work better on a touch screen, but as this blog post focuses on websites rather than applications, this has not been tested. The system seems to have been quite successful, however, as it has been used as part of various installations and performances (Brandel, 2014).
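Patatap’s input model can be thought of as a lookup table: each letter key is bound to a (sound, shape) pair, and the spacebar swaps the whole palette at once. The sketch below is a hypothetical reconstruction; the real pairings and number of palettes in Patatap are not reproduced here.

```python
import string

# Hypothetical sketch of Patatap's input model. Three palettes are
# assumed for illustration; each maps a letter key to a (sound, shape)
# pair. The names "sound_p_k" / "shape_p_k" are placeholders.
PALETTES = [
    {k: (f"sound_{p}_{k}", f"shape_{p}_{k}") for k in string.ascii_lowercase}
    for p in range(3)
]

class Patatap:
    def __init__(self):
        self.palette = 0

    def press(self, key):
        if key == " ":  # spacebar: change both the visual layout and the sounds
            self.palette = (self.palette + 1) % len(PALETTES)
            return None
        return PALETTES[self.palette].get(key)  # (sound, shape), or None

toy = Patatap()
```

The difficulty described above follows directly from this structure: with 26 keys per palette, the player must internalise a sizeable mapping before their key presses stop sounding arbitrary.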
Plink is described on its website as “a super intuitive multiplayer experience”. It is a musical toy in which players from around the world can create music together. The design is very intuitive: there are 15 horizontal lines running down the screen, and clicking the mouse within any of them outputs a pitch. In the sidebar the user can select different colours, each corresponding to a different sound. The musical content is based on the pentatonic scale. This means that a fairly limited set of rules governs the output, and the musical possibilities are thus limited too. However, it also means that it is easier to create music that makes sense musically, whether or not the user has a musical background, and easier for multiple users to create something that sounds ‘good’ together. This would be more difficult if the musical content were less limited, for example by using the diatonic scale, since not every tone of the diatonic scale is in consonance with every other tone. The pentatonic scale also has a very organic feel to it and lends itself well to the creation of simple melodies: just by dragging the mouse up and down in Plink, something musical and melodic will be created.
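The row-to-pitch mapping this implies is simple to sketch: each of the 15 lines is assigned the next degree of a pentatonic scale, wrapping into a new octave every five rows. The specific scale, root note and register below are assumptions for illustration; Plink’s actual values are not documented here.

```python
# Hypothetical sketch of Plink's row-to-pitch mapping: 15 rows, each
# assigned a note from a pentatonic scale. Scale and root are assumed.
PENTATONIC = [0, 2, 4, 7, 9]  # major-pentatonic intervals, in semitones

def row_to_midi(row, root=57, scale=PENTATONIC):
    """Map a row index (0 = lowest line) to a MIDI note number."""
    octave, degree = divmod(row, len(scale))
    return root + 12 * octave + scale[degree]

notes = [row_to_midi(r) for r in range(15)]
```

Because every reachable note sits in the same pentatonic collection, any two clicks – by the same user or by strangers on opposite sides of the world – are guaranteed to be consonant, which is exactly why the restriction makes the multiplayer aspect work.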
Tonfall Sequencer is described as “an experimental particle based audio sequencer, created in Flash using Tonfall; the new open source AS3 audio engine” (Windle, 2010). Windle (2010) explains that Tonfall is designed to get people started with audio programming in Flash. On the website, the user is presented with a screen containing various neurons and receptors; connecting them produces a variety of musical pitches. At the bottom of the screen, three sliders let the user control the number of neurons and receptors on the screen, as well as their proximity, that is, how close neurons and receptors have to be in order to connect. There can be a total of 5 neurons and 24 receptors. Besides the sliders, there are two buttons: ‘wander’ and ‘spectrum’. The ‘wander’ option lets the user decide whether the neurons and receptors should move or remain static; the ‘spectrum’ option shows a simple spectrum analysis of the sounds created. In the Flash player the user may drag the neurons and receptors around to make them connect with each other, or simply turn on ‘wander’ and let the music create itself. Sounds in different pitch ranges are generated depending on the type of neuron the receptors connect to. Windle (2010) claims that a receptor reacts to connection with a neuron by playing a randomly assigned octave, “depending on which neuron causes it to fire”. This is not entirely accurate: there are three pitch ranges shared among the five neurons – two neurons are in a lower pitch range, two are in a middle pitch range and one is in a high pitch range. Which neurons share a pitch range is not colour-coded, but the pitch value is related to the size of the receptors.
Each time the sequencer is restarted, the colours of the neurons and receptors change, so in that sense the toy is random, but the sound output will always be based on the same tonality. Three pentatonic scales make up the musical material: the lowest-pitched neurons have the pentatonic scale starting from A (A–B–D–E–F#); the middle-pitched neurons have the pentatonic scale starting from E (E–F#–A–B–C#); and the highest-pitched neuron has the scale starting from B (B–C#–E–F#–G#). When these three pentatonic scales are combined, all the notes of an A major scale are present. It would therefore be more accurate to say that the colours of the neurons and receptors are assigned at random than that the sound itself is random.
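The observation above is easy to verify directly: taken together, the three pentatonic collections listed cover exactly the seven notes of the A major scale.

```python
# Check that the three pentatonic collections from the text combine
# into the full A major scale.
LOW  = {"A", "B", "D", "E", "F#"}    # lowest-pitched neurons
MID  = {"E", "F#", "A", "B", "C#"}   # middle-pitched neurons
HIGH = {"B", "C#", "E", "F#", "G#"}  # highest-pitched neuron

A_MAJOR = {"A", "B", "C#", "D", "E", "F#", "G#"}

combined = LOW | MID | HIGH
print(combined == A_MAJOR)  # → True
```

So however the colours are shuffled on restart, every possible combination of firing neurons stays inside a single A major tonality.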
Brandel, J. (2014) Patatap [Online]. Available from: <http://www.patatap.com/> [Accessed 30/11/15]
Brandel, J. (2014) “Patatap” [Online]. Jonobr1. Available from: <http://works.jonobr1.com/Patatap> [Accessed 01/12/15]
Collins, K. (2009) “An introduction to Procedural Music in Video Games”. Contemporary Music Review, Vol. 28, No.1. 5–15.
Dinahmoe Labs (n.d.) Plink [Online]. Dinahmoe Labs. Available from: <http://dinahmoelabs.com/plink> [Accessed 31/11/15]
Eno, B. (1996) “Generative music” [Online]. Motion Magazine. Available from <http://www.inmotionmagazine.com/eno1.html> [Accessed 01/12/15]
Hamilton, R. (2015) “Designing Next-Gen Academic Curricula for Game-Centric Procedural Audio and Music” [Online]. AES 56th International Conference: Audio for Games. London. Available from: <http://www.aes.org/tmpFiles/elib/20151125/17595.pdf> [Accessed: 10/11/15].
Klazien (2014) “Top 5: Interactive Generative Music Sites” [Online]. Submarine Channel. Available from: <http://www.submarinechannel.com/top5s/top-5-of-our-favourite-things-interactive-generative-music-sites-2/> [Accessed 30/11/15].
Windle, J. (2010) Tonfall Sequencer [Online]. Soulwire Art & Technology. Available from: <http://blog.soulwire.co.uk/wp-content/uploads/2010/10/tonfall-sequencer.swf> [Accessed 30/11/15]
Windle, J. (2010) “AS3 Particle Node Sequencer” [Online]. Soulwire Art & Technology. Available from: <http://blog.soulwire.co.uk/laboratory/flash/as3-tonfall-particle-node-sequencer> [Accessed 01/12/15]
Wooller, R., Brown, A. R., Miranda, E., Berry, R. & Diederich, J. (2005) “A framework for comparison of processes in algorithmic music systems” [Online]. Generative arts practice Sydney: Creativity and Cognition Studios Press. 109–124. Available from: <http://cmr.soc.plymouth.ac.uk/publications/gap05.pdf> [Accessed 12/11/15].