Art in the Age of Machine Learning by Sofian Audry

Author: Sofian Audry
Language: eng
Format: epub
Tags: Machine learning; Deep learning; Artificial neural networks; Artificial intelligence; Connectionism; Digital art; New media art; Evolutionary art; Generative art; Robotic art; Artificial life; Creative AI; Computational creativity; Behavior aesthetics
Publisher: MIT Press


Music and Connectionism

From the mid-1980s to the mid-1990s, connectionist approaches to computation experienced a certain revival. During this era, music was undoubtedly the artistic field in which the application of neural networks advanced the most, especially in the analysis and generation of scores.4

Connectionist approaches to music composition followed earlier work that made use of mathematical processes called Markov chains. Markov chains are probabilistic systems that model transitions between states according to fixed probabilities. These transition probabilities can be crafted by hand or learned from a corpus of existing scores. One important property of Markov chains is that they represent transitions between sequences of events using a limited window of previous events. For example, if the note most recently played was a C, a Markov chain could give a 25 percent chance of playing a C again and a 75 percent chance of playing an E. Thus in Markov chains only a limited number of events from the recent past affect the future.
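
To make the mechanism concrete, the following minimal Python sketch implements a first-order Markov chain over notes. The transition table is invented for illustration and simply echoes the C-and-E example above; it is not drawn from any particular piece or corpus.

import random

# Illustrative first-order transition table: from each current note, the next
# note is drawn according to fixed probabilities (values invented for this
# sketch, echoing the C/E example in the text).
transitions = {
    "C": {"C": 0.25, "E": 0.75},
    "E": {"C": 0.50, "G": 0.50},
    "G": {"C": 1.00},
}

def next_note(current):
    # Sample the next note from the distribution attached to the current note.
    notes = list(transitions[current].keys())
    weights = list(transitions[current].values())
    return random.choices(notes, weights=weights)[0]

def generate(start, length):
    # Only the most recent note influences each new note (first-order chain).
    sequence = [start]
    while len(sequence) < length:
        sequence.append(next_note(sequence[-1]))
    return sequence

print(generate("C", 8))  # e.g. ['C', 'E', 'G', 'C', 'C', 'E', 'G', 'C']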

Composer Iannis Xenakis used Markov chains in 1959 for his Analogique A and Analogique B pieces as well as his composition for eighteen strings Syrmos. He explained his approach in his revolutionary book Formalized Music, describing the transitions from one note to the next in tables that he called screens, each of which configures the generative process used in a specific region of musical space (Xenakis, 1992).

Many other composers have used Markov processes in their work as well. Two notable examples are found in commercial music software released in 1987: Jam Factory by David Zicarelli and M by Zicarelli, Joel Chadabe, John Offenhartz, and Anthony Widdoff. An important characteristic of these programs was their ability to derive transition probabilities from a corpus of musical scores provided by the user. Another key feature was their ability to grasp relationships between notes using a time window of up to four notes. In other words, while Xenakis and many others were using first-order chains, in which only the last note influenced the next one, Jam Factory and M used fourth-order chains that selected the next note on the basis of the four previous ones. Although one might assume that fourth-order chains would lead to richer and less random compositions, in practice increasing the time window tended to make the software copy entire sequences of notes directly from the corpus, thus reducing originality (Ames, 1989; Baffioni et al., 1981).
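
By way of illustration, the sketch below shows one simple way an nth-order transition table can be estimated from a corpus and then used to generate new sequences. It is not the implementation used in Jam Factory or M, whose internals are not described here, and the toy corpus is invented; with such a small corpus and a window of four notes, the output reproduces a training sequence almost verbatim, which is precisely the copying problem noted above.

import random
from collections import Counter, defaultdict

def train(corpus, order):
    # Count how often each note follows each window of `order` previous notes.
    counts = defaultdict(Counter)
    for score in corpus:
        for i in range(len(score) - order):
            window = tuple(score[i:i + order])
            counts[window][score[i + order]] += 1
    return counts

def generate(counts, seed, length):
    # Extend the seed one note at a time, conditioning on the last `order` notes.
    sequence = list(seed)
    order = len(seed)
    while len(sequence) < length:
        window = tuple(sequence[-order:])
        if window not in counts:   # unseen context: stop rather than invent notes
            break
        notes, weights = zip(*counts[window].items())
        sequence.append(random.choices(notes, weights=weights)[0])
    return sequence

# Toy corpus of note sequences, invented for illustration only.
corpus = [["C", "E", "G", "E", "C", "E"], ["C", "E", "G", "G", "C", "E"]]
model = train(corpus, order=4)
print(generate(model, seed=("C", "E", "G", "E"), length=8))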

This drawback of larger reference windows is a well-known problem with Markov models trained on existing corpora of text or music: Markov chains often fail to grasp long-term structural dependencies between events. For example, a Markov chain trained on a database of blues music might be able to generate one or two measures but would have a hard time creating a coherent score with an appropriate beginning, middle, and ending. Because Markov chains rely only on local or given representations of events already present in the material they are trained on, they struggle to capture such long-range structure.


