Artificial Intelligence And Music: The State Of The Art


In 2019, the film composer Lucas Cantor completed Franz Schubert’s Symphony No. 8 (“Unfinished”), dating from 1822, using artificial intelligence, a technology that uses computer algorithms to reproduce cognitive skills. Last year, “Beethoven X – The AI Project,” piloted by a group of music historians, musicologists, composers and computer scientists, created Beethoven’s Symphony No. 10 from fragmentary sketches left by the composer, which had been partially assembled by Barry Cooper in 1988. youtu.be/RESb0QVkLcM

Developments in artificial intelligence in recent years have greatly advanced the practice of machine-driven musical composition. In 2019, the fledgling company Aiva Technologies introduced the first platform for musical composition by AI in a beta version that was destined to evolve rapidly.


AI applied to music

In 1956, Americans Lejaren Hiller and Leonard Isaacson launched Illiac Suite, which is considered the first musical composition produced by a machine. Inspired by Bach’s works, this project revolutionized our way of conceiving musical creativity.

The beginnings of digital technology can be traced to the 1970s. Digital technology, which superseded electronic automation, gave rise to a fusion of technologies, gradually blurring the lines between the physical, the biological and the virtual. We now see artists cloned as digital avatars that generate entire pieces by following an artist’s sounds and procedures and analyzing the peculiarities of their works.

Under the aegis of mimicry, the artificial intelligence company OpenAI (founded in 2015 by Elon Musk and other investors) in 2020 introduced Jukebox, a system that generates music with voice as raw audio, mimicking the genres and musical styles of different artists. “We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes,” claims the presentation. “We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable.” A link leads to samples where we can hear the familiar but quirky sounds that result. Thus a parallel musical dimension opens its doors.


Beyond mimicry

Some artists are seeking to use this technology to reinvent the way new music is produced. Björk created Kórsafn for the opening of the Sister City hotel in New York in 2020. The basic material for this project consisted of Björk’s choral archives and AI-generated music developed by Microsoft. The ever-fluctuating composition changed according to the surrounding elements and the weather as captured by a rooftop camera, with the results transmitted through the hallways of the hotel.

In the same vein, Jean-Michel Jarre, a pioneer of electro music, recently launched Eōn, an application that generates nonstop music. Last June, the Franco-Greek composer-performer Alexandros Markeas furthered the experiment as part of the ManiFest festival in Paris by exploiting the machine’s ability to make choices in response to a musician’s sounds and then follow audience expectations – a way to test the musical possibilities that arise in dialogue with AI.

Le Vivier attempted a similar experiment in early October in Montreal with Résonance croisée, which Arnaud G. Veydarier wrote about in his article for the September issue. youtu.be/YtHzNVwy5t8

While artists and researchers use artificial intelligence to create, innovate and explore, industry figures see it rather as an opportunity to save time and money. Snafu Records is already using algorithms to determine which aspiring stars have the greatest potential in terms of popularity and sales.


Copyright implications

Damien Riehl (lawyer, musician and coder) and Noah Rubin (programmer and musician) recently created, with their All the Music project, an algorithm that identifies every possible melody. The entrepreneurs illustrate the principle with a grid showing all possible melodies – those already taken in red, those that have not yet found a taker in green.

Their premise is that the possible combinations of the eight notes of the C major scale are limited (and, by extension, so are those of all keyboard notes and their modifications). Their system shows that the inventory of melodies is not endless and that fewer and fewer remain unclaimed every day. The possibility of an artist being accused of plagiarism therefore increases over time.
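To make the arithmetic concrete, here is a minimal Python sketch – not the project’s actual code – that counts and begins to enumerate fixed-length melodies built from the eight pitches of the C major scale. The 12-note length and single-octave range are illustrative assumptions; even these modest settings yield roughly 68.7 billion combinations, a vast but finite number.

```python
# Illustrative sketch only (not the All the Music project's code):
# count, and begin to enumerate, every fixed-length melody drawn
# from the 8 pitches of the C major scale.
from itertools import product

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B", "C'"]  # 8 scale degrees (assumed range)
MELODY_LENGTH = 12  # assumed melody length, for illustration only

# Total number of pitch sequences of this length: 8 ** 12 = 68,719,476,736
total = len(C_MAJOR) ** MELODY_LENGTH
print(f"{total:,} possible {MELODY_LENGTH}-note melodies")

# Enumerating them is a simple Cartesian product; printing billions is
# impractical, so we show only the first three to illustrate the idea.
for i, melody in enumerate(product(C_MAJOR, repeat=MELODY_LENGTH)):
    print(" ".join(melody))
    if i == 2:
        break
```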

Gone is the cliché of the artist creating from a blank page. Instead, composing takes place in “a melodic minefield,” where every new piece carries the risk of unknowingly landing on a square already taken.

By placing all possible melodies (more than 200 billion) in the public domain, the project aims to aid the pursuit and defence of copyright infringement lawsuits. youtu.be/sJtm0MoOgiU


Status of the artist

Is art advancing along the path of democratized creativity? Is the artist’s prestigious status destined to be deconstructed and rethought? The emphasis placed on the artist’s humanity can leave listeners with an impression of falseness. It remains to be seen how perceptions will adapt to the increasingly common use of ersatz digital procedures. By then, the next wave of AI will be under way. You can expect us to report in a future issue on the ways in which microchips implanted in the brain could improve music learning and amplify the abilities of musicians.
