Training a neural network to sing in an interactive public ceremony held at Martin Gropius Bau, Berlin
Spawn Training Ceremony: Deep Belief (March 2019). Performance, multichannel A/V, TensorFlow
In Deep Belief, a cast of characters led audience members through a series of sung and spoken exercises to train Holly and Mat’s singing neural network, Spawn.
In 2016, Holly and Mat began development of their “AI Baby” project, Spawn, in collaboration with machine learning developer Jules LaPlace. By training a singing neural network on the voices of Holly and her friends, they aimed to welcome an inhuman collaborator into their growing human vocal ensemble in advance of recording a new album.
Spawn was not ready to contribute on stage by the time the tour began. Instead, ensemble members led each audience in a call-and-response song from the stage to gather more vocal samples on which to train Spawn, as seen in this clip from the Barbican in London. To date, over 50,000 voices have contributed to training the network.
Spawn is present as a vocalist throughout Holly’s album PROTO (4AD, 2019). Musical passages were interpreted by the AI just as they might have been by a human vocalist. Rather than viewing AI as displacing a human composer, this approach frames AI as a collaborator capable of learning from, surprising, and augmenting the capabilities of the musicians working alongside it.
Holly is renowned for building tools to experiment with the voice, and views AI as an unprecedented opportunity to expand her vocal range and capabilities. She also researched the intellectual property implications of machine learning models during her doctoral studies at Stanford University’s CCRMA (Center for Computer Research in Music & Acoustics).
Holly operates at the edge of electronic and avant-garde pop music, emerging with a dynamic and disruptive canon of her own. On her most recent full-length album, PROTO, Herndon fronts and conducts an electronic pop choir composed of both human and AI voices. The sounds synthesized on PROTO by Herndon, her “AI Baby” Spawn, and the vocal ensemble combine elements from Herndon’s dynamic and idiosyncratic personal journey: the timeless folk traditions of her church-going childhood in East Tennessee, the avant-garde music she explored at Mills College, and the radical club culture of Berlin. They are all enhanced by her recent PhD composition studies at Stanford University, researching machine learning and music. Herndon co-founded the podcast series Interdependence alongside Mat Dryhurst.
Twitter | Instagram | Website
Mathew Dryhurst is an artist and researcher based in Berlin. He makes music and creates art with Holly Herndon, and their albums PROTO and Platform (4AD) have earned international critical acclaim. He teaches at NYU’s Clive Davis Institute of Recorded Music, the Strelka Institute, and the European Graduate School, and previously served as Director of Programming at Gray Area in San Francisco. Dryhurst co-founded the podcast series Interdependence alongside Holly Herndon.
Twitter | Instagram
Dataset processing isn't new to musicians; entire musical genres are built solely on manipulating samples. Today, musical production techniques have become so sophisticated that listeners often perceive only the resulting textures. But behind the scenes, the methods of making those textures are changing dramatically. This panel will feature creators and users of electronic instruments that rely on deep learning for their performance, whether to create entirely new sounds and music from existing recordings or to give musical performance a human form.