A Walk to Meryton

This is my latest system; it has since been used to generate two additional series of works. A full description appears below.

Coding began in early spring 2021, and the first generations appeared in July 2021; video generation was added between April and June 2022. Live musicians were recorded from May to September 2022, and mixing and mastering (by Murat Çolak) took place from October 2022 to March 2023. The double vinyl was released by RedShift Records in March 2024, and is also available on Bandcamp.

In the summer of 2022, I collaborated with four amazing individuals to overlay a human response onto the generated parts of A Walk to Meryton: John Korsrud, trumpet and flugelhorn; Meredith Bates, violin; Jon Bentley, soprano and tenor saxophones; and Barbara Adler, text and reading.

John, Meredith, and Jon were given the harmonic progressions and melodies for each composition, with suggestions as to which sections they could improvise within.

Barbara and I had long conversations about walking, Jane Austen, musebots, and internal dialogs. Barbara then added her own take on these ideas, and provided readings.

Record release party, March 1, 2024, at the Goldcorp Centre for the Arts.

The Double Vinyl Record; Art Direction by Brady Cranfield

The reviews of the album are starting to come in:

“No doubt the relative merits of AI and its ramifications for art will continue to be debated. But if evidence is needed for the ways in which AI can function as a benign element enhancing creativity, A Walk to Meryton surely can provide it.” Daniel Barbiero, Avant Music News

"As a co-author of music at the intersection of contemporary music, jazz, spoken word and electronic ambient, Eigenfeldt presents "MuseBots", as he named his electronic "musical robots". However, there is a human element in the recording. And the electronics generated by the machine match the inputs of jazz trumpeter and composer John Korsrud or saxophonist Jon Bentley (Seamus Blake, Kenny Wheeler) without any problems." Tomáš S. Polívka, Czech Radio

The ten movements, below:

Project Description

Building upon my previous generative systems, such as Moments, this system's approach is much more compositional than improvisational: high-level decisions are made by a ProducerBot, while playerBots fulfill specific roles, doing what they can and know how to do. Furthermore, the musebots do significantly more editing of their material: they write their parts into a collective score, which other musebots can access and use to inform their own decisions, making second passes at their parts.
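A minimal sketch of the collective-score idea, written in Python; the names (Score, PlayerBot) and the simple revision rule are invented for illustration and are not the actual musebot implementation:

```python
# A minimal sketch of a collective score with a two-pass generation cycle.
# The names (Score, PlayerBot) and the revision rule are invented for
# illustration; they are not the actual musebot code.
from dataclasses import dataclass, field

@dataclass
class Score:
    parts: dict = field(default_factory=dict)  # role -> list of note events

    def write(self, role, events):
        self.parts[role] = events

    def read(self, role):
        return self.parts.get(role, [])

class PlayerBot:
    def __init__(self, role):
        self.role = role

    def first_pass(self, score, framework):
        # generate an initial part from the framework, one event per bar
        events = [{"bar": bar, "pitch": 60 + bar % 5} for bar in range(framework["bars"])]
        score.write(self.role, events)

    def second_pass(self, score):
        # revise the part after reading what the other bots have written:
        # thin out bars that other parts already occupy
        busy_bars = {e["bar"] for role, evs in score.parts.items()
                     if role != self.role for e in evs}
        revised = [e for e in score.read(self.role)
                   if e["bar"] not in busy_bars or e["bar"] % 2 == 0]
        score.write(self.role, revised)

score = Score()
bots = [PlayerBot("bass"), PlayerBot("melody")]
for bot in bots:
    bot.first_pass(score, {"bars": 8})
for bot in bots:
    bot.second_pass(score)
print(score.parts)
```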

A ProducerBot generates a complete framework – including a plan for when specific musebots should play – and a chord progression (based upon a much fuller corpus than previously used). This produces a “lead sheet”, which can be interpreted multiple times by the playerBots.
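Purely as an illustration (the actual framework format differs), such a framework might look something like this:

```python
# Hypothetical illustration of a ProducerBot framework ("lead sheet"):
# a sectional plan stating which musebots play where, plus a chord progression.
framework = {
    "tempo": 92,
    "sections": [
        {"name": "A",  "bars": 8, "active_bots": ["pad", "bass"]},
        {"name": "B",  "bars": 8, "active_bots": ["pad", "bass", "melody"]},
        {"name": "A'", "bars": 8, "active_bots": ["melody", "percussion"]},
    ],
    # one chord per bar, drawn from a corpus-derived progression
    "progression": ["Dm7", "G7", "Cmaj7", "Am7", "Dm7", "G7", "Em7", "A7"],
}
```

Because each playerBot interprets the same framework independently, repeated runs yield different realisations of one "lead sheet".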

Individual musebots (playerBots) generate their own parts, and select their own synths. Only two high-level controls are available for generation: valence (pleasantness) and arousal (activity).
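A rough sketch of how two such controls could be mapped onto lower-level generation parameters; the mapping below is invented for illustration only:

```python
# Invented example mapping from valence/arousal to generation parameters.
def generation_params(valence: float, arousal: float) -> dict:
    """valence and arousal are expected in the range 0.0 to 1.0."""
    return {
        "note_density": 0.2 + 0.7 * arousal,         # busier parts when arousal is high
        "rhythmic_regularity": 1.0 - 0.5 * arousal,  # more syncopation when active
        "consonance_bias": 0.3 + 0.6 * valence,      # favour consonant intervals when pleasant
        "register_spread": 0.4 + 0.4 * arousal,      # wider pitch range when active
    }

print(generation_params(valence=0.8, arousal=0.3))
```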

Musebots choose their own timbres, using a database of possible patches from a multitude of synths. The only hand editing after generation is some volume adjustment between parts in Ableton.
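One plausible way to select patches from such a database (a sketch with invented patch metadata) is to filter by the bot's role and weight the candidates by how closely they match the current valence and arousal:

```python
import random

# Invented patch metadata for illustration; the real database is much larger.
PATCHES = [
    {"name": "warm_pad",    "synth": "SynthA", "roles": ["pad"],    "valence": 0.8, "arousal": 0.2},
    {"name": "glass_keys",  "synth": "SynthB", "roles": ["melody"], "valence": 0.7, "arousal": 0.5},
    {"name": "gritty_bass", "synth": "SynthC", "roles": ["bass"],   "valence": 0.3, "arousal": 0.8},
]

def choose_patch(role, valence, arousal):
    candidates = [p for p in PATCHES if role in p["roles"]]
    # weight each candidate by closeness to the requested valence/arousal
    weights = [1.0 / (0.01 + abs(p["valence"] - valence) + abs(p["arousal"] - arousal))
               for p in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

print(choose_patch("pad", valence=0.6, arousal=0.3)["name"])
```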

The title of the entire series – A Walk to Meryton – as well as the titles of the individual movements, is generated by a bot from the text of Jane Austen’s Pride and Prejudice, using a second-order Markov chain.
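A second-order Markov chain conditions each new word on the previous two. A minimal sketch of the idea, trained here on a single stand-in sentence rather than the full novel:

```python
import random
from collections import defaultdict

def build_chain(words):
    # second order: each next word is conditioned on the previous two words
    chain = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        chain[(a, b)].append(c)
    return chain

def generate_title(chain, length=5):
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length - 2):
        options = chain.get(state)
        if not options:
            break
        nxt = random.choice(options)
        out.append(nxt)
        state = (state[1], nxt)
    return " ".join(out).title()

# Stand-in corpus; the actual bot uses the full text of Pride and Prejudice.
text = ("it is a truth universally acknowledged that a single man in possession "
        "of a good fortune must be in want of a wife").split()
print(generate_title(build_chain(text)))
```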

Videos are also generative: given the generated audio and score, video bots select five images – one for each section within the music – from a database of photographs I have taken on recent walks in nature. The images are slowly panned and sent through video processes that are sensitive to movement.
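A sketch of the selection step only (the file names and the pan representation are placeholders; the motion-sensitive video processing itself is not shown):

```python
import random

# Illustration only: choose one photograph per musical section and assign a slow pan.
def plan_video(sections, photo_paths, pan_speed=0.02):
    plan = []
    for section, photo in zip(sections, random.sample(photo_paths, k=len(sections))):
        plan.append({
            "section": section["name"],
            "bars": section["bars"],
            "image": photo,
            "pan": {"direction": random.choice(["left", "right", "up", "down"]),
                    "speed": pan_speed},
        })
    return plan

sections = [{"name": "A", "bars": 8}, {"name": "B", "bars": 8}]
photos = ["walk_01.jpg", "walk_02.jpg", "walk_03.jpg"]  # placeholder file names
print(plan_video(sections, photos))
```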

Finally, the system produces lead sheets which display the overall form, harmonic progression, and melodies; this allows musicians to improvise over the generative music. It also allows the system to load these scores and regenerate parts for new performances, similar to how jazz musicians continually reinterpret lead sheets (more examples and explanation below).

Two early generations (with placeholder videos) from July 2021, after four months' work on the system.

A further novelty of this system is that the generated frameworks (created by the ProducerBot and provided to the playerBots) and the scores (generated by the playerBots) are saved, making it possible to translate them into human-readable scores. The goal is to eventually provide human musicians with such scores, allowing them to improvise to the musebots' generated material. One additional bonus of this process is that the frameworks function like lead sheets, and the musebot score like a single performance; it is entirely possible to create new musebot performances from the same structures, much like an ensemble of jazz musicians creates different interpretations of the same lead sheet (i.e. tune).
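As a sketch of that workflow (the function names and file name are hypothetical), a saved framework could simply be serialised to disk and reloaded for a new realisation:

```python
import json

# Sketch: persist a generated framework so it can be re-realised later,
# much as jazz musicians reinterpret the same lead sheet.
def save_framework(framework, path):
    with open(path, "w") as f:
        json.dump(framework, f, indent=2)

def load_framework(path):
    with open(path) as f:
        return json.load(f)

# Hypothetical usage:
# save_framework(framework, "meryton_movement1.json")
# framework = load_framework("meryton_movement1.json")
# for bot in player_bots:               # playerBot objects as in the earlier sketch
#     bot.first_pass(score, framework)  # a new, different realisation of the same plan
```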

An example of a framework (generated July 20, 2021), with two different realisations by the musebots.