Basking in Moments with Arne Eigenfeldt

January 23, 2017

By Ian Bryce

If computers wrote music, what kind of music would they create and would it be good?

These are questions Simon Fraser University School of Contemporary Arts professor Arne Eigenfeldt explores through his research in computer-generated music.

Over the past 10 years, Eigenfeldt has researched and developed intelligent musical programs that collaborate and play music together. His latest research has led him to create musebots—independent, intelligent, musical programs—that create entire musical compositions in mere seconds.

As part of the SCA Faculty Series, Eigenfeldt is exhibiting his musebots in Moments—an installation at SFU’s Goldcorp Centre for the Arts running from Jan. 18 to 29 where the artificial intelligence programs generate 10-minute original compositions.

We spoke with Eigenfeldt about Moments and his musebots:

You teach electro-acoustic music at the SCA. How do you see computers in relation to music?

I see computers as musicians rather than instruments, and I expect more than what I input into them.

Coding musebots allows me to explore composition and hear the different ideas in my head. The musebots allow me to create music where I didn’t make all the decisions, but could have.

Has there been a moment where you’ve been surprised by the composition?

In my research, I’ve created systems that are consistent at creating good music but they don’t surprise me. One of the tricks in generative music is how to balance the notion of surprise and expectation.

Part of my algorithm uses learning where, rather than coding musical rules directly, I’ve provided musebots with music and the algorithm tries to figure out the rules. What they learn by this method has had some interesting results.
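The idea of learning musical rules from examples, rather than coding them directly, can be illustrated with a toy sketch. The corpus, the first-order Markov model, and the `generate` function below are all hypothetical simplifications for illustration — Eigenfeldt's actual systems are far richer — but they show the principle: the program derives its note-to-note "rules" from the music it is given.

```python
import random

# Toy corpus: three example melodies as MIDI pitch numbers
# (hypothetical training data, standing in for real musical input).
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 64, 60, 62, 64, 62, 60],
    [67, 65, 64, 62, 60, 62, 64, 65, 67],
]

# Learn first-order transition "rules" from the corpus instead of coding them:
# for each pitch, record which pitches followed it in the examples.
transitions = {}
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)

def generate(start, length, rng=random):
    """Generate a new melody by sampling the learned transitions."""
    note, melody = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions.get(note, [start]))
        melody.append(note)
    return melody

print(generate(60, 8))
```

Because the output is sampled from rules the program inferred rather than rules the composer wrote, it can follow the style of the corpus while still producing sequences the composer never anticipated — the balance of expectation and surprise described above.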

This summer, I was presenting the musebots at the New Interfaces for Musical Expression conference in Australia. I had two mechanical pianos set up to do a live performance for 10 minutes.

During the performance, the musebots decided to see how quietly they could play. On stage, you could see the keys moving up and down but no one could hear anything. Then, the musebots started influencing each other but, instead of getting louder, they became even quieter.

So surprise can be good and surprise can be bad.
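The kind of mutual influence behind that performance can be sketched in a few lines. The rule below — each agent listening to the ensemble and dipping just under the quietest player it hears — is a hypothetical stand-in for whatever the actual musebots negotiated, but it reproduces the behaviour described: influence that spirals toward silence instead of loudness.

```python
import random

def perform(num_agents=4, steps=10, seed=1):
    """Simulate agents whose loudness influences one another (toy model)."""
    rng = random.Random(seed)
    # Start at ordinary playing levels (MIDI-velocity-like values).
    loudness = [rng.uniform(40.0, 80.0) for _ in range(num_agents)]
    history = [list(loudness)]
    for _ in range(steps):
        quietest = min(loudness)
        # Hypothetical rule: each agent listens to the others and plays
        # slightly quieter than the quietest agent it heard.
        loudness = [max(0.0, quietest - rng.uniform(0.0, 2.0))
                    for _ in loudness]
        history.append(list(loudness))
    return history

history = perform()
print(round(max(history[0]), 1), "->", round(max(history[-1]), 1))
```

Under this rule the ensemble's loudness can only fall, which is why, on stage, the keys kept moving while the sound faded below audibility.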

How do you know what needs to be debugged and what is intentional?

Intentionality is very difficult to measure. This is even more problematic because I don't want to restrict the agents unduly. It is an incredibly complex system that is hard to predict, so I listen to it a lot, and take notes, and attempt to find that ‘sweet spot’ where the musebots might make good music. Then I try to gently nudge them towards that in the code.

It’s the equivalent of a musician rehearsing, but for computers.


Moments runs from Jan. 18 to 29 in the lobby of Simon Fraser University’s Goldcorp Centre for the Arts.