Nice job. I thought about building something like this many years ago, but ended up experimenting with music generated from abelian sandpile algorithms instead. I've seen a number of attempts at using genetic algorithms to recombine previous musical patterns.
What's obviously missing is a "fitness function" that can approximate the equivalent of human taste, so the final evolved forms just end up being wildly random in terms of quality.
AlgoMotion also did a video explanation for a music-based version of Conway's Game of Life last year. Highly recommend their videos.
https://www.youtube.com/watch?v=b2SjVwYNr54
I've been toying with ideas like this for a long time now. I think the fitness function is critical, but the problem is that taste is subjective, and you need to listen to many riffs/melodies to evolve to a "good" state. Also, you either start with specific melodies, in which case you skew the results, or start with random noise, in which case it takes a very long time to evolve into anything good. So it seems like you must constrain it somehow, such as "only use 12 tones, with full/half/quarter/third notes".
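To make the constraint idea concrete, here's a minimal sketch of a genetic algorithm over fixed-length melodies of (pitch, duration) pairs. Everything in it — the 12-tone/3-duration alphabet, the operators, the names — is an illustrative assumption, not anyone's actual implementation; the fitness function is deliberately left as a parameter, since that's the open problem:

```python
import random

PITCHES = list(range(12))       # 12 chromatic tones of one octave
DURATIONS = [1.0, 0.5, 0.25]    # whole/half/quarter, as fractions of a beat

def random_melody(length=16):
    return [(random.choice(PITCHES), random.choice(DURATIONS))
            for _ in range(length)]

def mutate(melody, rate=0.1):
    # Resample each note with probability `rate`.
    return [(random.choice(PITCHES), random.choice(DURATIONS))
            if random.random() < rate else note
            for note in melody]

def crossover(a, b):
    # Single-point crossover of two melodies.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(fitness, population=50, generations=100):
    pool = [random_melody() for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)
        survivors = pool[: population // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(population - len(survivors))]
        pool = survivors + children
    return max(pool, key=fitness)
```

As a stand-in for taste you can pass a toy metric, e.g. `evolve(lambda m: -sum(abs(a[0] - b[0]) for a, b in zip(m, m[1:])))` rewards stepwise motion — which only underlines how far such proxies are from a real model of human preference.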
But anyway, my idea for resolving the problem of fitness evaluation taking forever would be to livestream it on Twitch: in the same vein as "Twitch Plays Pokemon", where viewers input commands to vote for an action, viewers could vote on the fitness of musical tracks.
> What's obviously missing is a "fitness function" that can approximate the equivalent of human taste, so the final evolved forms just end up being wildly random in terms of quality.
Honestly, for me this is a feature, not a bug. If I want to hear music that matches my personal taste exactly, I can just go to my instrument and play it. These tools are a way to taste more exotic forms and see if there's anything worth carrying over.
But when we conceptualize something like music as evolutionary computation, it is important to define a good metric for the fitness function; otherwise you might as well just take X pieces of music, normalize them to the same key signature/tempo/etc., and randomly mash them together.
If you're just in the mood for something more exotic, I'm happy to go repeatedly sit on my piano for a few hours and send you the final samples.
There is a MIDI sequencer called ZOA (for Apple devices) that does a very similar job. I had a lot of fun with it, combining it with synthesisers (I have a bunch of them, but my fave is Moog's) inside AUM.
Oh, this is cool. I did something similar with a modded Launchpad by programming GOL on it, converting each position's column and row to an octave and scale degree, and then outputting MIDI to a synth.
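For anyone curious what that mapping might look like, here's a minimal sketch — my own guess at the scheme, not the parent's actual code. The C major scale and base note 36 (C2) are arbitrary illustrative choices:

```python
# For each live cell: the column selects an octave, the row selects
# a degree of a hard-coded scale, giving a MIDI note number.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]   # scale degrees in semitones

def cell_to_midi(row, col, base_note=36):
    octave = col                            # each column is one octave higher
    degree = C_MAJOR[row % len(C_MAJOR)]    # wrap rows around the scale
    return base_note + 12 * octave + degree

def generation_to_notes(live_cells):
    """MIDI note numbers to emit for one Game of Life generation."""
    return sorted(cell_to_midi(r, c) for r, c in live_cells)
```

For example, `generation_to_notes({(0, 0), (1, 2)})` yields `[36, 62]`: the cell at row 0, column 0 is the base note, and the cell at row 1, column 2 is two octaves up plus one scale step.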
Pretty cool! How do you decide what tone to play on birth/death? Is it based on the position in the grid or do you just pick from a simple scale at random?
Sounds lovely! I'd love to hear what it's like when the number of living cells on screen controls the length of the note, so it's not just a constant rhythm (even though it is hypnotizing).
Do you expect that, in a blind trial, it could be distinguished from playing a statistically similar number of tones chosen randomly from the available cells?
Incidentally, if you like musical toys like this - Electroplankton [1] was a fun little game with a series of almost organic musical instruments.
[1] https://en.wikipedia.org/wiki/Electroplankton