Machine Learning Performance
Sounds, images and biometric data of birds are used to recreate their sound and behaviour. Through interpolation between different species and glitches in the learning algorithm, the sounds start out very natural and gradually become more ordered and structured, as in modern minimal or electronic music, while the images become increasingly abstract and generalised. As with my other work, each performance is very different from the previous one. Three different neural networks interact: the sound is recreated with SuperCollider and the image with openFrameworks. Everything happens in real time.
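The interpolation between species can be understood as movement through a learned latent space. The sketch below is a minimal illustration in plain NumPy, with made-up embedding vectors; the actual networks and the SuperCollider / openFrameworks pipelines are not reproduced here. It shows spherical interpolation (slerp) between two hypothetical latent codes, a common way to blend between points a generative model has learned:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors at position t in [0, 1]."""
    z0 = np.asarray(z0, dtype=float)
    z1 = np.asarray(z1, dtype=float)
    # Angle between the two (normalised) latent codes
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Hypothetical latent codes for two species (random stand-ins)
rng = np.random.default_rng(0)
z_species_a = rng.normal(size=64)
z_species_b = rng.normal(size=64)

# A path of 8 intermediate codes, each of which a decoder could
# turn into a sound or image somewhere "between" the two species
path = [slerp(z_species_a, z_species_b, t) for t in np.linspace(0.0, 1.0, 8)]
```

Slerp is often preferred over straight linear interpolation for latent codes because it keeps the intermediate vectors at a plausible distance from the origin rather than passing through low-probability regions.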
Chirp was first shown at LPM 2018 (Rome).