I worked as a sound designer and developer at Melodrive, where I was in charge of designing synths and presets, mixing, and helping set up musical styles in Python.

This page showcases my different areas of work at Melodrive, a startup building deep adaptive AI music that responds to mood input and generates music in different styles. At the moment Melodrive has four released styles (Rock, House, Ambient, and Chiptune), with others in development. Everything is generated in real time, including the sounds, which are played through either a wavetable synth I created in Pure Data (compiled with the Heavy compiler) or sample instruments.

This has been the core challenge for sound quality, since being constrained to Pure Data alone for synthesis is very CPU-heavy. The Melodrive instruments are partly sample-based, with samples I cut from the instruments we needed.

You can listen to how Melodrive sounds at the moment.

Below is the main voice of the wavetable synthesizer I developed in Pure Data.
It contains 64 morphable wavetables with bins of 2048 samples, taken from Native Instruments Massive. The first 16 are displayed below.
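To illustrate the idea of a morphable wavetable bank, here is a minimal Python sketch. The table size (2048) and table count (64) match the synth described above, but the waveforms themselves and the crossfade scheme are assumptions for illustration; the actual synth is implemented in Pure Data.

```python
import numpy as np

TABLE_SIZE = 2048   # samples per wavetable bin, as in the synth
NUM_TABLES = 64     # number of morphable tables

# Hypothetical bank: 64 tables blending from a sine toward a saw shape.
phase = np.linspace(0.0, 1.0, TABLE_SIZE, endpoint=False)
sine = np.sin(2 * np.pi * phase)
saw = 2.0 * phase - 1.0
bank = np.array([(1 - t) * sine + t * saw
                 for t in np.linspace(0.0, 1.0, NUM_TABLES)])

def morphed_table(position):
    """Return one wavetable by crossfading between the two tables
    adjacent to `position` (a float in 0.0 .. NUM_TABLES - 1)."""
    position = min(max(position, 0.0), NUM_TABLES - 1)
    lo = int(position)
    hi = min(lo + 1, NUM_TABLES - 1)
    frac = position - lo
    return (1.0 - frac) * bank[lo] + frac * bank[hi]
```

Sweeping `position` continuously is what makes the timbre morph smoothly rather than jump between discrete tables.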

Presets contain low and high ranges for selected parameters, so the synth can morph between them in response to Melodrive's emotional changes.
This makes it possible to morph, for example, filter cutoff and wavetable position to convey a more excited feeling, or to introduce distortion and detune for a less happy sound. Apart from the real-time generation of music, these emotional changes are what makes Melodrive unique, and each instrument has to reflect the changes happening in the score.
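The low/high range idea can be sketched in a few lines of Python. The parameter names and ranges below are hypothetical examples, and the linear interpolation is an assumption; the point is only that every parameter stores a range and a single emotional-intensity value moves all of them together.

```python
# Hypothetical preset: each parameter stores a (low, high) range.
preset = {
    "filter_cutoff":      (400.0, 8000.0),   # Hz
    "wavetable_position": (0.0, 63.0),
    "detune":             (0.0, 0.3),
}

def morph_preset(preset, amount):
    """Interpolate every parameter between its low and high value.
    `amount` is an emotional-intensity control in 0.0 .. 1.0."""
    amount = min(max(amount, 0.0), 1.0)
    return {name: lo + amount * (hi - lo)
            for name, (lo, hi) in preset.items()}
```

Driving `amount` from the engine's emotional state then opens the filter and pushes the wavetable position for excitement, or pulls them back down for calmer moods.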


As part of showcasing Melodrive, we developed VR demos to show the real-time and interactive capabilities of the Melodrive Music Engine. Below are early examples of the Melodrive engine and our VR demos.