The journey to Apeiron started just a couple of months ago with a wild idea: why don't I create my own dream synthesizer? So, I started to brainstorm and prototype. Now, Apeiron is a living thing, although in a very rough prototype form. I hope to be ready for a first demo release in about 12 months' time, and will report progress here until then.
I have a very broad (some would say eclectic) taste in music and have dabbled in composition, off and on, for a couple of decades. Lately, I have mostly been working on pieces that could be described as dark ambient, although I really don't like labeling music. The direction I have been pursuing for the most part is exploring how sounds evolve and interact on a long timescale, building an atmosphere rather than tonal compositions as such.
This was my inspiration for creating my own synthesizer. The instrument would have to be something that allows experimentation and hopefully some unique sounds. The instrument is now taking shape in a very rough first demo form. Apeiron will not be an instrument to be played and will definitely not be tonal. The idea is that an Apeiron patch is a loose script for a performance, which the composer/performer can influence in real time but never fully control. Gestures will be used to nudge the sound generation but each performance will be unique.
Today is a milestone for the project, as the first Apeiron-based composition, Genesis One, was just released on Soundcloud. This is a sort of darksynth composition intended to demonstrate the first sounds created by Apeiron. All sounds were created by Apeiron and rendered as wav stems. The stems were imported into Logic Pro and arranged into the composition. Volume and pan automation was done in Logic, but the only additional processing was a bit of Audiothing Things Fold on the low drone and some Audiothing FOG for ambience.
Several of Apeiron's sound generators were used in this composition. I decided to begin by exploring how to make sound from natural phenomena - the music of nature, so to speak. The three main generators used in Genesis One are the following:
SeismicOscillator turns an earthquake catalog into a rhythm of spectral grains. Events are collected from the Icelandic Meteorological Office into a data file that is loaded into the SeismicOscillator. Each datapoint triggers a brief burst of energy in the frequency domain. The depth of the event determines where in the spectrum that energy sits - shallow volcanic seismicity produces bright, high-frequency cracks, while deep events produce low, diffuse rumbles. Magnitude controls duration and amplitude. The Genesis demo used data from the 2021 Fagradalsfjall eruption sequence.
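The mapping described above could be sketched roughly as follows. This is not Apeiron's actual code; the function name, the 0-30 km depth range, the 40 Hz-8 kHz spectral span, and the magnitude scaling are all hypothetical choices made for illustration.

```python
import numpy as np

def seismic_grain(depth_km, magnitude, sample_rate=48000):
    """Sketch of one SeismicOscillator-style grain (hypothetical mapping).

    Shallow events -> bright, high-frequency cracks;
    deep events -> low, diffuse rumbles.
    Magnitude scales duration and amplitude.
    """
    # Map depth (assumed 0-30 km) log-linearly onto a centre frequency:
    # 0 km -> ~8 kHz crack, 30 km -> ~40 Hz rumble.
    depth = np.clip(depth_km, 0.0, 30.0)
    centre_hz = 8000.0 * (40.0 / 8000.0) ** (depth / 30.0)

    # Magnitude (assumed 0-6) controls duration and peak amplitude.
    duration_s = 0.05 + 0.1 * magnitude
    amplitude = min(1.0, magnitude / 6.0)

    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    envelope = np.exp(-5.0 * t / duration_s)                # percussive decay
    noise = np.random.default_rng(0).standard_normal(t.size)
    grain = envelope * noise * np.sin(2 * np.pi * centre_hz * t)
    grain *= amplitude / np.abs(grain).max()                # normalise to peak
    return grain
```

A catalog row like (depth 5 km, magnitude 4) would then yield a short, bright burst, while (depth 25 km, magnitude 4) gives a longer, duller one.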
DataOscillator began as an experiment in using weather information as a sound source. The inspiration was a recent weather warning - we Icelanders are generally preoccupied with weather, so why not use it as a sound source? This was generalized into the DataOscillator, which can load any csv data series and use it to make music. In the case of Genesis, two weeks of data were used, culminating in an orange weather warning. The wind speed data drives a harmonic oscillator where calm conditions produce a near-pure tone at the fundamental and storm peaks spread energy across the full harmonic series.
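One plausible way to realise the calm-to-storm mapping is to let wind speed control how much energy leaks from the fundamental into the upper harmonics. This is a minimal sketch, not the DataOscillator's real algorithm; the 40 m/s storm ceiling, the 1/k roll-off, and the harmonic count are assumptions.

```python
import numpy as np

def wind_to_harmonics(wind_ms, n_harmonics=16, gust_max=40.0):
    """Hypothetical DataOscillator-style mapping from wind speed to the
    relative amplitudes of a harmonic series.

    wind_ms = 0       -> a single pure tone at the fundamental
    wind_ms >= gust_max -> energy spread across the full series (1/k roll-off)
    """
    spread = min(max(wind_ms, 0.0) / gust_max, 1.0)  # 0 = calm, 1 = storm peak
    k = np.arange(1, n_harmonics + 1)
    amps = np.where(k == 1, 1.0, spread / k)
    return amps / amps.sum()  # normalised amplitude per harmonic
```

Sweeping the csv's wind column through this function frame by frame would then morph the tone from pure to richly overtoned as the storm builds.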
The AuroraOscillator uses the aurora chorus — the electromagnetic radiation produced by electrons spiralling along Earth's magnetic field lines between 300 Hz and 30 kHz. This overlaps with the audible range and requires no frequency transposition. The module plays the VLF recording directly in spectral mode, routing it through the synthesis pipeline as analysis frames. The aurora signal feeds the frequency-domain pipeline, producing rising sweeps that vaguely resemble birdsong and a broadband hiss that varies with geomagnetic activity. The data to drive the AuroraOscillator was collected from an API published by the NASA Space Physics Data Facility (SPDF).
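Feeding a recording through a synthesis pipeline "as analysis frames" typically means a short-time Fourier transform: windowed slices converted to the frequency domain. A minimal sketch of that step, assuming a mono signal and conventional frame/hop sizes (the real AuroraOscillator's frame handling may differ):

```python
import numpy as np

def vlf_analysis_frames(signal, frame_size=2048, hop=512):
    """Sketch: slice a mono VLF recording into windowed FFT analysis
    frames, as an AuroraOscillator-style module might feed its
    spectral pipeline. Returns one complex spectrum per frame."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size] * window
        frames.append(np.fft.rfft(frame))  # frequency-domain analysis frame
    return np.array(frames)
```

Because the chorus already sits in the audible band, each spectrum can be handed straight to the resynthesis stage with no pitch shifting, only whatever spectral shaping the patch applies.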
The next steps in the Apeiron journey are to continue exploration of using natural phenomena, in particular high energy events, to synthesize and modulate sounds. I also plan on exploring fully synthetic sound generation, hopefully generating some unique textures.
— Kristján Valur Jónsson, Reykjavík, April 9th, 2026