Cloud Music No. 13

Michael Gogins
October 2022

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).

This is an online piece of electroacoustic music, rendered in your Web browser using high-resolution audio. It will play indefinitely, never ending, always changing.

The notes are played by a Csound orchestra that is embedded in this Web page using my WebAssembly build of Csound. That build in turn includes my CsoundAC library for algorithmic composition, which is used in this piece to generate randomly selected but (I hope) musically sensible chord progressions and modulations that are applied to the generated notes.
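
Here is a minimal sketch of that kind of embedding, assuming the npm @csound/browser package rather than my own build (whose API differs in detail); the one-oscillator orchestra is only a placeholder.

```typescript
// A sketch of starting Csound in the browser, assuming the npm
// "@csound/browser" package; the piece itself uses my own WebAssembly
// build, whose API differs in detail.
import { Csound } from "@csound/browser";

// A one-oscillator orchestra standing in for the piece's real instruments.
const csd = `
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 128
nchnls = 2
0dbfs = 1
instr 1
  aout poscil 0.2 * p5, cpsmidinn(p4)
  outs aout, aout
endin
</CsInstruments>
<CsScore>
</CsScore>
</CsoundSynthesizer>`;

async function startCsound() {
  const csound = await Csound();                // create the WebAssembly Csound instance
  if (!csound) throw new Error("Csound failed to initialize");
  await csound.compileCsdText(csd);             // compile the embedded orchestra
  await csound.start();                         // begin real-time performance
  // Generated notes are then sent to the running orchestra as score events:
  await csound.readScore("i 1 0 2 60 0.5");     // instr 1, now, 2 s, MIDI key 60, amplitude 0.5
  return csound;
}
```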

The music is generated by sampling the bottom row of pixels from the moving image, downsampling that row into fewer pixels, and translating those pixels into musical notes from left (lowest pitch) to right (highest pitch). Hue is mapped to instrument, saturation is mapped to duration, and value is mapped to loudness. Generally speaking, when a bright ring moves to the bottom of the display, you should hear some notes generated by that event.
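
The mapping can be sketched roughly as follows. For brevity this uses the 2D canvas API (the piece's canvas is WebGL, where gl.readPixels serves the same purpose), a simple stride stands in for the LTTB downsampling credited below, and the thresholds, pitch range, and helpers (rgbToHsv, playNote) are illustrative assumptions.

```typescript
// A sketch of the pixel-to-note mapping: read the bottom row of the canvas,
// thin it out, and turn bright pixels into notes. The thresholds, pitch
// range, and helpers (rgbToHsv, playNote) are illustrative assumptions.

// Convert an RGB pixel (0..255 per channel) to hue, saturation, value in 0..1.
function rgbToHsv(r: number, g: number, b: number): [number, number, number] {
  const rn = r / 255, gn = g / 255, bn = b / 255;
  const max = Math.max(rn, gn, bn), min = Math.min(rn, gn, bn);
  const d = max - min;
  let h = 0;
  if (d > 0) {
    if (max === rn) h = ((gn - bn) / d + 6) % 6;
    else if (max === gn) h = (bn - rn) / d + 2;
    else h = (rn - gn) / d + 4;
    h /= 6;
  }
  return [h, max === 0 ? 0 : d / max, max];
}

function sampleBottomRow(
  canvas: HTMLCanvasElement,
  notesAcross: number,
  playNote: (instrument: number, key: number, duration: number, loudness: number) => void
) {
  const context = canvas.getContext("2d")!;
  // The bottom row of pixels, as RGBA bytes (4 per pixel).
  const row = context.getImageData(0, canvas.height - 1, canvas.width, 1).data;
  const stride = Math.floor(canvas.width / notesAcross);
  for (let i = 0; i < notesAcross; i++) {
    const p = i * stride * 4;
    const [hue, saturation, value] = rgbToHsv(row[p], row[p + 1], row[p + 2]);
    if (value < 0.5) continue;                            // only bright pixels make notes
    const key = 36 + Math.round((i / notesAcross) * 60);  // left = low pitch, right = high
    const instrument = 1 + Math.floor(hue * 4);           // hue selects the instrument
    const duration = 0.25 + saturation * 4;               // saturation sets the duration
    playNote(instrument, key, duration, value);           // value sets the loudness
  }
}
```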

The viewer may exercise a certain amount of control over the piece by using the mouse, or by opening the Csound controls.

Feel free to use this piece as a template for creating new pieces of this type... as long as it doesn't sound too much like this one!

Please report any problems you have playing this piece, or any ideas for enhancements, on the cloud-music issues page.

Credits

I created the visuals for this piece by adapting Kishimisu's Inside the System, which has an open-source license compatible with the license of this piece.

My code in CsoundAC for working with chords, scales, and voice-leading implements basic ideas from Dmitri Tymoczko's work in music theory.
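
One of those basic ideas, keeping the revoicing of the next chord that moves least from the current one, can be sketched in a self-contained way. This is an illustration of the concept only (fixed voice order, octave transfers only), not the CsoundAC API.

```typescript
// A self-contained sketch of one voice-leading idea: among the octave
// revoicings of the next chord, keep the one that moves least, in total
// semitones, from the current voicing. Not the CsoundAC API.

// All revoicings of a chord with each voice shifted by -12, 0, or +12 semitones.
function candidateVoicings(chord: number[]): number[][] {
  const result: number[][] = [];
  const offsets = [-12, 0, 12];
  const build = (index: number, current: number[]) => {
    if (index === chord.length) { result.push([...current]); return; }
    for (const offset of offsets) {
      current.push(chord[index] + offset);
      build(index + 1, current);
      current.pop();
    }
  };
  build(0, []);
  return result;
}

// Total semitone distance between two voicings, voice by voice.
function voiceLeadingSize(a: number[], b: number[]): number {
  return a.reduce((sum, pitch, i) => sum + Math.abs(pitch - b[i]), 0);
}

// Choose the voicing of `next` that moves least from `current`.
function closestVoicing(current: number[], next: number[]): number[] {
  let best = next, bestSize = voiceLeadingSize(current, next);
  for (const candidate of candidateVoicings(next)) {
    const size = voiceLeadingSize(current, candidate);
    if (size < bestSize) { best = candidate; bestSize = size; }
  }
  return best;
}

// Example: C major (C4 E4 G4) moving to G major lands on G3 B3 D4,
// 15 semitones of total motion instead of 21 for G4 B4 D5.
console.log(closestVoicing([60, 64, 67], [67, 71, 74])); // [55, 59, 62]
```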

Code for compiling and controlling shaders is adapted from ShaderToy.com.
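
The ShaderToy-style compile-and-link step amounts to the standard WebGL2 calls sketched below; the shader sources are supplied by the caller and the error handling is generic.

```typescript
// A sketch of compiling and linking a shader program with the standard
// WebGL2 API, in the ShaderToy style of one full-screen fragment shader.
function compileShader(gl: WebGL2RenderingContext, type: number, source: string): WebGLShader {
  const shader = gl.createShader(type)!;
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader) ?? "shader compile failed");
  }
  return shader;
}

function createProgram(gl: WebGL2RenderingContext, vertexSource: string, fragmentSource: string): WebGLProgram {
  const program = gl.createProgram()!;
  gl.attachShader(program, compileShader(gl, gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compileShader(gl, gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(program) ?? "program link failed");
  }
  return program;
}
```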

The algorithm for downsampling the video canvas is Largest-Triangle-Three-Buckets (LTTB), from Sveinn Steinarsson's MS thesis, with code from https://github.com/pingec/downsample-lttb.
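
For reference, here is a compact sketch of the LTTB idea: keep the first and last points, and for each bucket in between keep the point that forms the largest triangle with the previously kept point and the average of the next bucket. The linked repository's code differs in detail.

```typescript
// A compact sketch of Largest-Triangle-Three-Buckets (LTTB) downsampling
// for [x, y] points; not the linked repository's exact implementation.
function lttb(data: [number, number][], threshold: number): [number, number][] {
  const n = data.length;
  if (threshold >= n || threshold < 3) return data.slice();
  const sampled: [number, number][] = [data[0]];          // always keep the first point
  const bucketSize = (n - 2) / (threshold - 2);
  let a = 0;                                              // index of the last kept point
  for (let i = 0; i < threshold - 2; i++) {
    // Average of the next bucket, used as the triangle's third vertex.
    const nextStart = Math.floor((i + 1) * bucketSize) + 1;
    const nextEnd = Math.min(Math.floor((i + 2) * bucketSize) + 1, n);
    let avgX = 0, avgY = 0;
    for (let j = nextStart; j < nextEnd; j++) { avgX += data[j][0]; avgY += data[j][1]; }
    avgX /= nextEnd - nextStart;
    avgY /= nextEnd - nextStart;
    // Keep the point in the current bucket forming the largest triangle
    // with the previously kept point and the next bucket's average.
    const start = Math.floor(i * bucketSize) + 1;
    const end = Math.floor((i + 1) * bucketSize) + 1;
    let maxArea = -1, chosen = start;
    for (let j = start; j < end; j++) {
      const area = Math.abs(
        (data[a][0] - avgX) * (data[j][1] - data[a][1]) -
        (data[a][0] - data[j][0]) * (avgY - data[a][1])
      ) / 2;
      if (area > maxArea) { maxArea = area; chosen = j; }
    }
    sampled.push(data[chosen]);
    a = chosen;
  }
  sampled.push(data[n - 1]);                              // always keep the last point
  return sampled;
}
```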

Csound instruments are adapted from Steven Yi (YiString and FMWaterBell), Joseph T. Kung (Kung2 and Kung4), Lee Zakian (ZakianFlute), and others.