a project by Timm Albers
audio-visuelle Artikulation 1, Lorenz Potthast, WS 2020/2021
video documentation, 01:18 min
approximating proximity approaches the topic of nearness / distance using an abstract audio-visual language built around two distinct objects, arbitrarily positioned in space. Software controls the positions of the two objects: one is repositioned entirely at random, while the other may only move a limited distance at a time. This continually changes their relationship in terms of distance / proximity to each other.
The software picks new positions at random times and generates a percussive sound every time a position changes.
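The two movement rules described above can be sketched roughly as follows. This is an illustration only, not the actual software: the coordinate range, the step size, and the function names are assumptions.

```python
import random

def random_position():
    """First rule: reposition an object entirely at random
    within a unit square."""
    return (random.uniform(0.0, 1.0), random.uniform(0.0, 1.0))

def constrained_step(pos, max_step=0.1):
    """Second rule: move an object by at most max_step per axis
    from its current position, clamped to the unit square."""
    x, y = pos
    x = min(1.0, max(0.0, x + random.uniform(-max_step, max_step)))
    y = min(1.0, max(0.0, y + random.uniform(-max_step, max_step)))
    return (x, y)

a = random_position()    # jumps anywhere in the square
b = constrained_step((0.5, 0.5))    # drifts only a little at a time
```

Because one object jumps freely while the other only drifts, the distance between them changes abruptly at some moments and gradually at others.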
Relationships cannot be defined through proximity alone; they also involve non-linear elements: the strength of a bond does not increase proportionally as the distance between two objects decreases.
To reflect this, the rhythmic change of the objects’ positions is disrupted by chaotic distortions. The objects are deformed until they become indistinguishable, forming one complex, synthesized object instead.
My interest was to explore an abstract audio-visual language and to work with a generative approach involving randomness. The project is meant to be presented as an installation, and I would like to expand it to be interactive as well.
- Install Atom
- From the Atom package manager, install Veda
- Install Max (paid subscription needed)
- Install Valhalla Supermassive
- In Atom, open the command palette (CMD+Shift+P), search for Veda: Toggle, and press ↩.
- In Atom, run the shader by opening the shader file
- In a browser, open localhost:3001
The project was developed over the course of two months; the following is a rough timeline of its development.
2020-12-18 — first idea
The topic we were given was “Nähe / Distanz” (proximity / distance). After doing some sketches, my idea was to visualize the topic using abstract forms: different elements that change their position and, consequently, their proximity to each other. My first idea involved dissolving elements as they come close, to visualize them blending into each other.
Questions I had at this point included how to potentially make this interactive (which did not happen) and how proximity “sounds”. I planned on implementing this as some kind of generative, sound-reactive software and thought about using Max/MSP and Jitter.
2020-12-19 - 2020-12-20 — blending forms in GLSL
I found an overview of forms implemented in GLSL using ray marching on Inigo Quilez’s website. The overview also covers blending two forms, which I found really interesting for realizing my idea. I found GLSL interesting anyway and decided to pursue that route for the visual part.
2020-12-21 — cel shading
I read about cel shading and tried a primitive implementation.
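A primitive cel (toon) shader typically quantizes the diffuse lighting term into a few discrete bands instead of using a continuous gradient. This sketch is an assumption about what such a first implementation looks like, written in Python for illustration rather than GLSL:

```python
import math

def cel_diffuse(n, l, bands=3):
    """Quantize the Lambertian term n.l (surface normal dot
    light direction, both unit vectors) into a few discrete
    brightness bands - the flat, stepped look of cel shading."""
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return math.floor(ndotl * bands) / bands

# Normal pointing straight at the light -> the brightest band.
brightness = cel_diffuse((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
```

Increasing `bands` approaches ordinary smooth shading; two or three bands give the classic comic-like look.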
2021-01-08 — interaction and audio
I worked on controlling the objects from outside the shader using Max and started working on some linear audio ideas.
The following is the result of experimenting in Ableton Live:
audio + video
2021-01-15 — progress
I made some progress. This is a screen capture of a combination of Max and GLSL for the visuals, plus some linear arrangements in Ableton Live. The visuals are controlled from MIDI tracks in Ableton, with the Max patch acting as a mediator between the MIDI sent from Ableton and the shader.
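The mediation step can be pictured as a simple value mapping; the actual Max patch is not shown here, so this is a hypothetical sketch: a 7-bit MIDI control change value (0–127) coming from Ableton is normalized to the 0.0–1.0 range that a shader uniform typically expects.

```python
def cc_to_uniform(cc_value):
    """Normalize a 7-bit MIDI CC value (0-127) to the 0.0-1.0
    range of a shader uniform, clamping out-of-range input."""
    return max(0, min(127, cc_value)) / 127.0

cc_to_uniform(64)   # ≈ 0.504, roughly mid-range
```

In the real setup, Max receives the MIDI events and forwards the normalized values to the shader as uniforms, so the Ableton arrangement drives the visuals.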
The screen capture turned out to be really laggy for some reason.
2021-01-20 — generative sound
I wrote a small Max patch that generates sound using random rhythms. It uses Valhalla Supermassive to add a sense of room.
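The patch itself is not reproduced here, but the idea of a randomly generated rhythm can be sketched as follows; the note-length choices and bar length are illustrative assumptions, not values from the patch:

```python
import random

def random_rhythm(total_beats=8.0, note_lengths=(0.25, 0.5, 1.0)):
    """Build one bar of percussive trigger times (in beats) by
    stacking randomly chosen note lengths until the bar is full."""
    t, times = 0.0, []
    while t < total_beats:
        times.append(t)
        t += random.choice(note_lengths)
    return times

triggers = random_rhythm()   # strictly increasing times within the bar
```

Each trigger time would fire one percussive event; regenerating the list every bar yields an ever-changing rhythm.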
2021-01-22 — progress and planning of the video shoot
I made some progress with the sound and animation and planned the video shoot. My intermediate result at this point is documented in the following screen capture:
I uploaded the same video to Vimeo.