STARZ (2021)

Digital. Generative visuals converted to audio data using a computer vision library, outputting spatial audio in a harmonic series whose root frequency is determined by CV coordinate points.

Max MSP & Jitter, CV.Jit computer vision library

This audiovisual installation was inspired by the works of Ryoji Ikeda, one of my biggest influences in the field of complex, innovative audiovisual technology. Like Ikeda, I am a music producer and sound artist first and a visual artist second, so the majority of my prior audiovisual works involved building visuals that reacted to parameters of my audio tracks. Here I decided to invert the process and instead develop a system of generative visuals whose visual data is extracted and interpreted as audio. I did this under the guidance of Jitter developer and NYU faculty member Luke DuBois, who helped greatly with the construction of my system, built (appropriately) with the Jitter library, Max's native signal processing tools, and the CV.jit computer vision library for Jitter.

First, I created a window out of an array of Jitter noise objects, producing a field of static-like random visual noise. From there, I used time-based functions to modulate the dimensions of the individual noise elements, creating the effect of parts of the display going in and out of focus, or changing resolution dynamically. I had set out to closely emulate Ikeda's geometric, line-based strobe visuals, but the more I worked on the project the more profoundly it diverged from my reference material; the almost particle-based system I arrived at looks drastically different. After working with shaders to explore color scales for the display, I moved further from Ikeda's black-and-white palette toward full color, employing specific RGB values and shaders to avoid greyscale and present a vibrant system of reds, greens, blues, and whites. This color scheme was heavily influenced by the collection released that spring by Acne Studios in collaboration with artist Ben Quinn. I was particularly transfixed by the women's short-sleeve t-shirt from that collection, which inspired me to name this piece after its central geometric figure: a star.
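The noise-grid idea above can be sketched conceptually outside of Max. This is a rough Python/NumPy approximation, not the Jitter patch itself; the function names, cell-size mapping, and parameters are my own assumptions about how such a time-modulated block-noise field might work.

```python
import numpy as np

def noise_frame(t, size=64, base_cell=8):
    """One frame of a block-noise field whose cell size oscillates with time t.
    Larger cells read as 'lower resolution' regions of the display, loosely
    mimicking the in-and-out-of-focus effect described above (my assumption,
    not the actual patch logic)."""
    # Cell size swings between 1 and base_cell as a sinusoidal function of time.
    cell = max(1, int(round((np.sin(t) * 0.5 + 0.5) * (base_cell - 1))) + 1)
    # Coarse random noise, one sample per cell (plus padding for the crop below).
    coarse = np.random.rand(size // cell + 1, size // cell + 1)
    # Nearest-neighbour upscale: each coarse sample becomes a cell x cell block.
    frame = np.kron(coarse, np.ones((cell, cell)))[:size, :size]
    return frame
```

Rendering successive frames with increasing `t` would give regions of the field that appear to change resolution over time.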

With this as the basis for the project, I augmented the display using shader modules developed by Luke that remap the display's coordinate system according to red-green bitmap images made in Photoshop. I modulated these, too, as a function of time, giving the effect of four distinct scenes or landscapes that are possible at any given point. From there, I again enlisted Luke's help to develop a shader that blurs the display, to mitigate the pixelation of the work, which I also addressed by greatly increasing the number of noise elements on screen at any given time.
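Red-green coordinate remapping is a common shader idiom: the red channel of the bitmap encodes a normalized source x position and the green channel a source y position for each output pixel. The following is a minimal NumPy sketch of that idea under my own assumptions about the encoding; the actual shader modules were Luke's, and their exact channel convention may differ.

```python
import numpy as np

def remap(frame, coord_map):
    """Warp a 2D frame through a red-green coordinate bitmap.
    coord_map has shape (h, w, 2): channel 0 ('red') is the normalized
    source x, channel 1 ('green') the normalized source y, per output pixel."""
    h, w = frame.shape
    xs = np.rint(coord_map[..., 0] * (w - 1)).astype(int)  # red -> source column
    ys = np.rint(coord_map[..., 1] * (h - 1)).astype(int)  # green -> source row
    return frame[ys, xs]

def identity_map(h, w):
    """A coordinate bitmap that leaves the frame unchanged (useful baseline)."""
    ys, xs = np.mgrid[0:h, 0:w]
    m = np.zeros((h, w, 2))
    m[..., 0] = xs / (w - 1)
    m[..., 1] = ys / (h - 1)
    return m
```

Cross-fading between several hand-painted coordinate bitmaps over time would produce the effect of distinct scenes, as described above.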

For the audio component, I again wanted to emulate some of Ikeda's 'microsound' compositions. However, it is particularly important to me to create sounds with physical properties, so I adapted one of the reference Max patches, which modulates a white noise oscillator with a series of filters tuned in harmonic sequence. The root frequency of this sequence is derived from the CV.jit library, which finds the coordinate point of the first detected feature of the display and translates it into a float value. This is heavily influenced by the coordinate-system bitmap images, which, paired with distinct audio qualities and tempos, enhance the immersive sensation of four randomly generating scenes or landscapes, as alluded to above. I ran this audio through a compressor, which added texture and attack transients to the shifting tones, and used modulation on the panning of the signal to simulate spatial audio. I still plan to revisit this work and properly implement a spatial audio system, but the limitations of the pandemic have made it difficult to develop and test a system designed to interface with a physical space. I am still looking for opportunities to put this program on display.
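The signal chain described above can be sketched as: a CV feature coordinate maps to a root frequency, white noise passes through resonant filters at harmonics of that root, and an LFO modulates equal-power panning. This Python/NumPy sketch is my own illustration of those three stages, not the Max patch; the frequency range, filter design (a simple two-pole resonator), and parameter values are all assumptions.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def coord_to_root(x, y, lo=55.0, hi=880.0):
    """Map a normalized CV feature coordinate (0..1 each axis) to a root
    frequency in Hz. Exponential mapping so the sweep sounds pitch-linear;
    the 55-880 Hz range is an assumption."""
    t = np.clip((x + y) / 2.0, 0.0, 1.0)
    return lo * (hi / lo) ** t

def harmonic_noise(root, n_partials=8, dur=0.25, sr=SR, seed=0):
    """White noise filtered by two-pole resonators at harmonics of `root`,
    approximating a harmonic filter bank on a noise source."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(sr * dur))
    out = np.zeros_like(noise)
    for k in range(1, n_partials + 1):
        f = root * k
        if f >= sr / 2:          # skip partials above Nyquist
            break
        w = 2 * np.pi * f / sr
        r = 0.999                # pole radius: closer to 1 = narrower, more tonal
        y1 = y2 = 0.0
        y = np.empty_like(noise)
        for n, s in enumerate(noise):
            y[n] = s + 2 * r * np.cos(w) * y1 - r * r * y2
            y2, y1 = y1, y[n]
        out += y / n_partials
    return out / np.max(np.abs(out))  # normalize to full scale

def pan_lfo(mono, rate=0.5, sr=SR):
    """Slow sine LFO on equal-power panning, to mimic movement in space."""
    t = np.arange(len(mono)) / sr
    p = (np.sin(2 * np.pi * rate * t) + 1) / 2   # 0 = hard left, 1 = hard right
    return np.stack([mono * np.cos(p * np.pi / 2),   # left channel
                     mono * np.sin(p * np.pi / 2)])  # right channel
```

A compressor stage, as used in the actual piece, would follow the filter bank; it is omitted here for brevity.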

Generative visual noise system logic


Coordinate system logic


Computer vision & audio system logic


Previous

OUTSIDER ART (2022)

Next

3D Hand Printer (2021)