Visual Sound Fusion is an audiovisual collaboration with Maya Chen, merging live sound performance with generative visuals to create an immersive multisensory experience. The project exists in two forms: as a permanent installation at the ArtScience Museum in Singapore and as a live performance piece that has toured internationally.
The work explores the deep connection between sound and vision, using machine learning algorithms that let the audio and visual elements influence each other in real time. J3ZZ’s sound compositions generate parameters that control Maya Chen’s visual systems, while the visual patterns feed back into the audio engine, creating a closed loop of audiovisual generation.
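The project’s actual software is not published here, so the following is only a minimal sketch of how such a closed audiovisual feedback loop can be structured. Everything in it is an illustrative assumption: the `AudioEngine` and `VisualSystem` classes, the feature names (`rms`, `centroid`, `brightness`, `motion_energy`), and the simple hand-tuned mappings, which stand in for the learned models the work describes.

```python
import math
import random


class AudioEngine:
    """Toy stand-in for a generative audio engine (illustrative only)."""

    def __init__(self):
        self.filter_cutoff = 0.5   # normalized 0..1
        self.tempo = 120.0         # beats per minute

    def render_features(self):
        # In a real system these would come from live audio analysis
        # (e.g. RMS level, spectral centroid); here they are synthesized.
        rms = 0.5 + 0.5 * math.sin(self.tempo / 60.0) * self.filter_cutoff
        centroid = self.filter_cutoff * random.uniform(0.8, 1.2)
        return {"rms": rms, "centroid": min(max(centroid, 0.0), 1.0)}

    def apply_visual_feedback(self, visual_features):
        # Visual brightness nudges the filter; motion energy nudges tempo.
        self.filter_cutoff = 0.9 * self.filter_cutoff + 0.1 * visual_features["brightness"]
        self.tempo = 60.0 + 120.0 * visual_features["motion_energy"]


class VisualSystem:
    """Toy stand-in for a generative visual system (illustrative only)."""

    def __init__(self):
        self.hue = 0.0
        self.particle_speed = 0.0

    def apply_audio_parameters(self, audio_features):
        # Loudness drives particle speed; spectral centroid drives colour.
        self.particle_speed = audio_features["rms"]
        self.hue = audio_features["centroid"]

    def render_features(self):
        # In a real system these would be measured from the rendered frame.
        return {
            "brightness": 0.3 + 0.7 * self.particle_speed,
            "motion_energy": self.particle_speed,
        }


def run_feedback_loop(steps=8):
    audio, visuals = AudioEngine(), VisualSystem()
    for step in range(steps):
        visuals.apply_audio_parameters(audio.render_features())  # sound -> image
        audio.apply_visual_feedback(visuals.render_features())   # image -> sound
        print(f"step {step}: cutoff={audio.filter_cutoff:.2f} tempo={audio.tempo:.1f}")


if __name__ == "__main__":
    run_feedback_loop()
```

Running the sketch prints how the audio parameters drift as the two systems keep responding to each other, which is the closed-loop behaviour described above.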
In the installation format, visitors can interact with the system through motion sensors, becoming part of the generative process. In the live performance format, both artists manipulate their respective systems on stage, creating improvised audiovisual compositions that emerge from the interplay between sound and image. The result is a synesthetic experience in which it becomes impossible to separate what you hear from what you see.
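For the installation setting, the same idea could be extended so that visitor motion modulates the sound-to-image mapping. Again, nothing here reflects the installation’s real sensor pipeline: the `read_motion_activity` function and the blending depth are hypothetical, and the feature names simply match the sketch above.

```python
import random


def read_motion_activity():
    """Hypothetical stand-in for a motion sensor reading:
    0.0 for an empty room, 1.0 for a lot of visitor movement."""
    return random.random()


def blend_visitor_input(audio_features, activity, depth=0.5):
    """Scale the loudness-driven parameter by visitor activity, so a still
    room yields calmer visuals and movement intensifies them (assumed mapping)."""
    scaled = dict(audio_features)
    scaled["rms"] *= (1.0 - depth) + depth * activity
    return scaled


# Example: with depth=0.5, an empty room halves the loudness parameter.
print(blend_visitor_input({"rms": 0.8, "centroid": 0.4}, read_motion_activity()))
```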