Audiovisual simulation
Audiovisual simulation of artificial life with emergent behavior and real-time generative music

LIFESWARM is an audiovisual experiment developed by the Artefacto Films research laboratory as part of an aesthetic exploration into artificial life and its cinematographic potential. It starts from the following question: what happens when artificial organisms learn to behave collectively, and how can that behavior generate an autonomous sensory experience? The system sets in motion thousands of digital entities that follow simple rules but produce visual and sound patterns of a complexity that no individual entity could anticipate. The piece functions as an observation field for the type of collective intelligence that cinema has historically attempted to represent through crowds, flocks, or swarms.

The core of the system implements the Boids algorithm formulated by Craig Reynolds in 1986, a model of emergent behavior based on three rules that each agent applies autonomously. The first is separation, which steers each entity away from collisions with its neighbors. The second is alignment, which orients each boid toward the average heading of its group. The third is cohesion, which attracts each entity toward the center of its group. To these three rules, the prototype adds two proprietary variables: an individual phase field that introduces organic variations in movement through per-agent sinusoidal functions, and an energy level that modulates each entity's intensity over time. The system is built in TypeScript with React and rendered on canvas, allowing the simulation of up to five thousand simultaneous agents at full frame rate. It also includes camera-based hand tracking, so that physical gestures such as opening the palm or pinching repel or attract the entities in real time.
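The three rules plus the two added variables can be sketched as a single per-boid steering step. This is a minimal illustration, not LIFESWARM's actual implementation: the vector helpers, radii, and weights are all assumptions chosen for readability.

```typescript
// Illustrative sketch of the Boids rules plus phase and energy variables.
// All names, radii, and weights are hypothetical.

interface Vec { x: number; y: number }

const add = (a: Vec, b: Vec): Vec => ({ x: a.x + b.x, y: a.y + b.y });
const sub = (a: Vec, b: Vec): Vec => ({ x: a.x - b.x, y: a.y - b.y });
const scale = (a: Vec, s: number): Vec => ({ x: a.x * s, y: a.y * s });
const len = (a: Vec): number => Math.hypot(a.x, a.y);

interface Boid {
  pos: Vec;
  vel: Vec;
  phase: number;   // individual phase offset for organic motion
  energy: number;  // modulates this boid's intensity over time
}

const NEIGHBOR_RADIUS = 50;   // assumed perception range
const SEPARATION_RADIUS = 20; // assumed personal-space range

// Compute one steering force for a single boid from its neighbors at time t.
function steer(boid: Boid, flock: Boid[], t: number): Vec {
  let sep: Vec = { x: 0, y: 0 };
  let avgVel: Vec = { x: 0, y: 0 };
  let center: Vec = { x: 0, y: 0 };
  let n = 0;

  for (const other of flock) {
    if (other === boid) continue;
    const d = len(sub(boid.pos, other.pos));
    if (d > NEIGHBOR_RADIUS) continue;
    n++;
    // 1. Separation: push away from very close neighbors, stronger when closer.
    if (d < SEPARATION_RADIUS && d > 0) {
      sep = add(sep, scale(sub(boid.pos, other.pos), 1 / d));
    }
    // Accumulate for alignment and cohesion.
    avgVel = add(avgVel, other.vel);
    center = add(center, other.pos);
  }

  let force = scale(sep, 1.5); // separation weight (assumed)
  if (n > 0) {
    // 2. Alignment: match the average heading of the local group.
    force = add(force, scale(sub(scale(avgVel, 1 / n), boid.vel), 0.05));
    // 3. Cohesion: drift toward the local center of mass.
    force = add(force, scale(sub(scale(center, 1 / n), boid.pos), 0.01));
  }

  // Per-boid sinusoidal wobble driven by its phase, scaled by its energy.
  const wobble = 0.3 * boid.energy;
  force = add(force, {
    x: Math.cos(t + boid.phase) * wobble,
    y: Math.sin(t + boid.phase) * wobble,
  });
  return force;
}
```

Each frame, the force would be added to the boid's velocity and the position integrated forward, with the phase term ensuring that even a fully aligned flock never moves in perfectly rigid lockstep.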

The project’s sound dimension is driven by real-time audio synthesis that responds directly to the collective state of the simulation. The audio engine uses the Web Audio API to generate procedural music through several selectable synthesis modes: granular, FM, additive, filtered noise, acid, pad, and spectral. Rhythmic patterns are organized in styles ranging from nearly silent granular textures to polyrhythms with psychedelic cumbia sensibilities. The visual density of the boids affects rhythmic intensity; their degree of alignment triggers melodic patterns; and the position of the cursor or hand controls audio filters and pitch. This relationship between movement and sound turns each state of the simulation into a unique visual score.
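The couplings described above (density to rhythm, alignment to melody, cursor to filter and pitch) can be sketched as a pure mapping from flock state to synthesis parameters. The interfaces, ranges, and scale are hypothetical; in the browser, the resulting values would feed Web Audio nodes, as indicated in the closing comment.

```typescript
// Hypothetical mapping from collective flock state to synthesis parameters.
// Parameter names and ranges are assumptions, not LIFESWARM's actual values.

interface FlockState {
  density: number;    // 0..1, how tightly packed the boids are
  alignment: number;  // 0..1, how parallel their velocities are
  cursorX: number;    // 0..1, normalized pointer/hand position
  cursorY: number;    // 0..1
}

interface AudioParams {
  rhythmIntensity: number; // rhythmic events per second
  scaleDegree: number;     // index into a melodic scale
  filterCutoffHz: number;  // lowpass filter cutoff
  pitchHz: number;         // base oscillator frequency
}

const SCALE = [0, 2, 3, 5, 7, 8, 10]; // natural minor, illustrative

function mapFlockToAudio(s: FlockState): AudioParams {
  // Density drives rhythmic intensity.
  const rhythmIntensity = 1 + s.density * 15;
  // Alignment selects a melodic step.
  const scaleDegree = Math.min(
    SCALE.length - 1,
    Math.floor(s.alignment * SCALE.length),
  );
  // Cursor/hand position controls the filter and pitch; exponential curves
  // match how pitch and cutoff are perceived.
  const filterCutoffHz = 200 * Math.pow(2, s.cursorY * 5); // 200 Hz .. 6.4 kHz
  const semitone = SCALE[scaleDegree] + Math.round(s.cursorX * 12);
  const pitchHz = 110 * Math.pow(2, semitone / 12); // from A2
  return { rhythmIntensity, scaleDegree, filterCutoffHz, pitchHz };
}

// In the browser these values would be smoothed into Web Audio nodes, e.g.:
//   osc.frequency.setTargetAtTime(p.pitchHz, ctx.currentTime, 0.05);
//   filter.frequency.setTargetAtTime(p.filterCutoffHz, ctx.currentTime, 0.05);
```

Keeping the mapping pure, separate from the audio graph itself, makes it testable without an `AudioContext` and lets the same flock state drive any of the selectable synthesis modes.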

From a critical standpoint, LIFESWARM interrogates the limits between the organic and the constructed in the context of digital cinema. The use of differentiated color palettes for groups of boids that attract internally and repel each other raises questions about the visual representation of social dynamics, segregation, and collective behavior.
The experiment also operates as a research tool on audience perception, studying how emergent visual patterns affect emotional response when accompanied by music generated by the system itself.
Application: https://lifeswarm-boids-simulation-231178493219.us-west1.run.app/