Projection Mapping Case Study - MAPP Festival
Interference Waves is a one-minute experimental audiovisual animation created for the Minute MAPP Montreal projection mapping showcase, part of the annual MAPP festival in Montreal, screened on 26 September 2024.
You can watch the full render at 60fps along with recordings from the night here:
I am honoured to have been shortlisted and grateful to have my work shown among so many other talented artists.
Since this was a personal project with no client, I treated it as a challenge to get outside my comfort zone and use real-time/experimental tools, with TOOLL3 handling all the animation and rendering. I used Houdini to create the 3D template and assets/data, and the music is an extract from a longer performance I recorded earlier this year using VCV Rack.
You can check out the rest of the entries and vote (by leaving a heart/thumbs-up emoji) via this Facebook link until 11 November: https://bit.ly/vote-minute-mapp-2024
Project Breakdown
Ideation and Storyboarding
The first step was to figure out the music. I knew the process would be much more convoluted if I came up with a visual idea first and had to create music for it afterwards, and I already had so much personal music just lying around. I figured that if I picked something I had already made, it would be much more straightforward to work out the visual elements and have fun making it all fit together. I needed something exactly one minute long, and I remembered a track I had made earlier in the year for Weekly Beats.
The recording was about five minutes long and had a lot going on, but I managed to pick my favourite part, which worked well enough in a one-minute window and had events I could see fitting together visually.
I wanted the whole experience to be a personal journey of technical exploration. I had been messing about with T3 recently, and there was one setup I knew for sure I could visually remix for one of the sections. Having that part basically sorted meant I could spend more time on the 3D template and the final elements, which I hadn't planned out yet.
I knew I wanted other feedback systems and effects, and I was thinking about ways to get a sense of depth from such a flat surface. But I also thought it could be good to start with a more symbolic 2D style and then step up to higher levels of depth. I didn't create a storyboard, but I wasn't completely freestyling either: I knew the piece would split into roughly four sections with distinct styles.
Houdini - Mapping and Animation Data Preparation
The template and surface for this project are not as complicated as what I'm used to working on, so I decided to take that as a challenge and model the 3D template entirely in Houdini. It wasn't the prettiest node graph, but it did let me embed and format the geometry in a way that made it easier to explore and break up for different effects later in the process.
I tried to split up the geometry in a semantic way that would make sense for later segmentation and manipulation: avoiding triangles, keeping everything mainly aligned, and creating groups for the main sections.
Next I needed to figure out my pipeline for the lines that highlight and animate along the edge features of the building. I generated some lines manually by plotting paths along the geometry, as well as generating different abstract branching paths and general horizontal/vertical paths.
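The breakdown doesn't show how the paths get parameterised for animation, but here's a minimal sketch of one way to do it in a Houdini Python SOP, assuming each generated path is its own polyline primitive (the attribute name `curveu` is my own choice): a normalized 0-1 value stored along each curve gives the real-time side something to sweep a highlight along.

```python
# Python SOP: store a normalized 0-1 parameter along each polyline
# so an effect can sweep a highlight from one end to the other.
# Assumes each path is a separate polyline primitive.
node = hou.pwd()
geo = node.geometry()

geo.addAttrib(hou.attribType.Vertex, "curveu", 0.0)

for prim in geo.prims():
    verts = prim.vertices()
    last = max(len(verts) - 1, 1)
    for i, vert in enumerate(verts):
        vert.setAttribValue("curveu", i / last)
```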
Next I created activation maps for the different effects I would trigger and use to drive systems in TOOLL3. The idea is to encode float values, such as gradients or normalized element ID maps, either into an image that conforms to the template or into the UV map of the geometry for export.
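The encoding itself is simple. Here's a hedged sketch in a Houdini Python SOP, assuming the chunks are already tagged with an integer `piece` attribute (e.g. from a Connectivity SOP; the output name `id_norm` is my own): each piece ID is normalized to the 0-1 range and stored as a float that can then be rendered out as greyscale.

```python
# Python SOP: normalize integer piece IDs to a 0-1 float per primitive.
# Assumes a "piece" int attribute already exists (e.g. from a Connectivity SOP).
node = hou.pwd()
geo = node.geometry()

pieces = [prim.attribValue("piece") for prim in geo.prims()]
max_piece = max(pieces) if pieces else 1

geo.addAttrib(hou.attribType.Prim, "id_norm", 0.0)
for prim in geo.prims():
    prim.setAttribValue("id_norm", prim.attribValue("piece") / max(max_piece, 1))
```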
Thanks to the new Copernicus context in Houdini 20.5, I was able to easily grab any geometry and render any custom attribute as an image that I can import into TOOLL3. This would have been far more convoluted without Copernicus.
For example, below is a normalized ID map of all the chunks of the template geometry, saved as a greyscale image:
I knew I also wanted to slide from left to right along the entire surface, highlighting different elements while the trippy feedback system ran, so I built a custom mix of nodes in Copernicus combining the element IDs with Worley noise and a posterized gradient. I also had a preview setup in Copernicus to check how the animation could look before exporting the map.
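I can't reproduce the exact Copernicus graph here, but as a rough NumPy stand-in for the idea (the weights, band count, and cell count are my own guesses, not the actual setup): a posterized left-to-right gradient mixed with F1 Worley noise, which a sliding threshold can then sweep across to light up different elements.

```python
import numpy as np

rng = np.random.default_rng(42)
w, h, cells = 512, 256, 16

# F1 Worley noise: distance to the nearest random feature point
pts = rng.uniform(0, 1, (cells, 2)) * [w, h]
ys, xs = np.mgrid[0:h, 0:w]
d = np.hypot(xs[..., None] - pts[:, 0], ys[..., None] - pts[:, 1]).min(axis=-1)
worley = d / d.max()

# posterized left-to-right gradient (6 discrete bands)
grad = xs / (w - 1)
poster = np.floor(grad * 6).clip(0, 5) / 5.0

# combined activation map; sliding a threshold across this value
# highlights different elements as it moves over the surface
activation = np.clip(0.6 * poster + 0.4 * worley, 0.0, 1.0)
```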
For the final shot, I knew I wanted a big pair of eyes staring back at us through the space. There was a specific model I had in mind for this: bigman02, which comes with the Cinema 4D content library. I've used it for all my personal experiments and learning adventures for over 20 years; we've been through everything together, and it only felt fitting to bring him along for this ride.
The data for the head was saved out in separate chunks: face primitives, small tubes along the edges, primitives copied to vertex points, and two platonic solids for the irises.
Each geometry chunk has its unique ID (normalized to the 0-1 range) stored in U and its normalized distance to the iris stored in V. This data can be used directly with texture animation effects in TOOLL3.
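A hedged sketch of that encoding in a Houdini Python SOP (the `chunk` attribute name and the iris position are assumptions on my part): U holds the normalized chunk ID and V holds the normalized distance to a chosen iris point.

```python
# Python SOP: pack arbitrary data into UVs.
# U = normalized chunk ID, V = normalized distance to the iris.
# Assumes a "chunk" int attribute exists; iris_pos is a placeholder.
node = hou.pwd()
geo = node.geometry()

iris_pos = hou.Vector3(0.0, 1.6, 0.4)  # placeholder iris position

chunks = [prim.attribValue("chunk") for prim in geo.prims()]
max_chunk = max(chunks) or 1
max_dist = max(pt.position().distanceTo(iris_pos) for pt in geo.points()) or 1.0

geo.addAttrib(hou.attribType.Vertex, "uv", (0.0, 0.0, 0.0))
for prim in geo.prims():
    u = prim.attribValue("chunk") / max_chunk
    for vert in prim.vertices():
        v = vert.point().position().distanceTo(iris_pos) / max_dist
        vert.setAttribValue("uv", (u, v, 0.0))
```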
I exported all the geometry and image maps I needed, sometimes going back and forth between T3 and Houdini for tweaks; it was very easy and straightforward to swap out and update element references.
TOOLL3 - Animation, Real Time FX and Rendering
T3 has an excellent spectrum analysis and band isolation node workflow for audio-reactive control. However, I wanted to make something really tight: automatic frequency analysis is great for live gigs and DJ sets, but since this was going to be a passive pre-rendered output on delivery, I went with completely manual keyframing of all the events to get more creative control and more custom, personally expressive forms of automation.
In the past, when I've worked on rock shows, manual keyframing has almost always been the most important step, and how you structure that animation and link it up to different elements is a strategic and interesting puzzle to solve.
In T3 you can keyframe any parameter you want, but it doesn't make sense to keyframe everything directly, since the main automation needs to be referenced in multiple places across multiple effects. Instead, I created a few main float value tracks to reference across the project, laying down keyframes in the 0-1 range for all the main events: the energy levels in the track, specific bass hits, and other notable moments.
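To illustrate the structure in toy form (this is my own sketch, not T3 code): one master 0-1 track is keyframed once, and every effect samples the same track through its own remap.

```python
import numpy as np

# one master 0-1 automation track, keyframed as (time, value) pairs
energy_keys = [(0.0, 0.0), (8.5, 1.0), (12.0, 0.3), (60.0, 0.0)]

def sample(keys, t):
    # linear interpolation between keyframes
    times, values = zip(*keys)
    return float(np.interp(t, times, values))

# each effect references the same track with its own remap
t = 8.5
bloom_amount = sample(energy_keys, t) * 2.5          # scaled up
feedback_decay = 1.0 - 0.8 * sample(energy_keys, t)  # inverted
```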
I also considered exporting the CV control data driving the synth parameters and automation in the original VCV Rack modular patch, something I've done in past projects. But the track is generative and was recorded from a performance, so a new recording wouldn't have been the same, and I wanted to go with this recording specifically.
I found having the spectrogram display in the timeline really helpful too.
I encapsulated certain effects with breakouts for connections and parameter control from the upper network level.
To switch between scenes/node groups, I animate the index of a pick texture node. To crossfade, I use the BlendImages node; this was the main method of progressing between shots.
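In pseudocode terms (my own minimal sketch, not the actual node implementations), the two mechanisms look like this:

```python
import numpy as np

def pick(scenes, index):
    # like animating the index of a pick texture node:
    # an integer keyframe chooses which scene's output passes through
    return scenes[int(index) % len(scenes)]

def blend_images(a, b, mix):
    # BlendImages-style linear crossfade, mix in [0, 1]
    return (1.0 - mix) * a + mix * b

# example: fade a quarter of the way from scene 0 into scene 1
scenes = [np.zeros((4, 4)), np.ones((4, 4))]
out = blend_images(pick(scenes, 0), pick(scenes, 1), 0.25)
```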
Below you can see the process of cycling through the encoded attributes in the UV maps I created in Houdini. As I mentioned, it's not a traditional UV map; it's more about using the UVs as a container for arbitrary data. I could just as easily have used vertex maps or even normal attributes (and have in the past); with Houdini there's no limit. I think Maxime Faure was the first person to show me this hack, back when we were doing some crazy stuff in Unreal Engine for an AT&T project with Moment Factory. The actual animation technique, offsetting a profile along a gradient map, is the bread and butter of most mapping projects.
The gradient offset is advanced by a counter each time a kick event occurs, and the value is remapped and used either raw or smoothed with the Damp node.
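Put together, the technique looks roughly like this sketch (under my own assumption that Damp behaves like simple exponential smoothing; all names and parameters here are illustrative):

```python
import numpy as np

def profile(x, width=0.15):
    # triangular highlight profile centred at 0; anything further
    # than `width` along the gradient stays dark
    return np.clip(1.0 - np.abs(x) / width, 0.0, 1.0)

class KickDrivenOffset:
    """Counter stepped on each kick, eased toward its target
    (assumption: Damp acts like exponential smoothing)."""
    def __init__(self, step=0.2, smoothing=8.0):
        self.step, self.smoothing = step, smoothing
        self.target = self.value = 0.0

    def kick(self):
        self.target += self.step

    def update(self, dt):
        self.value += (self.target - self.value) * min(1.0, self.smoothing * dt)
        return self.value

# g is the per-element gradient baked into the map in Houdini
g = np.linspace(0.0, 1.0, 200)
offset = KickDrivenOffset()
offset.kick()  # a kick event lands
mask = profile(g - (offset.update(1 / 60) % 1.0))
```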
The final output was rendered straight to MP4 out of T3 at a high bitrate along with audio, although I later remuxed the audio to make sure it was my uncompressed source, since audio encoding is still somewhat experimental in T3.
Conclusion
Overall, the main setup wasn't too complicated or time-consuming; I spent most of my time tweaking and experimenting with different effects. The goal was as much to have fun as it was to make something functional with reuse value.
I loved working with the timeline in T3. It's limited but very responsive and functional, with all the hotkeys I needed. I didn't expect it to be so straightforward considering T3 still has a huge feature roadmap ahead of it; getting real-time feedback and scrubbing through the timeline effortlessly made the whole workflow painless.
I already have some new ideas for my next project and would like to try some more ad-hoc real-time work. I'm not planning to limit myself completely to T3 either; I do plan to spend time messing about with TouchDesigner and Notch. I just love how open and unrestricted T3 feels while also having a great interface for creative expression.
My favourite part of the creative process is coming up with ideas in Houdini to bring into a real-time context; it gives a unique edge over staying exclusively inside the real-time software, where you can end up biased towards familiar styles.