Interview with Tim Rolls
GH: We’re so curious to find out: who are you, where are you from, and what do you do?
Tim: Hi, I’m Tim Rolls from Vancouver in Canada. I’m a hybrid designer who speaks the languages of design and tech, and moves fluidly between them.
Currently focused on taming wild technology in XR, Machine Learning, and Physical Computing to build real-time, interactive, generative and immersive experiences, on and off screen.
By day I direct development of the Interaction Design Lab at Emily Carr University, building a space where design meets creative code, electronic prototyping, XR and digital fabrication, where the interplay between those disciplines opens new avenues for exploration.
In a field that changes daily, I stay committed to learning something new every day: I’ve been making and posting creative code experiments daily for all of 2020.
"In general, the thing that draws me to generative design is that it’s using computers for something they are uniquely suited for."
GH: What is the underlying philosophy behind your work?
Tim: Right now, my focus is just to learn and try as many things as possible. Coming from a visual art and design background, describing behaviors or shapes in numbers doesn’t always come naturally to me. Of course, the more I learn, the more I realize there is to learn.
In general, the thing that draws me to generative design is that it’s using computers for something they are uniquely suited for. Very fast and complex number crunching, storing large amounts of information and recalling it in a perfect state, connecting to people and data all over the world, all things that humans can’t do naturally.
We spend a lot of time using digital creative tools that are essentially recreations of analog processes. Take Photoshop for example. Most of its functions existed in dark rooms even predating personal computers, but the software just made it easier/faster. Generative design, on the other hand, puts some control in the hands of the software, and enables us to create things with speed and complexity unique to the medium. I also really enjoy the process of discovery that comes with that. By relinquishing some control, you can spend more time exploring these systems to tease out emergent forms and behavior. That process feels more like a collaboration than working with a simple tool, and I’m constantly surprised by unexpected results.
GH: Can you pick an artwork, dissect it for us and describe your workflow?
Tim: This piece came from experimentation with a curl noise compute shader setup I learned from @threedashes___, and is a great example of how the feedback loop in working with generative systems can lead to unexpected and surprising results. “Happy accidents”.
Curl noise is essentially a technique that takes a noise field as input (a scalar potential in 2D, a vector potential in 3D) and outputs a divergence-free vector field, one with no sources or sinks, so particles advected through it swirl around without bunching up or vanishing. Check out Peter Werner’s blog for a more complete description as well as 2D and 3D implementations. In this case I was really mesmerized by the natural, flocking-like behavior this system produced when applied to particle motion, so I spent a week iterating on it each day: adding new parameters, real-time audio analysis to drive the animations, and eventually Leap Motion gesture controls to physically interact with it.
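For intuition, here is a minimal 2D sketch of the idea in plain Python. This is not Tim’s compute shader; the `psi` potential below is just a hand-rolled sum of sines standing in for a real noise function:

```python
import math

def psi(x, y):
    # Smooth scalar potential; a stand-in for a 2D noise function.
    return math.sin(1.3 * x) * math.cos(0.9 * y) + 0.5 * math.sin(2.1 * x + 0.7 * y)

def curl_velocity(x, y, eps=1e-4):
    # 2D curl noise: v = (d_psi/dy, -d_psi/dx) is divergence-free by construction.
    dpsi_dx = (psi(x + eps, y) - psi(x - eps, y)) / (2 * eps)
    dpsi_dy = (psi(x, y + eps) - psi(x, y - eps)) / (2 * eps)
    return dpsi_dy, -dpsi_dx

def divergence(x, y, eps=1e-3):
    # Numerical sanity check: d(vx)/dx + d(vy)/dy should be ~0 everywhere.
    vx1, _ = curl_velocity(x + eps, y)
    vx0, _ = curl_velocity(x - eps, y)
    _, vy1 = curl_velocity(x, y + eps)
    _, vy0 = curl_velocity(x, y - eps)
    return (vx1 - vx0) / (2 * eps) + (vy1 - vy0) / (2 * eps)

# Advect one particle through the field with simple Euler steps.
px, py = 0.5, -0.2
for _ in range(100):
    vx, vy = curl_velocity(px, py)
    px, py = px + 0.01 * vx, py + 0.01 * vy
```

In 3D the same construction uses the curl of a vector potential, which is what a compute-shader version evaluates per particle, per frame.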
Based on my previous day’s work, trying colors and effects, I noticed it started to feel like a school of fish. So I pushed that idea a bit. Our little abstract fish would probably breathe, so I added bubble particles emitting from the larger swarm periodically.
What does underwater lighting look like? Mostly dark, with some light from the surface. I implemented a technique for real-time volumetric light (god rays): add bloom to the brightest parts of the scene, separate that bloom pass, and apply a radial blur over a huge distance. That stretches the glow out into lines, producing a convincing approximation of light beams at a fraction of the performance cost of actual volumetric lighting.
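A rough sketch of that post chain on a grayscale NumPy image. The light position, step count, decay, and threshold here are illustrative guesses, not the actual settings:

```python
import numpy as np

def god_rays(img, light, steps=32, decay=0.92, threshold=0.8):
    # 1. "Bloom" pass: isolate only the brightest pixels.
    bright = np.where(img > threshold, img, 0.0)

    # 2. Radial blur: repeatedly sample the bright pass while scaling
    #    coordinates toward the light position, accumulating with decay.
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    accum = np.zeros_like(img)
    weight = 1.0
    for i in range(steps):
        t = i / steps
        sy = light[0] + (ys - light[0]) * (1.0 - t)
        sx = light[1] + (xs - light[1]) * (1.0 - t)
        samp = bright[sy.round().astype(int).clip(0, h - 1),
                      sx.round().astype(int).clip(0, w - 1)]
        accum += weight * samp
        weight *= decay
    accum /= accum.max() + 1e-9

    # 3. Composite the streaks back over the original scene.
    return np.clip(img + accum, 0.0, 1.0)
```

Each pixel samples along the line from itself toward the light, so bright spots smear into beams radiating away from the light source.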
From there I adjusted the overall colors and lighting into blues and greens to really push the concept of abstract fish dancing to the music. I also added some depth of field by using a depth map as a mask for a blur. The last step was HDRI lighting, which essentially wraps the scene in an image and casts light from its bright parts. This is useful for simulating reflection, because virtual 3D objects often don’t have other objects around them to visually reflect. It’s hard to overstate how much this kind of postprocessing can add to the overall look and feel.
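The depth-mask blur can be sketched like this, with a naive box blur standing in for a proper blur pass; the `focus` and `falloff` parameters are made up for illustration:

```python
import numpy as np

def box_blur(img, k=3):
    # Naive box blur via shifted copies (wraps at edges; a stand-in
    # for a real blur pass).
    out = np.zeros_like(img)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (2 * k + 1) ** 2

def depth_of_field(img, depth, focus=0.5, falloff=4.0):
    # Blend between the sharp image and a blurred copy, weighted by
    # how far each pixel's depth is from the focal plane.
    blurred = box_blur(img)
    mask = np.clip(np.abs(depth - focus) * falloff, 0.0, 1.0)
    return img * (1.0 - mask) + blurred * mask
```

Pixels at the focal depth get the sharp image, pixels far from it get the fully blurred copy, and everything in between is a linear mix.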
The audio is another layer of experimentation, because every system will behave differently based on the audio data fed into it, and how the system interprets it. Audio-driven posts are often the most time consuming for me: once I’ve got the system to a place I like visually, I’ll often spend a couple hours trying different audio sources and songs and iterating on how audio parameters are used in the system. In this case, the little turn the swarm does after a pause in the song was another happy accident, and one that I had to run with when it happened. Sometimes, even in well-designed systems with great and dynamic audio, it can be hard to get the two to play along in a way that feels right, so I’m always thrilled when everything comes together and the audio and visual components match so well.
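As a sketch of what iterating on audio parameters can mean in practice: pick a frequency band, measure its energy per frame, and shape how a visual parameter follows it. The band ranges, attack/release values, and the "swarm speed" parameter below are all invented for illustration:

```python
import numpy as np

def band_level(samples, rate, lo, hi):
    # Crude band energy: FFT magnitude averaged over [lo, hi) Hz.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / rate)
    band = spectrum[(freqs >= lo) & (freqs < hi)]
    return band.mean() / len(samples)

def smooth_param(level, prev, attack=0.5, release=0.05):
    # Fast attack, slow release: the parameter jumps up on hits
    # and decays slowly, which tends to feel musical.
    a = attack if level > prev else release
    return prev + a * (level - prev)

# Drive a hypothetical "swarm speed" from low-frequency energy.
rate = 48000
t = np.arange(1024) / rate
frame = np.sin(2 * np.pi * 80 * t)          # an 80 Hz "kick" test tone
speed = smooth_param(band_level(frame, rate, 20, 200), prev=0.0)
```

Much of the iteration time goes into choosing which bands drive which parameters and tuning those smoothing constants per song.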
This piece, as well as all my TouchDesigner-based daily experiments, is on GitHub as a learning resource: https://github.com/timrolls/TD-Everydays
GH: What’s been inspiring you lately?
Tim: Well given the current state of the world in the pandemic, I’ve actually been finding it hard to stay inspired. Even as an introvert, I draw from experiences for inspiration, so staying in my apartment most of the time isn’t feeding that fire.
When I started my daily practice, the TouchDesigner community was really key for inspiration. The community is very active, with a steady stream of artwork, tutorials and resources. Bileam Tschepe (@elekktronaut) and Simon Alexander-Adams (@polyhop) in particular make some great tutorials that are short, explore techniques that can be applied in a wide range of scenarios, and generally have a great aesthetic as well. Speaking more generally, I owe a lot to Joshua Davis (@praystation), who not only creates tons of inspiring abstract generative work, but also teaches Processing. His classes were where I started with generative design and integrating coding into my design practice.
Going back a bit farther, I’ve always drawn huge inspiration from Justin Maller (@justinmaller) and everything to do with his collective Depthcore (depthcore.com). Early in my design career, I spent a lot of time emulating their abstract 3D / photo-manipulation aesthetic. They had this way of combining complex abstract 3D rendered forms with natural environments and textures that made the work simultaneously natural and alien. I use Photoshop a lot less these days, but the juxtaposition of natural and synthetic elements in a composition is something I still find very satisfying.
GH: Tell us about your setup. Where do you create? What tools do you use?
Tim: For the last couple of years, the core of my setup has revolved around TouchDesigner. This node-based visual programming tool generates output in real time, tightening the iteration loop and integrating experimentation in a way that isn’t possible in other environments. Having to wait for a compile or a render just to see the results of what you’re doing can sometimes be enough to derail a thought process, and that’s something I struggled with creatively in more traditional tools.
TouchDesigner is also somewhat unique in that one kind of data can quickly be represented as almost any other kind. For example, a grid of colors can store the positions of particles in 3D space, which opens up possibilities like applying image processing to those positions to alter particle behavior, or using image feedback to create a smoothing effect. That opens up novel ways to approach problems that could be really difficult to solve in more traditional environments.
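A toy version of that positions-as-pixels idea in Python/NumPy. In TouchDesigner this would all be TOPs running on the GPU; the texture size, jitter, and feedback coefficient here are arbitrary:

```python
import numpy as np

# A 16x16 "texture" where each texel's RGB stores one particle's XYZ position.
rng = np.random.default_rng(0)
positions = rng.uniform(-1.0, 1.0, size=(16, 16, 3))

def feedback_smooth(current, previous, feedback=0.9):
    # Image feedback as motion smoothing: blend this frame's position
    # texture with the previous frame's, like a feedback loop on a TOP.
    return feedback * previous + (1.0 - feedback) * current

smoothed = positions.copy()
for _ in range(60):  # simulate 60 frames of jittering targets
    targets = positions + rng.normal(0.0, 0.05, size=positions.shape)
    smoothed = feedback_smooth(targets, smoothed)
```

Because the positions live in an image, any image operation (blur, feedback, displacement) becomes a particle-behavior operation for free.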
More recently, I’ve been learning Unity and Unreal Engine, exploring their strengths and weaknesses for real-time interactive work and XR. After writing shaders from scratch in TouchDesigner, I really appreciate the strides both of these engines have taken to make creating with them more accessible (VFX Graph / Shader Graph in Unity and Niagara / Shader Blueprints in Unreal).
Sometimes I do feel the need to unplug. I’m fortunate to live close to the ocean, so on stressful or long workdays I like to take a break to ride my bike, sit by the water and draw. I don’t have a goal in mind, or try to draw a specific thing, just shapes and lines that feel right. Aside from being a relaxing break, that practice often recharges me and helps me come back to the work at hand with a fresh outlook.
Tim Rolls, Vancouver, Canada