Following on from last week's story on Paul Trillo’s short, Thank You For Not Answering, we probe deeper into the fascinating and groundbreaking development of the film. 

A mournful tale about a man's fading memories of lost love, the film was born of creative decisions and some "surreal side effects" of the process that the director says he cannot take credit for.

Paul Trillo – Thank You For Not Answering


The press release states you wrote and ‘directed’ the film. Can you expand on those quote marks – how much is you and how much is Runway?

I was being a bit cheeky but, in a way, I do have an odd relationship with taking full ownership of these AI outputs. I'm of course dictating the script, and using input images to guide the characters, the setting, the mood, lighting, colours, etc. I generated over 400 clips with Runway's Gen-2 and cut them in a very deliberate way.


However, there are a lot of decisions being made for me like the camera movement, the blocking, actions, lens choice, and other completely surreal side effects that I can't take full credit for. 

I am of course curating these outputs but it's a totally different form of decision-making than what I would typically go through.

The film has a very unified aesthetic – a surreal, vintage, Edward Hopper-esque, Paris, Texas feel. Did Runway curate all those images?

Edward Hopper and Wim Wenders were certainly influences for me, as well as Wong Kar-wai and David Lynch. In the prompting, I wrote things like "mood lighting", "surreal", "retro", "fuji film", "film grain", "70s", etc. to capture that type of mood.


I also created a bunch of still images with Stable Diffusion Automatic1111 running locally on my PC to get some cinematic source material. 

I broke the script up into a series of moments and used the still images as a form of storyboards. Those images are then fed into Runway's Gen-2 and they get reinterpreted and spit back out in a similar but different form.
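For the technically curious, a minimal sketch of that stills step might look like the following, assuming Automatic1111's Stable Diffusion web UI is running locally with its --api flag enabled. The prompt, dimensions, and other parameters here are illustrative stand-ins, not Trillo's actual settings.

```python
import base64
import requests

# Automatic1111's Stable Diffusion web UI exposes a local REST API
# when launched with the --api flag (default address shown below).
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

# Illustrative prompt built from the kinds of style keywords Trillo
# describes ("mood lighting", "retro", "fuji film", "film grain", "70s").
payload = {
    "prompt": ("man alone in a dim apartment hallway, mood lighting, "
               "surreal, retro, fuji film, film grain, 70s"),
    "negative_prompt": "blurry, low quality",
    "steps": 30,
    "width": 768,
    "height": 432,   # roughly cinematic 16:9
    "cfg_scale": 7,
}

response = requests.post(API_URL, json=payload, timeout=300)
response.raise_for_status()

# The API returns generated images as base64-encoded strings.
for i, image_b64 in enumerate(response.json()["images"]):
    with open(f"storyboard_{i:03d}.png", "wb") as f:
        f.write(base64.b64decode(image_b64))

# These stills would then be fed into Runway's Gen-2 interface as
# image prompts for the image-to-video step Trillo describes.
```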

And also how did all that water get in there? Did you suggest a liquid/underwater theme to match the floaty dreamlike musings?

I don't have a great memory so when I try to recall a memory it feels like I'm swimming through the depths of the ocean. 

And when you recall a memory you are overriding your last recollection of that memory, so you are constantly eroding the events in your life. The mind being flooded felt like a good visual metaphor here. 


The flood water increases as the memories get cloudier and further from reality. I was also curious how Gen-2 would handle water, because it's something that is notoriously difficult to shoot and to simulate with VFX. I was shocked at how well it could represent water; even though the physics are unrealistic, it still evoked the right feeling.

You state that the idea for the film came about prior to the rise of AI tools. What was the original premise and how did it change with the AI input?

In 2020, I started writing a loose script about a man leaving a voicemail for someone from his past, a partner or lover or almost lover, who lived in the same apartment building as him. I always liked some of the lines I had written and came back and expanded on them. 


Then, as I was generating imagery, it would spark a new idea and I would rewrite the voiceover. So there was certainly a back-and-forth process between the writing and the creation, which is one of the most exciting parts of using AI.

The voiceover is very lyrical and dramatic. Can you tell us more about how you generated the audio using the AI speech software Eleven Labs?

We've been conditioned for decades to expect a monotone computer-generated voice, so it is shocking to hear anything that resembles human emotion come out of these new voice-generation tools. I've used other text-to-voice tools in my previous experiments, but I wasn't sure if these tools were going to work for this piece, since the script called for something with remorse, regret, nostalgia, some sort of hidden backstory.


I tried a few different voice samples until landing on Harry Dean Stanton's voice sampled from Paris, Texas which had the right tone and emotion imbued into it. 

I generated about 60 clips and edited the 'performance' sentence by sentence and sometimes word by word. One wrong intonation can throw the whole thing off. I still believe there is infinitely more value in working with real actors and having them contribute artistically to the performance. These tools should be used more as proof of concept rather than the final product.
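As a rough illustration of that sentence-by-sentence approach, here is a minimal sketch against the ElevenLabs text-to-speech REST API. The voice ID and API key are placeholders, the sample lines are invented rather than taken from the film's script, and the voice settings are assumptions, not the values used on the piece.

```python
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"   # placeholder
VOICE_ID = "your-voice-id-here"       # placeholder; not the film's actual voice

URL = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
HEADERS = {"xi-api-key": API_KEY, "Content-Type": "application/json"}

# Generating each sentence separately makes it easy to audition
# several takes and edit the 'performance' line by line,
# the way Trillo describes cutting his 60-odd clips.
sentences = [
    "Hey, it's me.",           # invented example lines
    "I know it's been a while.",
]

for i, sentence in enumerate(sentences):
    for take in range(3):  # a few takes per sentence to choose between
        body = {
            "text": sentence,
            "model_id": "eleven_monolingual_v1",
            # Lower stability tends to yield more varied, emotive reads.
            "voice_settings": {"stability": 0.3, "similarity_boost": 0.8},
        }
        audio = requests.post(URL, headers=HEADERS, json=body, timeout=120)
        audio.raise_for_status()
        with open(f"line_{i:02d}_take_{take}.mp3", "wb") as f:
            f.write(audio.content)  # response body is the raw audio
```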

Can you tell us more about the advantages of AI's aesthetic limitations in producing surreal and often uncanny content? Is this the beginning of a whole new art movement?

A lot of people are striving to recreate Hollywood blockbuster-level imagery perhaps out of a desire to feel like they can create something with production value they never had access to. However, I'm more interested in what AI offers that is different from traditional filmmaking and animation. It's a new type of aesthetic language that is a simulacrum of reality, a representation amalgamated from other representations. 


The first Gen-2 piece I created touched on this idea. The outputs can be surreal, uncanny, and sometimes downright horrific. Rather than trying to ignore these flaws, I find it interesting to embrace them and build concepts around those qualities. We're already seeing an influx of AI content that makes use of these synthetic side effects; 'post-photography' is just one of many names for it.

Since we're no longer bound to reality and the only limitation is our own imagination, it makes sense that this is the direction things will go.

Can you expand on your comment, “Gen-2 is the closest to a snapshot of a dream that I have ever seen”?

There is an intrinsic dream-like quality to what AI can conjure as if it's remembering an image but forgetting the finer details of life. Dreams are our mind absorbing information and regurgitating it in a seemingly random structure.


People, places and objects shift and change organically and within a flash. AI functions on a sort of dream logic and gives us something that we could never capture with a camera. It's nonsense but our brain searches for meaning within the nonsense. 

Can you tell us about some of your other adventures in technology?

I have been incorporating technology in my work since nearly the beginning. I studied experimental film in college, which probably plagued me with constantly wanting to play with new things. I also did visual effects and motion graphics work in college to pay the bills, so I liked to nerd out on the technical stuff.


In Living Moments, I built a completely custom mobile bullet time rig with a custom app to trigger 50 cell phone cameras at once. We wheeled around New York and captured street photography in a completely new fashion. I made a short film entirely shot on an iPhone from the iPhone POV, following its "birth" to its "death" and reincarnation. 
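Trillo doesn't detail how that trigger app worked, but one plausible scheme for firing 50 phones at once is a scheduled UDP broadcast: a controller announces a capture timestamp slightly in the future, and each phone fires at that instant rather than on packet arrival, absorbing network jitter. A hypothetical controller-side sketch, with the port and message format invented for illustration:

```python
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 50555)  # hypothetical port

# Schedule the capture slightly in the future so every phone,
# regardless of packet-arrival jitter, fires at the same wall-clock
# instant (assumes the phones' clocks are synchronised, e.g. via NTP).
capture_at = time.time() + 0.5

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

message = json.dumps({"cmd": "capture", "at": capture_at}).encode()
for _ in range(5):      # resend to guard against dropped datagrams
    sock.sendto(message, BROADCAST_ADDR)
    time.sleep(0.02)
```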

I also made the first 10-minute single-take short film shot entirely with a drone, which was incredibly challenging but exhilarating, as well as other drone experiments. I've always looked at what new creative possibilities, what new types of concepts, can be built around emerging technology.

When I get my hands on something new, I ask myself what can I do now that I could never have done before?
