FACSIMILE
- Write a written reflection of 1000 words (minimum) of analysis and please answer: What were you trying to achieve in terms of critically communicating about synthetic media and explain the method in which the editing process was used to attempt this?
In his 2023 paper, Garon highlights the transformative potential of synthetic media and how generative AI applications can reshape the way we interact with each other. This transformation is evident in various real-life examples: I have witnessed firsthand how AI-generated voices and avatars have become a driving force on social media platforms like TikTok and YouTube, attracting attention and leading to the emergence of AI-generated content as a popular niche.
We are currently experiencing a significant global shift akin to historical moments like the introduction of sound in cinema or television, the establishment of YouTube, or the enforcement of music copyright laws that impacted skit comedy. Just like those milestones, we find ourselves at the precipice of a new era, where AI is reshaping our interactions and experiences. It is both scary and exciting at the same time. As Cat, our studio leader, puts it, and I agree: we just have to find a way to live with it.
In my title sequence, I have constructed a world where AI has taken over the medium, leading the protagonist to fear the loss of control over his life. He is anxious about being cloned, erased from existence, or replaced, as AI has the capability to create anything, including human-like entities. However, the sequence deliberately leaves the viewer to interpret the protagonist’s experiences, playing with a dim, dystopian, and chaotic aesthetic that blurs the line between reality and a possible psychotic episode.
Through this edit, I intend to convey the importance of perspective in understanding AI’s potential impact. AI is a complex and rapidly evolving topic, and its implications depend on how we perceive and handle it—be it positively or negatively. While I have never had the intention of presenting a negative depiction of AI, the somber and unsettling aesthetic of the sequence aims to highlight a hypothetical world where the dangers of AI are imminent. Additionally, the sequence intends to provoke thought and reflection on the multifaceted nature of AI, inspiring viewers to consider the possibilities, dangers, and responsibilities that come with embracing this powerful technology in our evolving world.
It is crucial to acknowledge that synthetic media is not inherently harmful, but its increasing accessibility and sophistication amplify both its potential risks and benefits, as pointed out by PAI (The Partnership on AI) in their 2023 framework for responsible use. The responsible, ethical, and moral practices surrounding AI implementation can significantly influence its impact on society.
In addition to conveying a narrative, I also incorporated AI applications in the post-production process, particularly in the creation of an infinite zoom-in montage using the generative AI program Stable Diffusion. This tool allowed me to bring my vision to life by providing text prompts to the program. The sequence starts with a zoom-in on a robot head, symbolically diving into its brain, which then transitions to our world, showcasing popular landmarks such as the Opera House and the Pyramids.
The montage further progresses into an industrialised city, where robots work in factories, highlighting automation but lacking the heart and soul that comes with human care and attention. This absence of human elements results in a post-apocalyptic setting. I was surprised to see that Stable Diffusion captured the essence of my imagination with striking precision. The high-quality images produced by the AI were beyond my initial expectations.
Recalling the time when I first explored text-to-image applications, I remember encountering complex code, programs that were difficult to navigate, and low-resolution outputs. Witnessing the significant advancements made in such a short period amazed me. The progress of AI in generating realistic and high-quality images has truly been a wild ride.
The incorporation of AI in post-production has proven to be a game-changer, enabling me to bring forth a visually compelling and thought-provoking narrative that pushes the boundaries of creativity and storytelling, without the constraints of "copyrights, distribution rights, infringement claims, or royalties" (Garon 2023). It further underscores the rapid progress of AI technology and its potential to reshape the way we create and interact with media, specifically its impact on the film industry.
- How did your preproduction/production/post production process go and what would you do differently/improve next time?
During the process of conceptualising the title sequence, I was inspired by the topic of AI that we were taught in class. I came up with the initial premise of a protagonist facing the threat of losing his job and way of life due to the rise of AI. To bring this idea to life, I noted down my thoughts and brainstormed ways to convey the feeling of being replaced in the title sequence. Eventually, I decided to explore cloning visual effects and transformative object effects using Photoshop's new generative AI feature.
However, I encountered some challenges along the way. Finding suitable 3D models that were free proved to be difficult, so I resorted to using Polycam for the job. The process of photo scanning my talent became finicky and time-consuming, leading to pixelation in certain parts of the model. In an attempt to conceal this flaw, I employed tint effects and increased contrast, which not only masked the pixelation but also enhanced the overall mood of the edit, giving it a monotone and somber vibe.
However, after sharing the rough cut with the class, the pixelated areas that I thought I had concealed were noticed by others, prompting me to reevaluate my approach. I made adjustments by moving elements around and increasing the scale to keep the pixelated regions out of frame. This experience taught me the importance of being meticulous in post-production and acknowledging that even minor flaws can be noticeable to the audience.
For the other idea involving Photoshop, I struggled to find the right direction for my talent's actions. My initial concept involved the protagonist reaching for an object that would transform over time, but it felt a bit cliché and lacked the uniqueness I was aiming for. After some contemplation, I devised a different approach: sourcing a library of images, downloading circle-shaped ones, and creating a small clip that showcased the object's transformation. Although this method proved effective, there was a hint of disappointment at not fully implementing generative AI in Photoshop as I had originally intended. Time constraints and technical challenges made it difficult to explore that avenue to its full potential. Nevertheless, I remain eager to try it out in the future, as I have witnessed some truly remarkable effects achieved with generative AI in the visual arts.
- Your reflection should also include commentary on what you thought the most and least successful parts of your Film Opening Title Sequence were, and why so? (This reflection should include the questions in dot points with answers in sentences/paragraphs).
One of the most successful aspects of this project was my improved time management skills. I am genuinely proud of myself for completing the final video a day before the deadline, and even having an extra day to work on this reflection. This was my first assignment in this studio where I didn’t have to ask for an extension, and that accomplishment means a lot to me. I consciously avoided being overly ambitious and stubborn with my ideas, and I learned to accept that some tasks, like learning a whole new process of Photoshop generative AI effects in a short amount of time, might be too challenging to take on. Instead, I made pragmatic decisions and found workarounds that still resulted in decent-looking effects. By doing so, I reduced unnecessary stress and pressure, allowing me to work more efficiently and effectively.
On the other hand, it became evident that certain parts were filmed, others were 3D scanned, and some used text-to-image generation. The contrast between the AI-generated parts and the non-AI parts was obvious, and they didn't blend well together. This disjointedness was noticeable and affected the overall flow and consistency of the project. I realise the importance of developing a more unified approach to using AI in filmmaking. It's not just about utilising the technology but also about ensuring that it enhances the overall storytelling, aesthetic, and especially immersion. While I was able to integrate AI-generated elements into the video, I realised that I have a long way to go to achieve a seamless blend of the various techniques and visual styles, and to deliver a more cohesive and unified final product.
References:
Garon J (27-29 April 2023) ‘A Practical Introduction to Generative AI, Synthetic Media, and the Messages Found in the Latest Medium’ [conference presentation], American Bar Association Business Law Spring Meeting, Seattle, accessed 4 August 2023. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4388437
PAI (Partnership on AI) (2023) PAI’s Responsible Practices for Synthetic Media, PAI website, accessed 4 August 2023. https://syntheticmedia.partnershiponai.org/#read_the_framework