How Will AI Transform the Film & TV Industry?


<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1023671520?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="How Will AI Transform the Film &amp; TV Industry?"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>


My video explainer, ‘How Will AI Transform the Film & TV Industry?’, explores and speculates on the potential influence that generative AI could have on the entertainment industry in the future. I created it for my semester 2 studio, Decoding AI: Automating Societies, which investigated the ways that automated decision-making (ADM) and AI systems intersect with our daily lives. Since entertainment forms like film and TV are so integral to our everyday habits, I chose to communicate through my video how generative AI could potentially harm or improve these industries.

The main goals I had in mind while creating my video were to educate the audience and hold their attention. I hope that the topic I chose fascinated the audience as much as it fascinates me, and I hope that my interview with Dr Binns and the example of Fable Studios’ projects further captivated them. My intention was to inform the audience by making the video thought-provoking and engaging, cutting out unnecessary information while maintaining factual accuracy. During the editing process, I consistently ensured that the audience had something interesting to watch at all times: I did not allow a shot to run for too long without cutting to a different one. I also made sure that the visuals I displayed always related (at least somewhat) to the voiceover or interview audio.

The studio exhibition was the first chance I got to converse with my classmates about their videos after they were all completed. It was interesting to hear about everyone’s production processes, especially the differing and similar challenges they faced.

While I am very pleased with every aspect of my finished work, as it aligned closely with my initial vision for the video, I think the most successful aspect is the interview. Dr Daniel Binns’ responses were perfect, both in providing expert insight and in complementing my central ideas. He understood exactly what I was looking for and delivered articulate, detailed responses with a balanced perspective. I appreciated how he recognised and explained both the benefits and limitations of AI’s potential.

As excellent as it was, the interview contributed to the most problematic aspect of my process: the amount of time editing took. The raw interview ran around sixteen minutes, and while it was packed with brilliant content, it was far too long for the final video. I had to spend a lot of time choosing which sections to keep and then dissecting them, cutting certain sentences and filler words (um, uh, you know, etc.) to save time. Another thing that added to the extensive editing time was my lack of real-life b-roll footage. I should have filmed a piece-to-camera so that I wouldn’t have had to spend so long searching for video clips online.

If I could extend my video explainer I would add two sections: one on where Showrunner gets its data, and another on what happens when Showrunner glitches. In my interview, I asked Dr Daniel Binns: “Where do AI platforms like Showrunner get their data, and should people be able to opt out of having their data used to train AI models?” His response was insightful: he doubted that such platforms are using unlicensed data, and said, among further elaboration, “I absolutely think that everyday people should be able to opt out of having their information used to train models”. I was disappointed when I had to cut this section, as I feel it would have been thought-provoking for the audience, raising an important ethical question.

Another section I wish I could add back in is when I asked Dr Binns what Showrunner does when it glitches or makes an error. I asked him this question because of a paper he sent me prior to our interview, ‘The Allure of Artificial Worlds’, which explains how simulations like Showrunner “are built to absorb these glitches … these aberrant behaviours may become character or story arcs in and of themselves” (Binns 2024:3). Dr Binns expanded on this in the interview, and I wanted to keep it in the video because it is something I never would have considered, and I felt the audience would share my fascination with the concept.

Before this studio, I was already aware of targeted advertising. I had heard stories about personal devices hearing people’s conversations and later giving them ads based on what they said, and other spooky accounts like that. However, I had no idea how extreme it gets until I learned about dark advertising in Decoding AI: Automating Societies. The idea of ads appearing only for specific targets, just to vanish moments later without leaving any record (Brownbill et al. 2022), is a fascinating concept that I will never forget. It made me wonder how many dark ads I’ve seen and thought nothing of, or how many have directly influenced me into buying a product. I also learned about the concerning lack of transparency measures among all of the major social media platforms. An unforgettable aspect of this that I will take into my future thinking is this worrying example provided by Brownbill et al. (2022): “we can’t be sure companies selling harmful and addictive products aren’t targeting children or people recovering from addiction”.

Through this studio I also gained a useful understanding of how algorithms come to target various identity categories despite not being explicitly programmed to do so. I was very surprised to learn about the gender bias involved in Facebook’s algorithms, which, according to Trott et al. (2021:764), citing MIT Technology Review Insights 2013, “may reinforce stereotyping by introducing gender bias into the distribution of online ads”.

Working collaboratively in this studio has been a very positive experience, not only during our Assignment #2 group project, but also in the way that everyone helped each other and shared ideas throughout the semester. My biggest takeaway from Decoding AI: Automating Societies in terms of collaboration is the importance of sharing contact details as early as possible at the start of a group assignment. My Assignment #2 group had an issue where we had no contact with one of our members for a week or two, as we did not think it was urgent to collect details. The member then missed the next few classes, and we could not delegate tasks without talking to her. It all worked out, and I was very happy with the short video explainer we made together, but I learned to collect contact details as soon as I get the chance in future projects.

References:

Binns D (2024) ‘The Allure of Artificial Worlds’, M/C Journal.

Brownbill A, Dobson AS, Robards B, Angus D, Hawker K, Hayden L, Carah N and Tan XY (2022) How dark is ‘dark advertising’? We audited Facebook, Google and other platforms to find out, The Conversation, https://theconversation.com/how-dark-is-dark-advertising-we-audited-facebook-google-and-other-platforms-to-find-out-189310.

Trott V, Li N, Fordyce R and Andrejevic M (2021) ‘Shedding light on “dark” ads’, Continuum, 35(5):761–774, doi:https://doi.org/10.1080/10304312.2021.1983258.
