A5 pt2: Studio Review

Our documentary, The Righteous Gene, follows Dr Meg Elkins as she explores the effects of disinformation on the psyche, alongside a practical demonstration of these effects through an experiment on five randomly selected individuals. My hope is that by placing her explanation side by side with real human beings, our film engages the audience through identification and self-evaluation. The purpose of our documentary is first and foremost to inform, but we also wanted it to provoke viewers to reflect on their own biases, sources, and literacy online. I believe our film achieves this effect.

I watched it at the opening night of the And Scene festival with my best friend. Not knowing it was my group’s film, she turned to me and said, “I find this so interesting and scary; that we always think we’re right”. Clearly the film had provoked her to assess her own positioning and experiences, exactly as we wanted it to. By using real people to express emotion, having an academic break down the concepts of disinformation and how it polarises people, and keeping the style simple, I believe our piece effectively communicates what disinformation is, how it occurs, and how we, as humans, relate to it.

If adapted, I believe this work would serve as an excellent foundation for a short film guide on “how to navigate disinformation online.” Dr. Elkins’ core message is to provide viewers with the tools to protect themselves online; however, the documentary format primarily raises questions for viewers to contemplate, rather than directly conveying this message.

As a short guide, the film could adopt a more structured approach, covering sections such as: what disinformation is, how to analyse sources, identifying clues of disinformation, using credible platforms, and having important conversations. A user-friendly guide presented by a single professional would help make this complex and evolving topic more understandable. This straightforward approach would aim to foster media literacy by explaining key concepts and providing a clear action plan for users.

Our studio, Truth Be Told, delved into the rise of disinformation and artificial intelligence and their impacts on society as a whole. A big question for us as students was, and is, ‘where is this all heading?’ This helped us to navigate the studio question of “How might documentary be a useful tool to explore (and explain) disinformation?”

Fact and Fabrication, by Silas Sermersheim, Tristan Buesst, and Nhat Nguyen, took the approach of exploring the effect disinformation has on political spheres within Australia, and more specifically how disinformation can distort social cohesion and trust in government. Their film uses Professor Sushi Das to explain disinformation and its impacts, with specific reference to The Voice referendum. Using a real-world example, one that is close to home for Australians, is an effective way of exploring disinformation in a way that feels accessible and familiar to viewers. Fact and Fabrication addressed the content of the Truth Be Told studio by defining what disinformation is, exploring the ways in which it can negatively affect our society, and posing solutions for the future.

How to Wreck an Ice Peach, by Anika Luna, Dionne Yiangoulli, Putt-Putt Quanpadung, and Luca Corrado, differed from this approach. Following Professor Mark Sanderson, their film set out to explore the ways in which AI is used online and to explain how it functions in these online spaces. The film is very successful in defining the spaces that AI has permeated, some of which online users might not be aware of, and in exploring how humans and AI interact to co-create content. In contrast to Fact and Fabrication, which explores the effects on collectives, their film successfully poses questions about the effects on, and role of, the individual. How to Wreck an Ice Peach identified the rising role of AI in our society and outlined the ways in which it might shape our future, prompting the viewer to examine their own relationship with AI.

Visual Blueprint is a studio that immediately stood out to me. After watching a handful of films from the studio, I understood that its key focus was on visual style and tone. The films from this studio were very artistic, and despite a wide range of content, they maintained a common through line: the attention paid to stylistic detail. Although each film had a story driving it forward, the narrative acted as a framework for visual pleasure to take place.

A prime example of this was the film Decadence, by Spring Li, Taylor Zenelovski, Andrew Tan, Nadia Harari, and Josephine Gaal. The film follows an Artist and her Muse, and the unravelling of their relationship. Decadence exhibited the studio’s emphasis on visual style: while there was a narrative to establish context, the film predominantly communicated themes and emotions through visual elements rather than dialogue.

The rich, dramatic visuals, along with a compelling soundscape, effectively convey the mood and tone of the Artist. As an audience, we are aware of the Artist’s feelings and intentions without extensive dialogue or explanation; we are informed solely through style. This makes for a more interpretive viewing experience, engaging the viewer on a deeper, more emotional level. Decadence is a successful display of the Visual Blueprint studio’s emphasis on visual style, creating a work that is not only aesthetically pleasing but also emotionally resonant.

A4: Final Artefact

 

Dr Meg Elkins is alarmed by the toll the internet is taking on all of our minds. When we see disinformation online, we tend to believe it for a variety of psychological reasons. As one of the pioneering minds in the fields of disinformation and psychology, Meg empowers ordinary citizens with the tools to approach information critically. THE RIGHTEOUS GENE is a five-minute documentary that illustrates the intricate workings of the human mind when confronted with disinformation and underscores the need for caution when navigating the internet.

🎀

FEATURING 

Dr Meg Elkins  

 

Amay Iyer 

Bradley G Graham  

Chi Hoang  

Mitchell Boessen  

Zachariah Clarkson  

   

DIRECTOR 

Abigail Smith 

 

EDITOR 

Danielle Atherton 

Azra Omar   

 

HEAD CAMERA OPERATOR 

Phoebe Hewertson  

 

CINEMATOGRAPHER 

Danielle Atherton  

 

SOUND RECORDIST & GAFFER 

Azra Omar  

 

SOUND MIX 

Phoebe Hewertson 

 

COLOUR GRADE 

Abigail Smith    

 

Animation created with Vyond  

https://www.vyond.com/ 

 

Original music composed by  

Matthew Begbie   

 

With thanks to  

The RMIT Tech Department & Oliver Pateras   

 

Studio Instructor 

Rohan Spong   

 

Created as part of  

TRUTH BE TOLD 

School of Media and Communication 

RMIT University 

2024 

#4: Consideration of the Final Artefact

Overall, I think our piece was really polished for the amount of time we made it in, and I feel that the effort we put into fixing things, even when it required reshooting, re-editing, and finding new audio, paid off. The core things I would want to change are how we set up the experiment (the black and white scenes) within the story, and the animations throughout the film. If we were to adapt this film into a different kind of work, I would like the piece to be a longer film structured as a “how to” guide on navigating disinformation online. 

The animations were a necessary element in our piece, adding texture and linking Dr. Elkins’ remarks with visual representations. Vyond, a video animation platform, allowed us to do this in the time required of us; however, if we planned to enter the work in a festival or develop it further, I would want to recreate these segments ourselves. With the affordance of extra time, we could have shot the scenes ourselves during subsequent shoots, or customised the animations more towards the film’s aesthetic. Additionally, due to cost we opted for the lower-quality animations, but if we were entering the work in festivals we would have paid the excess. 

One of the biggest issues we had was a lack of introductory and varied footage from the experiment interviews. This meant that we had to go back to the studio multiple times to try to emulate the setting of the initial shoot. With extra time, it would have been beneficial to do a second shoot with the participants to capture more establishing shots, rather than editing together separate shots to create a scene. It would also have allowed us to slow the pace down through longer shots and create a more complete introduction. Additionally, we would not have had to explain the experiment through text on the screen; instead we could have used a real shot where Abby explains it to the participants and we see their reactions in real time. As well as looking better, I think this alternative would have conveyed the message more seamlessly and professionally. 

In adapting the work, I think this piece would be a good base for making a “how to navigate disinformation online” short film guide. Dr. Elkins’ through-line message was to equip viewers with tools to protect themselves online, but I don’t think the documentary format communicates that explicitly; instead it poses questions to the viewer to inform their own reflections and actions. As a short guide, the film could be more formulaic in its sections: what disinformation is, how to analyse sources, what clues to look out for, credible platforms to use, conversations to have, and so on. I think it would be valuable to have a user-friendly guide communicated through one professional, to make this new and rapidly evolving topic understandable. This is a simple approach to helping users online: explaining key concepts and thus establishing media literacy, and setting out a clear plan of action.

#3: The Emergence of AI and Disinformation

 

The TRUTH BE TOLD course has opened my eyes to the emergence of artificial intelligence and the prevalence of disinformation, something I was blissfully ignoring before. I chose this class because I knew that, even though the topic unsettles me, it is something we should all be aware of, especially as media makers creating media at the very time it is all developing. 

All eyes on (an AI-generated pic of) Rafah - triple j

Recently, AI art has emerged online in relation to the genocide in Palestine. Instagram users have been sharing to their stories an AI-generated image of a Palestinian mass displacement camp, in which nondescript bodies in bags spell out “All Eyes on Rafah”. Although it is important for people to be talking about Palestine and spreading awareness, I think there are problems with attaching that support to an AI image. So many videos, photos, and stories shared by Palestinians and journalists show the real horror of what is happening in Palestine, and yet the image that has gained the most traction is one that depicts none of it: an image that shows no humanity, nor reality, one that is palatable to all. It is scary that people would rather sanitise this situation into an AI image while Palestinians have to see the real, traumatising scenes all day, every day. Hence my fear with AI is that it will remove humanity from news sectors and online spaces, which would affect people’s opinions and attitudes towards things that do not immediately affect them, thus negatively impacting our society as a whole. This emergence of AI into political spaces would also affect the credibility of news sources, which could result in a lack of public trust (something I feel is already declining). 

I predict that the effects of AI and disinformation will only get worse before they get better, as AI is still rapidly evolving and there is no end to its progression in sight. I predict that the worst of the negative effects will be in relation to politics, public matters, pop culture, and scamming. These predictions are based on disinformation and AI content that has already been published and had real-world effects. Think: The Voice referendum, Trump’s claims and their effects on American democratic processes, COVID, the genocide in Palestine, and even novel examples like the Kate Middleton ‘disappearance’. 

I predict that the prevalence of AI online will result in more online regulation, driven by public disapproval. It is so easy to get lost and confused online, and I think these effects on public trust will become apparent over the next few years, especially as AI video and audio generators become more user-friendly, advanced, and readily available. 

In her lecture, Sushi Das shared that one of the issues with fact-checking online is the sheer scope of information and posts being published. I predict that, through a desire to use AI for good, there will be more human-AI collaboration. My hope is that AI could be used as a tool to mass fact-check online platforms and alert users to posts that do, or could, contain misinformation, disinformation, or AI-generated content. Using AI to negate AI: that’s the dream. 
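To make that hope a little more concrete, here is a minimal sketch of what such a human-AI triage pipeline might look like. Everything in it is a hypothetical stand-in: the phrase list, the scoring function, and the threshold are placeholders for a real trained classifier. The only point is the shape of the collaboration, where the machine filters posts at scale and a human fact-checker makes the final call.

```python
# Toy sketch of an AI-assisted fact-checking triage pipeline.
# Purely illustrative: the phrase list, scoring, and threshold are
# placeholder assumptions standing in for a real trained model.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


# Placeholder signal phrases; a real system would use a trained classifier,
# source reputation, and image/video provenance checks instead.
SUSPECT_PHRASES = [
    "they don't want you to know",
    "share before it's deleted",
    "mainstream media won't report",
]


def misinformation_score(post: Post) -> float:
    """Return a rough 0-1 score for how much a post warrants review."""
    text = post.text.lower()
    hits = sum(phrase in text for phrase in SUSPECT_PHRASES)
    return min(1.0, hits / 2)  # crude normalisation, for the sketch only


def flag_for_review(feed: list[Post], threshold: float = 0.5) -> list[Post]:
    """Filter a feed down to posts a human fact-checker should look at."""
    return [p for p in feed if misinformation_score(p) >= threshold]


if __name__ == "__main__":
    feed = [
        Post("user_a", "Lovely sunset at the beach today."),
        Post("user_b", "SHARE BEFORE IT'S DELETED: they don't want you to know this!"),
    ]
    for post in flag_for_review(feed):
        print(f"Flagged for human review: @{post.author}: {post.text}")
```

The design choice that matters here is the division of labour: the automated pass only narrows the haystack, and nothing is labelled or removed without a human decision, which matches the collaboration Sushi Das's point about scale seems to call for.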

I think there is real importance in Dr. Elkins’ simple message to ask ‘why’ and to be open to different points of view through having difficult conversations (personal communication, May 8 2024). My hope is that more conversations surrounding the effects of AI and disinformation will result in public resistance to both, leading towards a social shift. I hope that this shift would involve enhanced media literacy, more regulation of social media platforms, distinct labelling of AI contributions and creations, and a movement towards humanism within online spheres. 

Sources

@Shahv4012 (2024) All Eyes On Rafah [AI-generated image], Instagram, accessed 30 May 2024.

#2: Reflection on Collaboration

I played an important role in my group by doing the sound design, operating camera, organising the original music composed for our film, consistently showing up and putting in extra hours, and being amenable throughout the whole process. This assignment gave me the opportunity to take on more of a collaborative rather than leadership role, as there were people with stronger visions than mine regarding the style and story of the film, as well as more experience. That’s not to say that I did not have input; rather, I feel I contributed by showing different perspectives and possible avenues to take, which meant our group was more confident in the path we did follow. 

I’ve always found the creation of a final film the hardest aspect of studios, usually because of the challenges of creative differences (creating media is generally artistically driven), clashing schedules, differing strengths and experience in media making, and technical limitations; it is difficult to share and co-create in software like Premiere Pro. Throughout this project, I found that I didn’t have the same level of experience with Premiere Pro or understanding of all the editing processes within it. This is obviously a gap in my media-making skills, and I will need to work harder to teach myself Premiere Pro conventions outside of class times, or team up with people with similar levels of experience, so that my team members don’t have to teach me. It was very valuable having someone like Dani in our group, as she really knew all the technical aspects and maintained a vision throughout the whole making process. I want to be more proactive in my contributions by having a better understanding of the software and the role I will be fulfilling, and by investing in tools that make content sharing easier. 

In the future, I want to have more of a role in the editing, both to learn better habits with big edits and because I think group work flows better when everyone is together putting in the same amount of time, rather than taking separate parts (which vary in size) home to work on. I feel this was also a result of a lack of defined roles when starting our project; we rarely stuck to set roles, and this made completing the assignments harder due to a lack of clarity and accountability. Therefore, next assignment I want to allocate more time to working within the editing suites as a group, which would also mean that one person isn’t having to navigate the whole piece off the bat (thank you, Dani). To aid this, I think it would be good to have a more defined conversation before starting a group project about what set times people can spend editing together each week outside of class. 

#1: Reflecting on Documentary as a Consciousness-raising Tool

Documentaries are valuable modes for exploring, explaining, and combating disinformation because they present factual information in an easily consumable format. The documentary format directly relates to the films made in this course in that the necessity of truth, and of exposing the truth, is central both to their content and to the existence of documentary itself. 

In the first reading for this course, Lee McIntyre (2018) outlined the importance of viewers interrogating the information they consume, and of looking at facts within their context, not simply as they are presented. Through this piece, McIntyre (2018) deduced that the root of the problem is that facts are hard to distinguish in the modern, globalised world. His writing sought to bring attention to the necessity of truth and the ways in which truth can be contorted. Our interview with Dr. Meg Elkins turned out to be a direct response to this observation; without knowing it, McIntyre’s writing had paved the way for our documentary to take place. 

We wanted our documentary to be an exploration of the effects of disinformation on the psyche, and from this initial prompt our documentary dove into how disinformation confirms biases. Dr. Elkins defined the origins of disinformation and sought to explain the ways in which we, as consumers of information, are susceptible to it. Through this exploration and explanation, Dr. Elkins humanised a problem that feels distant and scary to many. She removed the barrier of the screen by explaining the social contexts that influence the ways in which we respond to information, cementing McIntyre’s (2018) point that facts are more emotional than we imagine. I think having an academic discuss an unfamiliar topic, through clear use of research and relation to human experience, achieves the goal of explaining disinformation and goes further to make the content less confronting. This last point is important because a lack of information feeds the fear surrounding disinformation, which in turn drives its dissemination. As Dr. Elkins remarked, “Anger and fear are much bigger motivators to make us click, share, read, [and] like” (personal communication, May 8 2024). 

She goes on to say that “asking ‘why’, and going deeply into the ‘why’” is how people can distinguish truth from lies (Elkins, personal communication, May 8 2024). This advice is helpful for the viewer: by posing direct questions, she places the onus on the individual watching the documentary to assess their own circumstances. In such a digital world, I think everyone would benefit from watching our film, but perhaps especially younger people who are chronically online and susceptible to disinformation.

Through education we are taught to assess our sources and question the information we consume, but not everyone is afforded the privilege of education, or has yet to complete it. That is to say, anyone online who consumes information, or anyone who is marginalised from or inexperienced with technology, would benefit from understanding the causes and effects of disinformation. Additionally, students and educators within the communications realm can benefit from this film by learning to analyse the content they publish for misinformation, disinformation, and bias. As up-and-coming content creators and educators, we have to be aware of the current problems within our sphere and learn ways to avoid perpetuating harm. 

 

Sources

McIntyre L (2018) Post-Truth, MIT Press, Cambridge. 

#3 ASSIGNMENT: To Do List 

  • Change everything that says Righteousness to Righteous  
  • At 15 seconds, remove the shot (Abi’s shot) 
  • Add typing sounds over title sequence 
  • Motion stabilize the opening 
  • When title fades out, change music to atmosphere 
  • Change the cut to RMIT (too abrupt)  
  • Establishing shot of Bowen Street 

 

For the setup of the experiment:  

  • Table, Abi, explained 
  • Super cut of their reactions 

 

  • Download final animations in high quality (done) 
  • Clean transitions – L and J cuts  
  • Add music and sound effects to empty spaces. 
  • Clean up vocal transitions of Meg’s interviews.  
  • Cut number of animations during dopamine section (done) 
  • Cut text on Jonathan Haidt clip and Press Secretary scene (done) 
  • Shoot ‘fight’ scene – to add at the 5:30 mark as linking footage  
  • Shoot interview intro shot of two pieces of information: typewriter text over top explaining experiment 
  • Shoot RMIT hallway – intro to Meg office 
  • Shoot scrolling phone scenes to reduce reuse of shots.  
  • Delete participants’ introductions and change to silent shots of their faces (as an introduction)  
  • Add title for Meg upon introduction 
  • Add above shots into their blank spaces.  
  • Clean all black and white filters.  
  • Colour correction.  
  • Fix up sound design.  
  • Finish on Chi, lights turn off and then cut to Meg 
  • Add in credits and attributions. 

Benefits of AI? Ramble

Could AI be harnessed for good? Could it even be used in your major assignment? Are you aware of any ethical issues with using generative AI? Can any of these issues be mitigated in some way? 

I take a bit of a pessimistic approach to AI, maybe out of fear, but mostly out of not understanding what its benefits could be. I fear that it will make people lazier and cull people’s much-needed jobs. I think it’s a shame for a small number of people to profit massively off a piece of technology that will cost thousands (if not more?) of people their jobs. I suppose that is technological progression…

To the point:

On an individual level, I think AI could be used for good to improve time efficiency, by eliminating trivial and tedious tasks for the user and thus allowing them more time to focus on the bigger job at hand. However, I do think it is important for the user to be capable of doing these tasks on their own before palming them off to AI. I think the key to integrity is that the user should understand what the AI is doing, and should only use it as a time-saving tool. This obviously requires a high degree of self-control and accountability, especially when there are (currently) few checks in place to distinguish AI work from that of humans. What will all this new time be used for, though? And will the task distributor be aware of this use of AI?

It is hard to say whether AI should be used in our major assessment. Going by my prior philosophy, I think we, as a group and individually, should be able to complete all of the necessary components ourselves rather than using AI. For example, it would be easy to use AI to formulate an agreement contract, propose a project timeline, or do research for us, but shouldn’t we be able to do these things ourselves? Why pay thousands of dollars for a degree if not to learn?