#3: The Emergence of AI and Disinformation

 

The TRUTH BE TOLD course has opened my eyes to the emergence of Artificial Intelligence and the prevalence of disinformation, something I was blissfully ignoring before. I chose this class because I knew that, even though the topic unsettles me, it is something we should all be aware of, especially as media makers creating media at the very time these technologies are developing. 

All eyes on (an AI-generated pic of) Rafah - triple j

Recently, AI art has emerged online in relation to the genocide in Palestine. Instagram users have been sharing to their stories an AI-generated image of a Palestinian mass displacement camp, in which nondescript bodies in bags spell out “All Eyes on Rafah”. Although it is important for people to be talking about Palestine and spreading awareness, I think there are problems with linking support to an AI image. So many videos, photos and stories shared by Palestinians and journalists show the real horror of what is happening in Palestine, and yet the image that has gained the most traction is one that does not depict any of it: an image that shows no humanity and no reality, one that is palatable to all. It is scary to know that people would rather sterilise this situation into an AI image when Palestinians have to see the real, traumatising scenes all day, every day. Hence my fear with AI is that it will remove humanity from news sectors and online spaces, which would shape people’s opinions of and attitudes towards things that do not immediately affect them, negatively impacting our society as a whole. This emergence of AI into political spaces would also affect the credibility of news sources, which could result in a loss of public trust (something I feel is already declining). 

I predict that the effects of AI and disinformation will only get worse before they get better, as AI is still rapidly evolving and there is no sign of that progression slowing. I predict that the worst of the negative effects will relate to politics, public matters, pop culture and scamming. These predictions are grounded in disinformation and AI content that has already been published and had real-world effects. Think: the Voice Referendum, Trump’s claims and their effects on American democratic processes, COVID, the genocide in Palestine, and even novel examples like the Kate Middleton ‘disappearance’. 

I predict that the prevalence of AI online will eventually result in more online regulation, driven by public disapproval. It is so easy to get lost and confused online, and I think these effects on public trust will become apparent over the next few years, especially as AI video and audio generators become more user-friendly, advanced and readily available. 

In her lecture, Sushi Das shared that one of the issues with fact-checking online is the sheer scope of information and posts being published. I predict that, through a desire to use AI for good, there will be more human and AI collaboration. My hope is that AI could be used as a tool to mass fact-check online platforms and alert users to posts that do, or could, contain misinformation, disinformation or AI-generated content. Using AI to negate AI: that’s the dream. 
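Purely as a thought experiment, here is a minimal sketch of what that kind of flagging tool could look like, assuming access to OpenAI's chat API; the model name, prompt wording and threshold are illustrative assumptions of mine, not anything Sushi Das or the course described.

```python
# Hypothetical sketch: ask a language model to rate how likely a post is to
# contain mis/disinformation or AI-generated content, and flag it for human
# review above a threshold. Assumes the openai package (>=1.0) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def flag_post(post_text: str, threshold: float = 0.7) -> bool:
    """Return True if the post should be surfaced to a human fact-checker."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Rate from 0 to 1 how likely this social media post is "
                        "to contain misinformation or AI-generated content. "
                        "Reply with the number only."},
            {"role": "user", "content": post_text},
        ],
    )
    try:
        score = float(response.choices[0].message.content.strip())
    except (TypeError, ValueError):
        return True  # unparseable reply: err on the side of human review
    return score >= threshold

if __name__ == "__main__":
    print(flag_post("The moon landing was staged by Hollywood in 1969."))
```

The point of the sketch is the division of labour rather than the specific model: the AI does a cheap first pass at scale, and humans make the final call on anything it flags.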

I think there is real importance in Dr. Elkins’ simple message to ask ‘why’ and to be open to different points of view by having difficult conversations (personal communication, May 8 2024). My hope is that more conversations about the effects of AI and disinformation will result in public resistance to both, leading towards a social shift. I hope that this shift would involve enhanced media literacy, more regulation of social media platforms, distinct labelling of AI contributions and creations, and a movement towards humanism within online spheres. 

Sources

@Shahv4012 (2024) All Eyes On Rafah [AI-generated image], Instagram website, accessed 30 May 2024.

#2: Reflection on Collaboration

I played an important role in my group by doing the sound design, operating the camera, arranging for the music to be composed for our film, consistently showing up and putting in extra hours, and being amenable throughout the whole process. This assignment gave me the opportunity to take on a collaborative rather than a leadership role, as there were people with stronger visions than mine regarding the style and story of the film, as well as more experience. That’s not to say I did not have input; rather, I feel I contributed by offering different perspectives and possible avenues to take, which meant our group was more confident in the path we did follow. 

I’ve always found the creation of a final film the hardest aspect of studios, usually because of the challenges of creative differences (creating media is generally artistically driven), clashing schedules, differing strengths and experience in media making, and technical limitations; it is difficult to share and co-create on software like Premiere Pro. Throughout this project I found that I didn’t have the same level of experience with Premiere Pro or the same understanding of all the editing processes within it. This is obviously a gap in my media-making skills, and I will need to work harder to teach myself Premiere Pro conventions outside of class time, or team up with people with similar levels of experience so that my team members don’t have to teach me. It was very valuable having someone like Dani in our group, as she really knew all the technical aspects and maintained a vision throughout the whole making process. I want to be more proactive in my contributions by having a better understanding of the software and the role I will be fulfilling, and by investing in tools that make content sharing easier. 

In the future, I want to have more of a role in the editing, both to learn better habits with big edits and because I think group work flows better when everyone is together putting in the same amount of time, rather than taking separate parts home to work on (which vary in size). I feel this was also a result of a lack of defined roles when starting our project; we rarely stuck to set roles, and I feel this made completing the assignments harder due to a lack of clarity and accountability. Therefore, next assignment I want to allocate more time to working within the editing suites as a group, which would also mean that one person isn’t having to navigate the whole piece off the bat (thank you, Dani). To aid this, I think it would be good to have a more defined conversation before starting a group project about what set times people can spend together each week editing outside of class. 

#1: Reflecting on Documentary as a Consciousness-raising Tool

Documentaries are valuable modes for exploring, explaining and combating disinformation because they present factual information in an easily consumable format. The documentary format relates directly to the films made in this course: the necessity of truth, and of exposing the truth, is central to their content, just as it is to the existence of documentaries. 

In the first reading for this course, Lee McIntyre (2018) outlined the importance of viewers interrogating the information they consume and looking at facts within their context, not simply as they are presented. Through this piece, McIntyre (2018) deduced that the root of the problem is that facts are hard to distinguish in the modern, globalised world. His writing sought to bring attention to the necessity of truth and the ways in which truth can be contorted. Our interview with Dr. Meg Elkins turned out to be a direct response to this observation. Without knowing it, McIntyre’s writing had paved the way for our documentary to take place. 

We wanted our documentary to be an exploration of the effects of disinformation on the psyche, and through this initial prompt our documentary dove into how disinformation confirms biases. Dr. Elkins defined the origins of disinformation and sought to explain the ways in which we, as consumers of information, are susceptible to it. Through this exploration and explanation, Dr. Elkins humanised a problem that feels so distant and scary to many. She removed the barrier of the screen by explaining the social contexts that influence the ways we respond to information, cementing McIntyre’s (2018) point that facts are more emotional than we imagine. I think having a professor discuss an unfamiliar topic, with clear reference to research and connection to human experience, achieves the goal of explaining disinformation and goes further by making the content less confronting. This last point is important because a lack of information feeds the fear surrounding disinformation, and directly relates to its dissemination. As Dr. Elkins remarked, “Anger and fear are much bigger motivators to make us click, share, read, [and] like” (personal communication, May 8 2024). 

She goes on to say that “asking ‘why’, and going deeply into the ‘why’” is how people can distinguish truth from lies (Elkins, personal communication, May 8 2024). This advice is helpful for the viewer: by posing direct questions, she places the onus on the individual watching the documentary to assess their own circumstances. In such a digital world, I think everyone would benefit from watching our film, but perhaps especially younger people who are chronically online and susceptible to disinformation.

Through education we are taught to assess our sources and question the information we consume, but not everyone is afforded the privilege of education, or they may not yet have gone through it. That is to say, anyone online who consumes information, or anyone who is marginalised from or inexperienced with technology, would benefit from understanding the causes and effects of disinformation. Additionally, students and educators within the Communications realm can benefit from this film by learning to analyse the content they publish for misinformation, disinformation and bias. As up-and-coming content creators and educators, we have to be aware of the current problems within our sphere and learn ways to avoid perpetuating harm. 

 

Sources

McIntyre L (2018) Post-Truth, MIT Press, Cambridge. 

#3 ASSIGNMENT: To Do List 

  • Change everything that says Righteousness to Righteous  
  • At 15 seconds, remove the shot (Abi’s shot) 
  • Add typing sounds over title sequence 
  • Motion stabilize the opening 
  • When title fades out, change music to atmosphere 
  • Change the cut to RMIT (too abrupt)  
  • Establishing shot of Bowen Street 

 

For the setup of the experiment:  

  • Table, Abi, explained,  
  • Super cut of their reactions 

 

  • Download final animations in high quality (done) 
  • Clean transitions – L and J cuts  
  • Add music and sound effects to empty spaces. 
  • Clean up vocal transitions of Meg's interviews.  
  • Cut number of animations during dopamine section (done) 
  • Cut text on Jonathan Haidt clip and Press Secretary scene (done) 
  • Shoot ‘fight’ scene – to add at the 5:30 mark as linking footage  
  • Shoot interview intro shot of two pieces of information: typewriter text over top explaining experiment 
  • Shoot RMIT hallway – intro to Meg's office 
  • Shoot scrolling phone scenes to reduce reuse of shots.  
  • Delete participants' introductions and change to silent shots of their faces (as an introduction)  
  • Add title for Meg upon introduction 
  • Add above shots into their blank spaces.  
  • Clean all black and white filters.  
  • Colour correction.  
  • Fix up Sound design.  
  • Finish on Chi, lights turn off and then cut to Meg 
  • Add in credits and attributions. 

Benefits of AI? Ramble

Could AI be harnessed for good? Could it even be used in your major assignment? Are you aware of any ethical issues with using generative AI? Can any of these issues be mitigated in some way? 

I take a bit of a pessimistic approach to AI, maybe out of fear, but mostly out of not understanding what its benefits could be. I fear that it will make people lazier and cull people's much-needed jobs. I think it's a shame for a small few to profit massively off a piece of technology that will cost thousands (if not more?) of people their jobs. I suppose that is technological progression…

To the point:

On an individual level, I think AI could be used for good to improve time efficiency, eliminating trivial and tedious tasks for the user and allowing them more time to focus on the bigger job at hand. However, I do think it is important for the user to be capable of doing these tasks on their own before palming them off to AI. I think the key to integrity is that the user should understand what is being done by the AI, and should only use it as a tool for saving time. This obviously involves a high degree of self-control and accountability, especially when there are (currently) few checks in place to distinguish AI work from that of humans. What will all this new time be used for, though? And will the task distributor be aware of this use of AI?

It is hard to say whether AI should be used in our major assessment. Going by my prior philosophy, I think we, as a group and individually, should be able to complete all of the necessary components ourselves rather than using AI. For example, it would be easy to use AI to formulate a group agreement, propose a project timeline or do our research for us, but shouldn't we be able to do these things ourselves? Why pay thousands of dollars for a degree if not to learn?

A1 Post 3: Reflection on Deadpool Kate Middleton

For my A1 Post 2, I created four images of Kate Middleton: one of her with actor Hugh Jackman, and three depicting her as the Marvel character Deadpool. I used the website HotPot to generate these AI images, which uses OpenAI's technology. The HotPot AI image generator is free; I was able to choose the image style, put in a small prompt, and the images popped up within seconds. I had to tweak the prompts a few times to get the image I was after, as sometimes the AI ignored my request altogether or only produced an image for half of the prompt, but for the most part it was adequate. 
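For context, here is a minimal sketch of what the same kind of prompt-based generation looks like when done directly against OpenAI's image API rather than through HotPot's website; I only used HotPot's web interface, so this code is an illustrative assumption, not something I actually ran for this assignment.

```python
# Hypothetical sketch of prompt-based image generation with OpenAI's DALL-E 2.
# Assumes the openai package (>=1.0) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-2",
    prompt="Kate Middleton in a Deadpool outfit on a film set",  # the fig. 2 prompt
    n=1,               # number of images to generate
    size="512x512",    # DALL-E 2 supports 256x256, 512x512 and 1024x1024
)

# Each result carries a temporary URL to the generated image.
print(result.data[0].url)
```

The tweak-and-retry loop I describe above just amounts to editing the prompt string and re-running until the output matches the idea.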

Given the nature of this task, I thought these images would be sufficient and that the generator served its purpose, though I do not think these photos would disinform most people if presented without a disclaimer. The images are simply not visually coherent, nor thematically conceivable, and would not convince anyone who is not visually impaired, or anyone who knows anything about the Royal Family and Marvel Studios. The images are highly satirical, and I feel that is obvious; a disclaimer should not have to confirm it for most people. I do think it's plausible that some people could believe these images if they had poor vision, a disinterest in current affairs and/or poor online literacy, and there are definitely people who fall into these categories online because of the internet's accessibility; most people are online.  

AI generators are so obtainable and user-friendly, and I think these factors make it impossible to ensure people won't use this software to incite harm or spread disinformation, especially about public figures and the ongoing wars and invasions happening globally. In such a polarised era, it seems apparent that people would use, and are using, AI generators to fabricate evidence that backs up their opinion or agenda. We have seen this in the wake of the invasion of Palestine and the war in Ukraine: people (hackers, governments, businesses) falsifying existing images for propaganda or public disillusionment (see Inside the Kremlin's Year of Ukraine Propaganda). Deepfakes have been used to depict country leaders stating things contrary to their genuine views, in ways that aid their opposition, showing that misuse of AI-generated content is capable of inflicting great harm (Bergengruen 2023). 

In creating any fictional art of others, I feel there can be ethical concerns. In the case of AI-generated artefacts, though, I think the content is more realistic and thus holds more weight (potential impact). A concern therefore would be the real-world effects of sharing the artefact on the people or objects depicted in the work. The content could even amount to invading the person's privacy, or defamation, if they were to take offence. In that sense, another concern would be properly attributing the use of AI in the creation of the image and making it clear the artefact is a work of fiction; this can be done through a disclaimer and clear referencing of the software and author used. 

References

Bergengruen V (2023) Inside the Kremlin’s Year of Ukraine Propaganda, Time website, accessed 20 March 2024. https://time.com/6257372/russia-ukraine-war-disinformation/

A1 Post 2: Disinformation Artefact: Deadpool Kate Middleton

Kate Middleton Spotted in London with Australian Actor Hugh Jackman

15 March 2024

After weeks of speculation surrounding Kate Middleton's health and whereabouts, she was spotted with husband William heading to Westminster Abbey last week. Since the image of the couple surfaced, things have taken a bizarre turn that nobody expected.

fig. 1

The paparazzi who have long been scouring to snap a shot of Kate have graced us with an image of Hugh Jackman and Kate Middleton leaving a bar in London's city centre just hours ago. The photo has sparked more controversy, confirming for some that her disappearance was a result of infidelity, though no one thought it would be her doing the cheating.

Kate Middleton the New Deadpool?

22 March 2024

fig. 2

It has now been a week since Kate Middleton was spotted with Hugh Jackman, and neither of them, nor the Royal Family, have spoken out about just what in the world is going on, yet the pieces are slowly coming together. Just this morning, an anonymous member of the crew of the new Marvel film, Deadpool & Wolverine (set to release in July 2024), posted a photo that has gone viral. The photo appears to depict a woman in the Deadpool costume, which has caused public confusion, as Ryan Reynolds is the face behind the infamous character. Conspiracies are growing, and people have begun to spread the belief that this masked woman is in fact Kate Middleton herself. We here at SpillingTheTea News don't find it too far-fetched a theory, as no other explanations have been provided and none other than Hugh Jackman himself plays Wolverine. Their rendezvous makes sense now, doesn't it?

What is Kate Up To?

23 March 2024

fig. 3

The anonymous poster has struck again, this time confirming what we all suspected: Kate Middleton was the woman in the Deadpool outfit. The new image shows Kate Middleton dressed as Deadpool, this time without the mask, driving away from the set of Deadpool & Wolverine. So, what does this mean? And when will the Royal Palace speak out about it?

The Royal Palace Confirms: Kate Middleton is the New Deadpool

25 March 2024

fig. 4

In one Twitter post alone, the Royal Family has brought these weeks of speculation about Kate Middleton's whereabouts to an end. Here you have it, folks: it was all a publicity stunt! She is alive and well, and an actress now too, apparently? Not a good look for the integrity and legitimacy of the Royal Family, but nonetheless we're sure it'll drive the film's sales. We're not sure who designed the Deadpool-themed gown, but it's not a good look for them either.

That's all from us here at SpillingTheTea News, finishing this three-month-long segment special on Kate Middleton's whereabouts. All we know for sure is that we never want to hear that name again! 

References 

Stuart R and Rimmer M (2024) Kate Middleton apologises over royal photoshop fail, saying she ‘experimented with editing’, ABC News website, accessed 12 March 2024. https://www.abc.net.au/news/2024-03-11/princess-of-wales-sorry-over-royal-photoshop-fail/103575140 

Figure 1: Image generated using OpenAI’s DALL-E 2 from the prompt: Kate Middleton and Hugh Jackman caught by paparazzis.

Figure 2: Image generated using OpenAI’s DALL-E 2 from the prompt: Kate Middleton in a Deadpool outfit on a film set.

Figure 3: Image generated using OpenAI’s DALL-E 2 from the prompt: Kate Middleton in a fancy dress with a Deadpool outfit underneath.

Figure 4: Image generated using OpenAI’s DALL-E 2 from the prompt: Kate Middleton caught in Deadpool outfit hiding in car.

*This blog post is a work of fiction. It contains disinformation about Kate Middleton as Deadpool and was created with OpenAI's DALL-E 2 via HotPot AI. It is presented here as part of the coursework for TRUTH BE TOLD, a studio in which students consider disinformation. See: COMM2626, RMIT University, Melbourne.*

A1 Post 1: Reflection on the Reading

1.

McIntyre's chapter What Is Post-Truth? (2018) defines the ways in which truth can be subverted, the differences and magnitudes of these subversions, and the real-world impacts of these subversions, or post-truths. The reading examines the intent and beliefs of people who spread "truths" based on their own emotions, and the emotions of others, to bolster their arguments, using Donald Trump as the main case study. More generally, the reading pushes readers to consider the information they consume and to look at facts within their context, not as they are presented (out of context) (McIntyre 2018). McIntyre imparts a disconcerting realisation in an overwhelmingly individualised age: "the real problem here, I claim, is not merely the content of any particular (outrageous) belief, but the overarching idea that—depending on what one wants to be true—some facts matter more than others" (McIntyre 2018:10). The opening chapter sets up the groundwork for an argument that denounces the distortion of truth and seeks to bring meaning back to facts and science. McIntyre sets out to understand how and why this occurred and what can be done for a better future, one where people do not fall victim to manipulation and lies.

2.

An ongoing source of disinformation, predating Trump's inauguration, is the conspiracy-theory debate around the legitimacy of the 1969 moon landing. This theory holds that America staged the moon landing to ensure it won the Space Race and appeared superior to the Soviet Union during the Cold War. A study conducted in 2022 found that a quarter of surveyed Europeans believe the moon landing never happened, showing that disinformation has lingering effects (Science Business 2022). This example shows that disinformation can shape and solidify people's opinions and ideologies, especially affecting people's trust in government organisations. 

3.

Since Trump's election in 2016, disinformation has spread heavily through social media platforms, notably through Facebook and TikTok with its impressionable young audience (Scientific American 2022). Living through the COVID-19 pandemic was a shared experience, one where isolation sent people online to reconnect. The hysteria surrounding COVID-19, a result of its scale and the constantly shifting advice and information, meant that people were more susceptible to disinformation relating to COVID (Meppelink et al. 2022). Widespread disinformation meant that people believed the coronavirus was a planted bioweapon for population control, or that the vaccines had trackers in them, creating distrust in governments and scientists and causing people to disregard legal restrictions and health advice, which exacerbated the pandemic's impact (Meppelink et al. 2022).

 

References 

McIntyre L (2018) Post-Truth, MIT Press, Cambridge. 

Meppelink C, Bos L, Boukes M and Möller J (2022) 'A Health Crisis in the Age of Misinformation: How Social Media and Mass Media Influenced Misperceptions about COVID-19 and Compliance Behavior', Journal of Health Communication, 27(10):764-775, DOI: 10.1080/10810730.2022.2153288

Science Business (2022) Conspiracy thinking: 25% of Europeans say the moon landing never happened, Science Business website, accessed 11 March 2024. https://sciencebusiness.net/news/covid-19/conspiracy-thinking-25-europeans-say-moon-landing-never-happened

Scientific American (2022) Experts Grade Facebook, TikTok, Twitter, YouTube on Readiness to Handle Election Misinformation, Scientific American website, accessed 12 March 2024. https://www.scientificamerican.com/article/experts-grade-facebook-tiktok-twitter-youtube-on-readiness-to-handle-election-misinformation1/

A5 pt2 Studio Review


Uncomfortable Filmmaking Studio Website Review

What’s Going On Alice?

Keira Gardener

What's Going On Alice? is a short film by Keira Gardener that explores "how creepy live studio audiences and laugh tracks would be if they existed in real life" through the protagonist, Alice (Keira Gardener 2023). The film follows a three-act structure: the first act shows the fake show format, the second act hints at the cracks in Alice's reality, and the third act reveals the illusion of Alice's reality through the addition of 'the Audience Member', an unnamed member of the laugh-track crowd who has materialised in Alice's apartment, and thus into the show. 

In her reflection on the making process, Keira noted how she "wanted to play around with the feeling of paranoia for the most part, [leaving] questions unanswered and endings unfinished" (Keira Gardener 2023). In this sense, Keira focused on creating discomfort through the film's content and form, as the film imbues the audience with a distorted sense of reality, like that of The Truman Show (1998) or Fractured (2019), and a lack of trust in the protagonist's perception of reality. Additionally, the film is aware of itself as a form, and comments on this through text, laugh tracks and fourth-wall breaks. 

A key affordance of the uncomfortable filmmaking approach is the ability to create discomfort through any means of breaking film conventions. Although this enabled any interpretation of what an uncomfortable film could be, it also sometimes meant trying to fit too much in, undermining the film's ability to make an audience uncomfortable. Keira (2023) reflected that she "learnt that you can always add more, play around with footage in different ways, [and] experiment more, even when you think you are done. I also learnt however that you need to know when to stop adding and be happy with what you have created." Through experimentation we learn what works and what doesn't in film, and that was a core lesson of this studio: to experiment beyond the finish line and try new things even when we are content with the current piece. Keira was effective in communicating how experimentation added to the process of making What's Going On Alice?, and how her final film was better as a result of pushing her boundaries.

Explore

Wren Hartley

Wren Hartley's short film Explore is a mockumentary-style fiction film (I think about an outsider arriving in Australia and being amused by the wildlife and culture here) presented through a robotic voiceover and archival footage of animals, maps, Australia and the UK. The film intermittently cuts to a blue screen with a beeping noise drowning out the voiceover and ambient music. This continues throughout the film, with no particular consistency or reasoning.  

In his reflection on the filmmaking process, Wren touches on a key ideology of the Uncomfortable Filmmaking studio that Kiralee instilled in us: that 'uncomfortable filmmaking' can simply be the discomfort of the creator in the making of their film. Discomfort in trying something new or unfamiliar, discomfort in the inability to make something, discomfort when things go wrong and in having to adapt. I feel that, without noticing it, Wren experienced this discomfort in the making of his film, as he noted: "after writing a few scripts and starting to put [the footage] together [the] idea wasn't really working. I also had a big problem with time, and had to ditch my own filmed footage which I know will impact my mark but I think was the right decision in the end". Additionally, Wren mentioned that there is a plot and meaning behind his film, yet upon viewing Explore, the content is not explicit to the viewer, or at least not to me. This falls into the uncomfortable filmmaking experience, as, as a viewer, I am left unsatisfied with the lack of palpable information I can read into or understand. I think Wren was very successful in making an uncomfortable film, one that is simultaneously confusing and deeply humorous.

Real People, Reel Lives Studio Website Review

Real People, Reel Lives conveyed the significance of film as a format for telling people's stories through documentary-style cinematography. Life in 35mm, a short film by Ellesha Atukorala, Karmen Pei and Yixuan Huang, follows the story of Yuci Zhang and her passion for film photography. Similarly, Jessie Rowe's Vallon tells the story of Tasmanian farmer Jo Lee's passion for flower growing and her life living and working on her farm. Both stories are based on real people and their experiences; they are nonfiction and told through a singular perspective. In the case of Life in 35mm and Vallon, the documentary is about a sole person: "Real People". Through this, their voice is given a platform, meaning they are a co-creator and co-writer alongside the director, as they dictate the story that is told about them. Unlike journalism or exploitative documentary, Real People, Reel Lives clearly aimed to portray subjects accurately and according to their perception of themselves. The short-form video style, or reel, underpins the content of the studio; the documentaries are short in length and thus more engaging for young audiences, sweet doses of reality. The 'reel' format also seems to dismiss the general 'A roll and B roll' structure of documentaries, making the films more fast-paced and textured. Both Life in 35mm and Vallon have a wide range of shot types, scenes and settings, yet they are cohesive and stick to one aesthetic. Overall, I think Real People, Reel Lives' focus as a studio was to create short, engaging documentary-style videos in co-creation with whomever the film was about, and based on Life in 35mm and Vallon, the studio succeeded in teaching and portraying that.

Life in 35mm

Ellesha Atukorala, Karmen Pei & Yixuan Huang

Vallon 

Jessie Rowe