#3: The Emergence of AI and Disinformation


The TRUTH BE TOLD course has opened my eyes to the emergence of Artificial Intelligence and the prevalence of disinformation, something I had been blissfully ignoring before. I chose this class because I knew that, even though the topic unsettles me, it is something we all should be aware of, especially as media makers creating media at the very time these technologies are developing.


Recently, AI art has emerged online in relation to the genocide in Palestine. Instagram users have been sharing to their stories an AI-generated image of a Palestinian mass displacement camp, in which nondescript bodies in bags spell out “All Eyes on Rafah”. Although it is important for people to be talking about Palestine and spreading awareness, I think there are problems with tying that support to an AI image. So many videos, photos, and stories shared by Palestinians and journalists show the real horror of what is happening in Palestine, and yet the image that has gained the most traction is one that depicts none of it: an image that shows no humanity and no reality, one that is palatable to all. It is scary to know that people would rather sterilise this situation into an AI image while Palestinians have to see the real, traumatising scenes all day, every day. Hence my fear with AI is that it will strip humanity from news sectors and online spaces, which would shape people's opinions and attitudes towards things that do not immediately affect them and, in turn, harm our society as a whole. The emergence of AI into political spaces would also affect the credibility of news sources, which could erode public trust (something I feel is already declining).

I predict that the effects of AI and disinformation will get worse before they get better, as AI is still rapidly evolving and there is no end to its progression in sight. I predict that the worst of the negative effects will relate to politics, public matters, pop culture, and scamming. These predictions are based on disinformation and AI content that has already been published and had real-world effects. Think: the Voice Referendum, Trump's claims and their effects on American democratic processes, COVID, the genocide in Palestine, and even novel examples like the Kate Middleton 'disappearance'.

I predict that the prevalence of AI online will result in more online regulation, driven by public disapproval. It is already so easy to get lost and confused online, and I think these effects on public trust will become apparent over the next few years, especially as AI video and audio generators become more user-friendly, advanced, and readily available.

In her lecture, Sushi Das shared that one of the issues with fact-checking online is the sheer volume of information and posts being published. I predict that, through a desire to use AI for good, there will be more human and AI collaboration. My hope is that AI could be used as a tool to mass fact-check online platforms and alert users to posts that do, or could, contain misinformation, disinformation, or AI-generated content. Using AI to negate AI: that's the dream.

I think there is real importance in Dr. Elkins' simple message to ask 'why' and to be open to different points of view by having difficult conversations (personal communication, 8 May 2024). My hope is that more conversations about the effects of AI and disinformation will lead to public resistance to both, and towards a social shift. I hope that this shift would involve enhanced media literacy, more regulation of social media platforms, distinct labelling of AI contributions and creations, and a movement towards humanism within online spheres.

Sources

@Shahv4012 (2024) All Eyes On Rafah [AI-generated image], Instagram, accessed 30 May 2024.
