A1 Post 3: Reflection on Deadpool Kate Middleton

For my A1 Post 2, I created four images of Kate Middleton: one of her with actor Hugh Jackman, and three depicting her as the Marvel character Deadpool. I used the website HotPot, which uses OpenAI's technology, to generate these AI images. The HotPot AI image generator is free; I was able to choose an image style, enter a short prompt, and the images appeared within seconds. I had to tweak the prompts a few times to get the image I wanted, as the AI sometimes ignored my request altogether or only produced an image for half of the prompt, but for the most part it was adequate.
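
To show how accessible this kind of text-to-image generation is, below is a minimal sketch of how a similar image might be requested programmatically through OpenAI's image API. This is only an illustration of what I assume sits behind a tool like HotPot; I used HotPot's web interface rather than any code, and the model name, parameters and prompt here are assumptions for demonstration purposes.

# Minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0) and an API key
# stored in the OPENAI_API_KEY environment variable; HotPot's internals are unknown to me.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",  # illustrative model choice
    prompt="Kate Middleton dressed as the Marvel character Deadpool, photorealistic",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # temporary link to the generated image

Even this handful of lines is more effort than the web tool required, which is exactly why it is so easy for anyone to produce this kind of content.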

Given the nature of this task, I thought these images were sufficient and that the generator served its purpose, though I do not think these photos would disinform most people even if presented without a disclaimer. The images are simply not visually coherent or thematically plausible, and would not convince anyone who is not visually impaired, or anyone who knows anything about the Royal Family and Marvel Studios. The images are highly satirical, and I feel that is obvious; a disclaimer should not be needed to make that clear to most people. It is plausible that some people could believe these images if they had poor vision, little interest in current affairs and/or poor online literacy, and given how accessible the internet is and how many people are online, there are certainly people who fall into these categories.

AI generators are so accessible and user friendly that I think it is impossible to ensure people will not use this software to incite harm or spread disinformation, especially about public figures and the ongoing wars and invasions happening globally. In such a polarised era, it seems apparent that people would use – and are using – AI generators to fabricate evidence that backs up their opinions or agendas. We have seen this in the wake of the invasion of Palestine and the war in Ukraine: people – hackers, governments, businesses – falsifying existing images for propaganda or to disillusion the public (see Inside the Kremlin’s Year of Ukraine Propaganda). Deepfakes have been used to depict national leaders stating things contrary to their genuine views in ways that aid their opposition, showing that misuse of AI-generated content is capable of inflicting great harm (Bergengruen 2023).

I feel there can be ethical concerns in creating any fictional art of other people. In the case of AI-generated artefacts, though, the content is more realistic and therefore carries more weight (greater potential impact). One concern is the real-world effect that sharing the artefact could have on the people or objects depicted in it. The content could even amount to an invasion of the person’s privacy, or defamation, if they were to take offence. In that sense, another concern is properly attributing the use of AI in the creation of the image and making it clear that the artefact is a work of fiction; this can be done through a disclaimer and clear referencing of the software and author used.

References

Bergengruen V (2023) Inside the Kremlin’s Year of Ukraine Propaganda, Time website, accessed 20 March 2024. https://time.com/6257372/russia-ukraine-war-disinformation/
