Decoding AI post #2


After being introduced to the ARC Centre for ADM+S, a project I found interesting on their website is the Automated Content Regulation (Sexuality Education and Health Information) project. We discussed this topic briefly in class, and I hadn’t thought much about the issue before. ADM and AI are often used on social media platforms to regulate inappropriate content. This software is programmed to report content that doesn’t meet community guidelines, yet posts containing sexual health education and information are often flagged and deemed to violate those guidelines. The project involved conversations with researchers working in sexual justice, who spoke about their experiences of sexual health and education information being moderated on social media platforms. I find this topic interesting because social media is an effective way to communicate helpful information to young adults (especially about sexual health, which is particularly relevant to them), yet ADM and AI often deem this content inappropriate. As we have discussed in class, AI is built by humans and therefore carries human biases, so it is interesting to consider how humans can train AI to recognise helpful content rather than treat it as taboo, when we humans are still learning to do this ourselves.

The project also includes discussions on gender identity and nudity. I find this particularly interesting because there are many misogynistic biases in social media content regulation, affecting content about breastfeeding, artwork, fashion, and gender equality in general. Women’s bodies are continually reported on social media, and those reports train AI with even more discriminatory biases.
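
To make the over-flagging problem concrete, here is a purely hypothetical sketch of a naive keyword-based filter. It is not any real platform’s moderation system, and the keyword list and posts are invented for illustration, but it shows how a rule aimed at explicit content can also catch sexual health education.

```python
# Toy illustration only: a naive keyword-based content filter, not any
# platform's actual moderation system. It shows how a rule targeting
# "sexual" content can also flag legitimate sexual health education.

BLOCKED_KEYWORDS = {"sexual", "nudity", "explicit"}

def flag_post(text: str) -> bool:
    """Return True if the post would be flagged for review."""
    words = text.lower().split()
    return any(keyword in words for keyword in BLOCKED_KEYWORDS)

posts = [
    "Explicit adult content for sale",                      # intended target
    "Free sexual health check-ups for young adults today",  # education post
]

for post in posts:
    print(flag_post(post), "-", post)

# Both posts are flagged, even though only the first breaches the guidelines.
```

Real moderation systems are far more sophisticated than this, but the same basic tension applies: a model trained to catch prohibited sexual content can inherit the assumption that anything sexual is taboo, including health information.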

Secondly, the connection between ADM and AI and human discrimination is something I hadn’t previously considered. Of course ADM and AI can be discriminatory and biased, because they are developed by humans, yet I hadn’t really thought about it until our class discussions and the studio learnings. ADM and AI make assumptions based on people’s demographics. For example, an algorithm that assessed insurance risks charged residents of a postcode with a mostly Caucasian population 30% less than residents of postcodes where most people are from minority ethnicities (Mendoza et al., 2020). People may assume that artificial intelligence is completely neutral, with no ’personal’ values or biases, yet it is programmed by humans with human flaws.
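
To illustrate the proxy effect behind the insurance example, here is a hypothetical sketch. The postcodes, risk scores, and base premium are all made up, but they show how a pricing rule that never sees ethnicity can still produce roughly the 30% gap described by Mendoza et al. (2020), because postcode stands in for demographics.

```python
# Hypothetical sketch of proxy discrimination, loosely echoing the insurance
# example in Mendoza et al. (2020). The pricing rule never sees ethnicity,
# yet postcode acts as a proxy for it, so the outcome still discriminates.

# Made-up "historical claims" risk multipliers per postcode.
POSTCODE_RISK = {
    "3000": 1.00,  # postcode with a mostly Caucasian population
    "3999": 1.43,  # postcode where most residents are from minority ethnicities
}

BASE_PREMIUM = 1000  # dollars, arbitrary

def quote_premium(postcode: str) -> float:
    """Premium depends only on postcode, never on any protected attribute."""
    return BASE_PREMIUM * POSTCODE_RISK[postcode]

print(quote_premium("3000"))  # 1000.0
print(quote_premium("3999"))  # 1430.0 -> the first postcode pays about 30% less
```

The point is that removing protected attributes from the input data does not make a system neutral: if its training data reflects historical discrimination, correlated variables like postcode carry that bias straight back in.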


Mendoza B, Szollosi M and Leiman T (2020) ‘Automated decision making and Australian discrimination law’, Computers and Law: Journal for the Australian and New Zealand Societies for Computers and the Law, accessed 6 August 2024. https://classic.austlii.edu.au/au/journals/ANZCompuLawJl/2021/4.html#fn1

ARC Centre for ADM+S ‘Automated Content Regulation (Sexuality Education and Health Information)’. https://www.admscentre.org.au/automated-content-regulation-sexuality-education-and-health-information/
