Content Warning: discussions of sex crimes and child sexual abuse
While the prevalence of artificial intelligence (AI) use amongst young adults and students is undoubtedly controversial, most discourse glosses over the sinister role AI has played in facilitating image-based abuse, particularly abuse targeting young adults and children.
Ongoing debates about the impact of AI on climate change, intellectual property and cognitive bias, to name a few, are already omnipresent online. However, more dialogue is needed on how certain AI programs have made abusive and non-consensual forms of pornography more accessible. eSafety reports have revealed that the use of generative AI to create synthetic yet hyper-realistic, sexually explicit content (deepfakes) has doubled in the past 18 months, and an estimated 99 per cent of deepfake victims are women and girls.
Cultures of sexual violence are already a concern amongst Australian youth and adults, disproportionately impacting women, with non-consensual sexting, the sharing of nude images and sexual assault all disturbingly prevalent. As deepfakes become increasingly convincing and difficult to detect, the nature of sexual abuse online is shifting, enabling new kinds of sexual bullying, extortion and violence.
Deepfake applications can instantaneously “nudify” or undress innocent images of people uploaded to them, and are used maliciously to create non-consensual pornographic content. Frighteningly, these tools are lucrative for tech companies and highly accessible, putting women and girls at serious risk. There have already been significant cases of deepfake image-based abuse targeting female students in Melbourne, so it is more critical than ever to remain aware of the “real and irreparable” damage these tools cause, in the words of Minister for Communications Anika Wells.
Albanese Government’s Ban on “Nudify” Apps
Whilst sharing or threatening to share non-consensual sexually explicit deepfake content is illegal, the development and promotion of ‘nudify’ tools are currently not a criminal offence. A 2 September media release from the Albanese government details action being taken to counter this concern. Working alongside the International Centre for Missing and Exploited Children, Wells has announced a proactive initiative to “stop it at the source.” The ban on ‘nudify’ tools will place the onus on tech companies to restrict access to tools used to create sexually explicit imagery, in a similar manner to the upcoming amendments to the Online Safety Act, which restrict social media use for children under 16 and prevent them from having harmful conversations with AI chatbots.
Although this restriction is a step towards better online safety, the ban has considerable limitations, with Wells acknowledging that the move “won’t eliminate the problem of abusive technology in one fell swoop.” The initiative can effectively impose restrictions on large-scale tech companies with “in-house” AI applications (such as OpenAI’s ChatGPT), as these can detect and deny inappropriate requests. However, open-source AI models, which can be developed and trained by anyone, pose a greater risk, as it will be more difficult to regulate their production of sexually explicit content.
Given these limitations, and an AI industry that is constantly changing and adapting to technological developments and pressures, educators and authorities must advocate for greater digital literacy and education on the use of AI from an ethical, consent-focused standpoint.
Impacts of Deepfake Image-Based Abuse
Deepfake image-based abuse drastically alters how young people navigate sexually explicit material online. What is sometimes known as “revenge porn” is already a serious concern amongst teenagers and young adults: according to a 2019 RMIT study, one in three Australians have had sexual or nude images taken and shared without their consent. Unfortunately, deepfake applications make the manufacture and sharing of non-consensual sexual imagery even easier.
Whilst the material produced by these applications may not be real, image-based abuse has serious real-life implications for victims. It disproportionately impacts women, Aboriginal and Torres Strait Islander people, the LGBTQIA+ community and people with a disability. Moreover, those who experience image-based abuse are almost twice as likely to experience symptoms of anxiety, depression and psychological distress. This can lead to social withdrawal, disrupt relationships with friends and family, and negatively impact a victim’s self-perception and body image.
Consequently, it is critical to recognise the harms of deepfake image-based abuse now. Increased regulation and the banning of these applications are steps in the right direction, but combating the psychological consequences of image-based abuse begins with advocacy and support for those dealing with it.
Safety Resources for Sexual Harm
It can be scary to speak up if you’ve been impacted by deepfake image-based abuse. Yet the more awareness is raised about the abuse of these applications, the more resources will exist for those struggling with social stigma and mental and physical distress.
According to the Office of the eSafety Commissioner, the best way to deal with image-based abuse is to immediately collect evidence and report it through their online harm portal, which can investigate reports and assist with the removal of images as well as legal and regulatory action. The portal also offers further information and resources for anyone concerned about the impact of image-based abuse.
The University of Melbourne additionally offers support for those affected by sexual harm through the Safer Community Program, including the Speak Safely Portal, and through Counselling & Psychological Services.
Telephone support for sexual harm can be accessed through 1800RESPECT on 1800 737 732, Lifeline Australia on 13 11 14, or Kids Helpline on 1800 55 1800.