Content Warning: This column contains mention of mature and sexual themes that may not be suitable for younger readers.
Even though she made Grammys history and was named Time’s Person of the Year, Taylor Swift just can’t catch a break.
In late January, a horrifying trend emerged that reduced her to an objectified pawn. On the message board 4chan, which is known for spreading conspiratorial hate speech, participants in a chatroom eagerly used text-to-image AI to create and share explicit images of the singer-songwriter. Compliments flowed within 4chan itself, and within weeks some of the images had spread beyond the forum and gone viral. One photo accrued 24,000 reposts, over 100,000 likes and 45 million views in a mere 17 hours before being taken down, while others, all marked with the hashtag “Taylor Swift AI,” trended across multiple X accounts.
The rise and spread of explicit AI-generated deepfakes carry a game-like, albeit sickening, nature. According to Graphika senior analyst Cristina López, “These images originated from a community of people motivated by the ‘challenge’ of circumventing the safeguards of generative A.I. products, and new restrictions are seen as just another obstacle to ‘defeat.’” This pseudo-competition would not exist at all were it not for the abuse of AI image generation, and it plays out with real lives at stake.
This deepfake pattern is further fueled by collaboration. As users of sites like 4chan and Telegram share tips on how to evade restrictions, the abuse of this technology persists, and public figures are continually defaced in the public eye. White House Press Secretary Karine Jean-Pierre said it best: “[Social media platforms have] an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people.”
Increased user freedom on the internet and access to its tools cannot come at the sacrifice of basic human decency.
These deepfakes affect not only celebrities but also demographics as vulnerable as female minors. Prior to the Taylor Swift sweep, boys in southern Spain between the ages of 12 and 14 used an AI-powered platform to generate images depicting real girls completely undressed. Over 20 female adolescents, some as young as 11 years old, were robbed of their dignity and privacy by AI programs they were completely unaware of, sparking well-deserved outrage around the world.
Child pornography is already a societal epidemic: Between 2008 and 2023, reports of child sexual abuse materials rose by 15,000%. In 2021, the National Center for Missing and Exploited Children received 85 million pieces of content containing child sexual exploitation. The fact that this crisis has reached a new, digital plane further reinforces the need for a collective moral check-in: Is it right to undress, deface or create a lewd portrait of someone from the comfort of your anonymity?
If your answer is not an immediate and resounding “no,” I encourage you to reflect on why, and on what your answer would mean for societal power dynamics: Whose individual rights are at risk when objectifying and disparaging photos can be created in an instant by accessible technology for malicious gain? The truth of the matter is simple: Whether it concerns their bodies or their identities, no human is a chess piece, and policies protecting their rights are not puzzles to be solved or games to be won.