My Take on the Taylor Swift Deepfake Scandal and AI Misuse

Our editor delves into the Taylor Swift deepfake controversy and how AI's misuse targets women

In a digital age where reality is increasingly blurred by the wizardry of AI, the recent saga involving fabricated explicit deepfake images of Taylor Swift has highlighted the extent of the damage this technology can inflict.

The doppelgängers, which were widely circulated online, have ignited calls for stringent legal measures in the U.S. to outlaw the creation of deepfake content. It's high time we call it what it is: a digital-era crisis.

READ ME: AI Safety Summit: 5 Things You Need To Know

READ ME: Prepare for AI Babies: AI is revolutionising the success rate of IVF

When millions feasted their eyes on the bogus yet viral images of Swift on platforms like X and Telegram, it wasn't just a violation of privacy; it was the epitome of online misogyny. U.S. Representative Joe Morelle expressed deep concern over the incident, stressing the urgent need to address such digital abuses. X, one of the platforms where the images were shared, moved to remove them and penalise the responsible accounts. Despite these efforts, it was akin to trying to unspill milk: one of the fake images of Swift alarmingly amassed 47 million views before its removal. In response to the incident, X has restricted search queries related to Taylor Swift and AI.

“99% of deepfake targets in pornography are women” - The State of Deepfakes report

Joe Morelle said such images "can cause irrevocable emotional, financial, and reputational harm - and unfortunately, women are disproportionately impacted". His outcry is commendable, but it marks only the start of the fight against the gendered misuse of deepfake tech. Advancements in AI have made it easier and cheaper than ever to create such harmful content.

Deepfake technology has now become the most powerful tool in a tech-savvy misogynist's arsenal. The startling spike in its use - a whopping 550% since 2019 - screams for a revolution in digital lawmaking. The UK seems to have got the memo with its Online Safety Act, but the U.S. is still playing catch-up.

“We're not just battling technology; we're up against a deeply ingrained culture of exploiting women's images”

In the UK, distributing deepfake pornography was criminalised in 2023 as part of the Online Safety Act. Meanwhile, The State of Deepfakes report slams down a sobering statistic: 99% of deepfake targets in pornography are women. AI has been weaponised against a broad spectrum of women and girls, from underage students at schools all the way through to Hollywood.

Last year, over 20 girls, ranging in age from 11 to 17, emerged as victims of this misuse of the technology in Almendralejo, Spain. The psychological and emotional toll on these young victims is immeasurable. One parent, María Blanco Rayo, recounted, "One day my daughter came out of school and she said, 'Mum, there are photos circulating of me topless.'" The AI-generated naked images of children shocked Spain and highlighted the need for more regulation.

READ ME: Love in the age of AI: 2400% search increase for AI ...

READ ME: Dr. Miriam Al Adib lifted the lid on Almendralejo's AI Scandal ...

Representative Morelle's initiative to ban non-consensual deepfake pornography is a ray of hope, but it's like bringing a knife to a gunfight. "What's happened to Taylor Swift is nothing new," Democratic Rep. Yvette D. Clarke posted on X, while Republican Congressman Tom Kean Jr. said it is "clear that AI technology is advancing faster than the necessary guardrails". Both acknowledged the grim reality: women have been the prime target of this AI abuse for far too long.

“Women are often left to navigate the aftermath of such violations in solitude.”

Michael Drury, Of Counsel at BCL Solicitors, is a leading expert on surveillance, information law and cybercrime. He tells The Modems: "There is no direct law prohibiting the sharing of 'deep fakes' unless those images are pornographic. In that case, the recently created offences under the controversial Online Safety Act 2023 will mean that a crime has been committed as long as the person whose image is shared (real or fake) has not consented and the person sharing does not believe they have consented." However, he also stipulates: "There is no direct civil wrong allowing the person said to be shown in the image to sue. For those in the same position as Taylor Swift, the obvious solution is to rely upon copyright of one's image (if copyrighted); a breach of privacy or data protection laws; harassment (as a civil wrong); perhaps defamation; or criminal law more generally. There may be some fraud involved, but aside from a private prosecution - which for good reason is not exactly the flavour of the month presently - there is a real question as to what priority this may be given by law enforcement."

Drury's analysis underscores a fundamental issue: we're not just battling technology; we're up against a deeply ingrained culture of exploiting women's images.

Taylor Swift's silence on the matter speaks volumes. Behind the legal strategising and media reports lies a stark reminder: women are often left to navigate the aftermath of such violations in solitude.

Taylor's deepfake debacle is more than just a cautionary tale about AI's dark side; it's a spotlight on the urgent need to overhaul how we approach digital content and legislation, and to protect women's rights and dignity in the digital realm.
