It usually starts quietly. A photo you didn’t take. A video you never consented to. Your face placed on a body that isn’t yours, sexualised and circulated as if it were public property. Until recently, the law treated this kind of abuse as a grey area. Now, the UK government is finally moving to close that gap.
The government has confirmed it is fast-tracking legislation to criminalise the creation of non-consensual sexual deepfakes — not just their distribution. That distinction matters. Until now, women were often forced to prove harm after the fact, chasing platforms to remove content and explaining why something “not real” felt so violating. This shift acknowledges that the damage happens the moment the image is made.
Ministers have been unusually blunt about why the law is moving quickly. They have described sexual deepfakes as a “vile” form of abuse and acknowledged that AI tools are being used to humiliate, control and silence women. The message is clear: this is not innovation gone wrong; it is exploitation.
The statistics support that urgency. Research consistently shows that deepfake abuse is overwhelmingly gendered. Studies of online deepfake content suggest that around 96 per cent of deepfakes are pornographic, and the vast majority depict women. Police-commissioned research also indicates that roughly six in ten people are worried about having a deepfake made of them, with women far more likely to report fear and anxiety around this kind of image abuse. And yet, cases remain heavily under-reported, partly because many women don’t trust the system to respond meaningfully.
What makes deepfakes particularly insidious is how little material is needed. There doesn’t have to be an intimate photo in the first place. A LinkedIn headshot, an Instagram post, or a screenshot from a video call is enough. Visibility becomes vulnerability. The law’s focus on creation reflects that reality.