
A one-report system aims to quickly stop the re-uploading of abusive images.
The UK government wants online platforms to remove non-consensual intimate images (NCII), including revenge posts and AI-generated explicit deepfakes, within 48 hours of a report.
The proposal, via an amendment to the Crime and Policing Bill, includes penalties of up to 10% of qualifying worldwide revenue and, for persistent non-compliance, potential UK blocking. Ministers also want NCII treated as a “priority offence” under the Online Safety Act, and plan guidance for internet providers on blocking rogue sites that host this material.
The hardest part of tackling NCII is often the "whack-a-mole" problem: the same image is reported and removed on one site, only to resurface elsewhere.
The new approach aims to reduce that burden by allowing victims to flag once and trigger action across platforms.
Regulators are also weighing proactive “hash matching” (digital fingerprints) so known abusive images can be detected and removed automatically when someone tries to repost them.
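Hash matching works by reducing each known abusive image to a compact digital fingerprint and comparing new uploads against a database of those fingerprints. The Python sketch below is a heavily simplified illustration using an "average hash"; production systems rely on far more robust perceptual hashing (tools such as PhotoDNA or PDQ), and the hash values and handler name here are purely hypothetical.

```python
# Minimal sketch of hash matching with a simple "average hash" (aHash).
# Real NCII-matching systems use more robust, often proprietary, perceptual
# hashes; the known_hashes values and handler below are hypothetical.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink to an 8x8 greyscale image, set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# Fingerprints of images already reported as abusive (hypothetical values).
known_hashes = {0x8F3C0F0F0F0F3F1F}


def is_known_abusive(path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose fingerprint is close to any known hash."""
    candidate = average_hash(path)
    return any(hamming_distance(candidate, h) <= max_distance for h in known_hashes)


# At upload time, a platform could then block or queue the file for review:
# if is_known_abusive("upload.jpg"):
#     block_upload_and_notify_moderators()  # hypothetical handler
```

Because the fingerprint survives small edits like resizing or recompression, a match can be caught even when the re-uploaded file is not byte-for-byte identical to the original.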
Generative AI is making image-based abuse cheaper and faster. Ofcom has been investigating X over misuse of its Grok chatbot, while EU regulators are also scrutinizing X under the Digital Services Act.
This could mean that takedown rules will increasingly be judged with AI-related risks in mind.
If you’re targeted, capture evidence first: screenshots, URLs, and timestamps (content can disappear and reappear).
Then use platform tools to report “non-consensual nudity” or “intimate image abuse,” and consider contacting police if you’re in the UK.
If reporting feels overwhelming, seek specialist support and ask platforms for escalation paths. Some advocates argue that 48 hours is still too slow when harm spreads in minutes.