UK Tech Companies and Child Protection Officials to Test AI's Ability to Generate Exploitation Content
Tech firms and child protection agencies will be granted permission to assess whether AI systems can produce child exploitation images under recently introduced British legislation.
Substantial Increase in AI-Generated Illegal Material
The announcement coincided with revelations from a protection watchdog showing that reports of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the changes, the authorities will allow designated AI companies and child safety organizations to examine AI models – the foundational technology for chatbots and image generators – and verify they have adequate protective measures to stop them from creating depictions of child sexual abuse.
Kanishka Narayan said the move was "ultimately about stopping exploitation before it happens," adding: "Experts, under rigorous conditions, can now identify the danger in AI models promptly."
Addressing Legal Challenges
The changes have been implemented because it is illegal to produce and own CSAM, meaning that AI creators and other parties cannot create such images as part of a testing process. Until now, officials had to wait until AI-generated CSAM was uploaded online before addressing it.
This law aims to avert that problem by enabling experts to halt the creation of such images at source.
Legislative Structure
The changes are being added by the authorities as revisions to the criminal justice legislation, which is also implementing a ban on possessing, producing or distributing AI models developed to generate child sexual abuse material.
Real-World Impact
Recently, the minister toured the London headquarters of a children's helpline, where he heard a mock-up call to counsellors featuring a report of AI-based abuse. The call depicted a teenager seeking help after being extorted with a sexualised deepfake of themselves, created using AI.
"When I hear about young people experiencing blackmail online, it is a source of extreme frustration to me and rightful concern amongst families," he said.
Concerning Statistics
A leading online safety organization stated that instances of AI-generated exploitation material – such as webpages that may include numerous images – had significantly increased so far this year.
Instances of the most severe category of content – depicting the most serious forms of abuse – rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025
- Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "constitute a vital step to ensure AI products are safe before they are launched," commented the head of the internet monitoring organization.
"AI tools have made it possible for survivors to be targeted repeatedly with just a few simple actions, giving criminals the ability to create potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Material which further exploits survivors' suffering, and renders children, particularly female children, more vulnerable both online and offline."
Support Interaction Information
Childline also published details of counselling interactions where AI has been mentioned. AI-related harms mentioned in the sessions include:
- Using AI to rate body size and looks
- Chatbots discouraging children from talking to trusted adults about abuse
- Being bullied online with AI-generated content
- Digital extortion using AI-manipulated images
Between April and September this year, the helpline conducted 367 support interactions in which AI, chatbots and associated topics were discussed, significantly more than in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.