UK Tech Firms and Child Protection Officials to Test AI's Capability to Generate Exploitation Images
Under recently introduced British legislation, technology companies and child protection organizations will be given the authority to test whether artificial intelligence tools can generate child exploitation material.
Substantial Increase in AI-Generated Illegal Content
The announcement came as a safety monitoring body revealed that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, the authorities will permit designated AI developers and child protection groups to examine AI models – the technology underpinning conversational AI and image generators – and verify that they have adequate safeguards to stop them from creating depictions of child sexual abuse.
"Ultimately about preventing abuse before it occurs," stated Kanishka Narayan, noting: "Specialists, under rigorous conditions, can now detect the risk in AI models early."
Addressing Legal Challenges
The changes address a legal obstacle: because producing and possessing CSAM is illegal, AI developers and other parties have been unable to create such images as part of a testing process. Until now, officials could act only after AI-generated CSAM had been published online.
This legislation aims to prevent that problem by allowing the creation of such material to be halted at source.
Legislative Structure
The changes are being introduced as amendments to the crime and policing bill, which also establishes a prohibition on possessing, creating or sharing AI systems designed to generate child sexual abuse material.
Practical Consequences
This week, the minister visited the London headquarters of a children's helpline and heard a mock-up call to counsellors involving an account of AI-based exploitation. The interaction portrayed a teenager requesting help after facing extortion over an explicit AI-generated image of themselves.
"When I learn about children facing extortion online, it is a source of extreme frustration in me and rightful concern amongst parents," he said.
Alarming Data
A prominent online safety organization reported that instances of AI-generated exploitation content – each of which may refer to a webpage containing multiple files – have risen significantly so far this year.
- Cases of category A material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086
- Female children were predominantly victimized, making up 94% of illegal AI depictions in 2025
- Depictions of the youngest victims, from infants to toddlers, rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a crucial step to ensure AI products are secure before they are launched," commented the chief executive of the internet monitoring organization.
"AI tools have enabled so survivors can be targeted all over again with just a simple actions, giving offenders the ability to create potentially limitless quantities of sophisticated, lifelike exploitative content," she continued. "Content which further exploits survivors' suffering, and renders young people, especially female children, more vulnerable both online and offline."
Support Interaction Information
Childline also published details of counselling interactions in which AI came up. The AI-related harms raised in those conversations include:
- Using AI to assess body size, shape and appearance
- Chatbots dissuading children from talking to trusted adults about abuse
- Being bullied online with AI-generated content
- Online blackmail using AI-faked pictures
Between April and September this year, Childline conducted 367 support sessions in which AI, chatbots and associated terms were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and of AI therapy applications.