UK Technology Firms and Child Safety Agencies to Examine AI's Capability to Create Exploitation Images
Tech firms and child protection organizations will receive permission to assess whether artificial intelligence systems can generate child exploitation images under new UK legislation.
Significant Rise in AI-Generated Harmful Material
The announcement came as figures from a safety watchdog showed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, the government will permit designated AI companies and child protection organizations to examine AI models – the technology underlying chatbots and image generators – and verify they have adequate safeguards to stop them from producing depictions of child exploitation.
The measures are "ultimately about preventing abuse before it occurs," said Kanishka Narayan, who noted: "Experts, under rigorous conditions, can now detect the danger in AI systems early."
Addressing Legal Challenges
The amendments have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and other parties cannot generate such images as part of an evaluation process. Previously, officials could only act after AI-generated CSAM had been published online.
This law is designed to prevent that problem by helping to stop the creation of such material at source.
Legal Framework
The government is introducing the changes as amendments to criminal justice legislation, which also establishes a prohibition on possessing, producing or distributing AI systems designed to generate child sexual abuse material.
Real-World Impact
This week, the minister visited the London headquarters of Childline and listened to a mock-up of a call to counsellors involving a report of AI-related exploitation. The call depicted a teenager seeking help after being blackmailed with a sexualised deepfake of themselves created using AI.
"When I learn about young people facing blackmail online, it is a source of extreme anger in me and justified anger amongst families," he stated.
Alarming Data
A prominent online safety organization stated that instances of AI-generated exploitation content – counted as webpages, each of which may contain multiple images – had significantly increased so far this year.
Instances of category A material – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
- Portrayals of infants to two-year-olds increased from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a crucial step to guarantee AI products are secure before they are launched," commented the chief executive of the online safety foundation.
"AI tools have made it possible for victims to be targeted repeatedly with just a few clicks, giving criminals the ability to produce potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further exploits survivors' suffering, and makes children, particularly girls, more vulnerable both online and offline."
Counseling Session Information
Childline also released details of support interactions where AI has been referenced. AI-related harms discussed in the sessions include:
- Employing AI to rate weight, body and appearance
- Chatbots discouraging children from consulting safe guardians about harm
- Being bullied online with AI-generated material
- Online extortion using AI-faked images
Between April and September this year, the helpline conducted 367 support interactions where AI, chatbots and associated topics were discussed, four times as many as in the equivalent timeframe last year.
Fifty percent of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.