British Technology Companies and Child Protection Officials to Test AI's Ability to Create Abuse Images

Tech firms and child protection organizations will receive authority to evaluate whether AI systems can produce child exploitation material under recently introduced UK laws.

Substantial Rise in AI-Generated Harmful Material

The declaration coincided with findings from a safety monitoring body showing that cases of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the changes, the government will allow approved AI developers and child protection groups to inspect AI models – the underlying technology behind chatbots and image generators – and ensure they have adequate safeguards to stop them from creating images of child exploitation.

"Fundamentally, this is about stopping exploitation before it occurs," declared Kanishka Narayan, adding: "Specialists, under rigorous protocols, can now detect the risk in AI models promptly."

Tackling Legal Obstacles

The amendments have been introduced because producing and possessing CSAM is illegal, meaning that AI developers and other parties cannot create such images as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM appeared online before taking action.

This law aims to prevent that issue by enabling authorities to halt the production of such material at its source.

Legal Framework

The government is introducing the changes as amendments to the crime and policing bill, which also implements a ban on possessing, producing or sharing AI models developed to create exploitative content.

Practical Impact

Recently, the minister visited the London headquarters of Childline and heard a simulated call to counsellors featuring an account of AI-based exploitation. The interaction depicted an adolescent seeking help after facing extortion using an explicit deepfake of themselves, created with AI.

"When I learn about children experiencing extortion online, it is a cause of intense anger in me and justified concern amongst parents," he said.

Alarming Statistics

A leading online safety organization stated that cases of AI-generated abuse material – such as online pages that may contain numerous files – had significantly increased so far this year.

Cases of the most severe material – the gravest form of abuse – increased from 2,621 images or videos to 3,086.

  • Female children were predominantly victimized, making up 94% of illegal AI images in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "constitute a vital step to ensure AI products are safe before they are released," stated the head of the online safety organization.

"Artificial intelligence systems have made it so victims can be victimised all over again with just a few clicks, giving offenders the ability to make possibly limitless amounts of advanced, lifelike exploitative content," she added. "Content which additionally commodifies victims' trauma, and renders young people, particularly girls, more vulnerable both online and offline."

Support Interaction Data

The children's helpline also released details of support sessions where AI has been referenced. AI-related risks mentioned in the sessions include:

  • Employing AI to rate body size, physique and appearance
  • AI assistants dissuading young people from consulting trusted adults about abuse
  • Facing harassment online with AI-generated material
  • Digital blackmail using AI-faked images

Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and associated terms were mentioned, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.

Anthony Green

A passionate gamer and tech writer with over a decade of experience covering video games and emerging trends in interactive entertainment.