UK Tech Firms and Child Safety Agencies to Test AI's Ability to Generate Abuse Images

Technology companies and child safety organizations will receive authority to evaluate whether artificial intelligence systems can generate child abuse material under recently introduced British laws.

Significant Rise in AI-Generated Harmful Content

The declaration coincided with revelations from a protection monitoring body showing that cases of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the changes, the authorities will permit approved AI companies and child safety groups to inspect AI models – the foundational systems behind conversational AI and image generators – and verify that they have adequate safeguards to prevent them from producing depictions of child exploitation.

"Ultimately about preventing abuse before it occurs," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now detect the risk in AI models early."

Addressing Legal Challenges

The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI creators and others cannot generate such images as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was published online before addressing it.

This legislation aims to prevent that problem by enabling experts to stop the creation of such material at its source.

Legislative Framework

The government is introducing the amendments as revisions to criminal justice legislation, which will also ban owning, producing or sharing AI models designed to create child sexual abuse material.

Real-World Consequences

Recently, the official toured the London base of a children's helpline and listened to a mock-up of a call to counsellors involving a report of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.

"When I learn about children facing blackmail online, it fills me with intense anger and causes justified concern amongst families," he said.

Concerning Data

A prominent internet monitoring organization stated that instances of AI-generated abuse material – such as online pages that may contain numerous files – had more than doubled so far this year.

Cases of category A material – the gravest form of abuse – increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
  • Portrayals of infants to toddlers increased from five in 2024 to 92 in 2025

Industry Reaction

The law change could "represent a vital step to ensure AI tools are secure before they are launched," stated the chief executive of the internet monitoring organization.

"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few clicks, giving offenders the ability to make potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Content which further exploits survivors' trauma, and makes children, particularly girls, less safe online and offline."

Counseling Interaction Information

Childline also released details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:

  • Employing AI to evaluate weight, physique and looks
  • Chatbots dissuading young people from consulting trusted adults about harm
  • Being bullied online with AI-generated material
  • Digital extortion using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and associated topics were discussed, four times as many as in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.

Lauren Tucker