The Government is cracking down on AI-generated child sexual abuse imagery following an Internet Watch Foundation (IWF) campaign to tighten up regulations.
Today (February 2), Home Secretary Yvette Cooper announced a raft of new legislation designed to curb the rise of life-like child sexual abuse material generated by AI.
The new rules will outlaw the possession and distribution of AI models that have been optimised to create child sexual abuse imagery, and will also criminalise the possession of manuals that instruct offenders on how to use AI to generate such imagery.
As the UK’s front line against child sexual abuse imagery online, the IWF has welcomed the announcement, but says further steps are needed to clamp down on the abuse of this new technology.
The IWF was among the first to sound the alarm about the spread of AI-generated and synthetic child sexual abuse imagery, and has long campaigned for these measures to be introduced.
Currently, loopholes in UK law make it too easy for criminals to create potentially limitless amounts of realistic child sexual abuse imagery, often offline, without detection.
The announcement comes as newly released IWF data shows reports of AI-generated child sexual abuse imagery found online by the IWF have more than quadrupled in a year.
IWF analysts confirmed 245 reports of AI-generated child sexual abuse imagery in 2024 compared with 51 in 2023, a 380% rise. Of these reports, 193 involved imagery that was so realistic it had to be treated exactly the same as ‘real’ photographic imagery of child sexual abuse.
Derek Ray-Hill, Interim Chief Executive of the IWF, said: “We have long been calling for the law to be tightened up, and are pleased the Government has adopted our recommendations. These steps will have a concrete impact on online safety.
“The frightening speed with which AI imagery has become indistinguishable from photographic abuse has shown the need for legislation to keep pace with new technologies.
“Children who have suffered sexual abuse in the past are now being made victims all over again, with images of their abuse being commodified to train AI models. It is a nightmare scenario, and any child can now be made a victim, with life-like images of them being sexually abused obtainable with only a few prompts, and a few clicks.
“The availability of this AI content further fuels sexual violence against children. It emboldens and encourages abusers, and it makes real children less safe. There is certainly more to be done to prevent AI technology from being exploited, but we welcome today’s announcement, and believe these measures are a vital starting point.”