Ian Critchley, National Police Chiefs’ Council Lead for Child Protection, said: “In the last five years the volume of online child sexual abuse offending has rapidly increased, with new methods and ways of offending being discovered on a regular basis.
“As police lead I have been working with the IWF - a world leader in this area - together with partners and law enforcement colleagues, to understand the impact of what we have been calling ‘the emerging threat’ of Artificial Intelligence.
“It is clear that this is no longer an emerging threat – it is here, and now. We are seeing an impact on our dedicated victim identification officers, who seek to identify each and every real child that we find in this abhorrent material. We are seeing children groomed, we are seeing perpetrators make their own imagery to their own specifications, we are seeing the production of AI imagery for commercial gain – all of which normalises the rape and abuse of real children.
“AI has many positive attributes, and we are developing opportunities with partners like the IWF, Government and industry to turn this technology against those who would abuse it to prey on children.
“Together we continue to work at pace to ensure that industry prevents these appalling images being created, shared and distributed on their platforms, and that we identify and bring to justice the abhorrent offenders who seek to abuse children. It is also why the Online Safety Act is the most important piece of legislation in many years; to ensure the safety of all children from abusive and harmful material, an increasing amount of which is AI generated.”
The increased availability of this imagery also poses a real risk to the public and serves to normalise sexual violence against children.
The IWF has discovered online manuals dedicated to helping criminals fine-tune AI image generators to produce more realistic imagery.
Now, with criminals using real children as models for AI image generation, analysts say new imagery can be created at the click of a button.
IWF Internet Content Analyst Alex said: “The IWF has been aware for a long time of the tendency among perpetrator communities to collect content featuring their preferred child sexual abuse victims. Perpetrators have favourite victims, share content featuring that child, and look for more.
“Now, perpetrators can create a model and generate as many new images of that victim as they like.
“These models are comparable to 3D models insofar as they aim to reproduce the likeness of that victim as closely as possible, and retain the flexibility to transpose generated characters into any setting; any scenario; any type of activity.”
Deborah Denis, Chief Executive of The Lucy Faithfull Foundation, a charity which helps people concerned about their sexual thoughts, feelings, or actions towards children to stop and address their behaviour, said: “AI-generated sexual images of children are a threat that needs to be tackled with the utmost urgency. AI tools are being used to make huge amounts of sexual images of children, and we find ourselves at the start of an epidemic.
“Some people might try to justify what they’re doing by telling themselves that AI-generated sexual images of children are less harmful, but this is not true. Viewing such images reinforces the attraction to things that are illegal and abusive.
“AI technology is evolving at a rapid rate and so are the risks to children. AI companies must put child safety front and centre. Politicians, law enforcement, regulators and all those responsible for protecting children must work together to ensure that the technology is properly regulated, and AI-generated sexual images of children are removed as a top priority. It is not up to children to protect themselves.”