
Published: Fri 21 Feb 2025
Hannah Swirsky, Head of Policy and Public Affairs at IWF, sets out why AI is an issue for anyone whose images appear online. Sidelining safety from global discussions and key legislation harms us all.
Last week, Heads of State from across the globe gathered in Paris for the AI Action Summit. Many proclaimed the opportunities that artificial intelligence provides, with some decrying ‘excessive regulation’.
Unfortunately, the critical issue of children’s safety was largely absent from the conversation. Instead, the focus was placed firmly on ‘action’ and the economic benefits that AI could provide.
This marks a departure from the 2023 AI Safety Summit, hosted by the UK. Ahead of that event, the Internet Watch Foundation (IWF) joined forces with the then Home Secretary to address the emerging issue of AI-generated child sexual abuse imagery. As a direct result, 27 organisations, including TikTok, Snap, Stability AI and the governments of the US and Australia, signed a statement which affirmed that AI must be developed in “a way that is for the common good of protecting children from sexual abuse across all nations”.
As the UK’s front line against Child Sexual Abuse Material (CSAM), the IWF will always put the protection of children first. And this can run alongside, and indeed support, growth and innovation by ensuring safety is embedded from the start.
Since we first started monitoring in early 2023, we’ve seen a rapid, frightening advancement in the ability to artificially generate child sexual abuse imagery. Reports of such content found online by the IWF have quadrupled in the past year. According to the National Police Lead for AI, the largest criminal use of AI is by offenders creating child abuse material.
Children who have suffered sexual abuse in the past are now being made victims all over again, with images of their abuse being commodified to train AI models or altered to a more extreme category of abuse. One offender was seen to share links to fine-tuned models for 128 different victims of child sexual abuse. Each of those victims and survivors has had their abuse material reproduced on an unimaginable scale.
‘Olivia’ suffered terrible sexual abuse from the ages of three to eight years old, and our analysts have seen her grow up in pictures and recordings online. Her abuse images are widely circulated on the internet, and now an AI model designed to generate new abuse images of Olivia is available to download for free.
AI-generated CSAM normalises abuse. Viewing child sexual abuse imagery, including AI-generated CSAM, is proven to increase CSAM addiction, fuel existing thoughts of in-person child sexual abuse and even lead offenders to initiate contact with a child.
Of the images we assessed last year (2024), around 75% depicted girls. For AI-generated CSAM, that figure jumps to roughly 98%. Offenders are using AI primarily to create abusive images of girls. Tackling the misuse of technology, and particularly the spread of child sexual abuse material, is therefore vital to achieving the UK Government’s pledge to halve violence against women and girls.
Following our work in this area, the UK Home Secretary recently announced new landmark legislation designed to curb the rise of AI-generated CSAM.
The legislation, which will be put forward as part of the upcoming Crime and Policing Bill, will introduce a new criminal offence to possess, create or distribute AI models designed to generate CSAM. While it is already illegal to possess or create AI generated child sexual abuse material, the new offence will outlaw AI models that have been optimised to create this material. The legislation will also introduce a new criminal offence to possess AI manuals, which provide instructions on how offenders can modify content generation tools to generate child sexual abuse imagery.
The exact wording of the new offences is yet to be published. Parliament will then have time to scrutinise the legislation before voting on the Bill and, hopefully, passing it into law.
In the meantime, there are existing laws that provide some guardrails.
Generative AI tools and platforms will be regulated under the UK Online Safety Act if the site or app allows their users to interact with each other, for example by sharing their own Generative AI chatbots for use by others. As set out by Ofcom, requirements will include undertaking risk assessments, implementing proportionate measures to mitigate and manage those risks, and enabling users to easily report illegal posts and material that is harmful to children.
In the EU, the AI Act establishes a harmonised legal framework for the development and use of AI and introduces rules for general-purpose AI models. The CSA Directive, which is currently going through the legislative process, would, if passed, criminalise the production and dissemination of deepfakes and AI-generated material, as well as ‘paedophile manuals’. It is crucial that all forms of child sexual abuse online are criminalised and prosecuted, no matter how (or why) they were created.
We are grateful that work is underway to prevent the abuse and exploitation of emerging technology, but there is still work to do.
AI companies must prioritise child safety over profit. This includes embedding safety-by-design features and rigorous testing to ensure models do not have the capability to generate CSAM.
We believe that, in the UK, the best vehicle for safeguards against AI-generated CSAM is the Government’s upcoming AI Bill.
AI companies can – and some already are – acting today to prevent the creation of child sexual abuse imagery.
By joining the IWF, Members gain access to a suite of cutting-edge tools developed to stop the spread of criminal videos and images on the internet. This includes our Image Hash List of over 2.7 million unique hashes (digital fingerprints) of verified CSAM, which can be a vital tool in stopping known CSAM images from being uploaded or used to train AI tools. Other tools include our URL List and our Keywords List.
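For platforms wondering what hash matching looks like in practice, the sketch below is purely illustrative: it assumes a hypothetical local file of hex digests and simple exact (SHA-256) matching, whereas real deployments, including integrations of the IWF Image Hash List, typically rely on perceptual hashing and dedicated APIs not shown here. All file names and the reject_and_report handler are invented for the example.

```python
# Illustrative sketch only: blocking uploads whose hash appears on a known-CSAM hash list.
# Assumptions: the list is a local text file of hex digests (one per line) and matching is
# exact SHA-256; production systems usually also use perceptual hashes and vendor APIs.
import hashlib
from pathlib import Path

def load_hash_list(path: str) -> set[str]:
    """Read one lowercase hex digest per line into a set for fast membership checks."""
    return {line.strip().lower() for line in Path(path).read_text().splitlines() if line.strip()}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_blocked(upload_path: str, hash_list: set[str]) -> bool:
    """True if the uploaded file matches a known hash and should be rejected and reported."""
    return sha256_of_file(upload_path) in hash_list

# Example usage (hypothetical file names and handler):
# hashes = load_hash_list("known_csam_hashes.txt")
# if is_blocked("incoming_upload.jpg", hashes):
#     reject_and_report()
```

The same membership check can be applied before images are admitted to an AI training set, which is how a hash list helps prevent known abuse imagery from being used to train or fine-tune generative models.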
Additionally, if members of the public come across child sexual abuse imagery online – whether photographic or AI generated – they should report it to the IWF hotline immediately.
AI CSAM is not a problem to push to another day, or an issue which won’t affect the lives of ordinary people. It’s already here, and it’s already affecting us.
Our global leaders and AI developers can act, now, to ensure that AI can thrive without trading the safety of children.