AI generated child sexual abuse content is increasingly being found on publicly accessible areas of the internet, exposing even more people to the harmful and horrific imagery, says the Internet Watch Foundation (IWF).
Many of the images and videos of children being hurt and abused are so realistic that they can be very difficult to tell apart from imagery of real children and are regarded as criminal content in the eyes of UK law, much in the same way as ‘traditional’ child sexual abuse material would be.
In the past six months alone, analysts at the IWF have seen a 6% increase in confirmed reports containing AI generated child sexual abuse material, compared with the preceding 12 months.
The IWF, Europe’s largest hotline dedicated to finding and removing child sexual abuse imagery from the internet, is warning that almost all the content (99%) was found on publicly available areas of the internet and was not hidden on the dark web.
Most of the reports have come from members of the public (78%) who have stumbled across the criminal imagery on sites such as forums or AI galleries. The remainder were actioned by IWF analysts through proactive searching.
Analysts say that viewing AI generated content of children being sexually abused can be as distressing as seeing imagery of real children being abused, particularly for anyone who is not prepared or trained to cope with such material.
Some AI child sexual abuse material is classed as non-photographic imagery, such as cartoons, and is also regarded as harmful to view and accordingly assessed by IWF analysts.
The IWF traces where child sexual abuse content is hosted so that analysts can act to get it swiftly removed.
More than half of the AI generated content found in the past six months was hosted on servers in two countries, the Russian Federation (36%) and the United States (22%), with Japan and the Netherlands following at 11% and 8% respectively.
Addresses of webpages containing AI generated child sexual abuse images are uploaded onto the IWF’s URL list, which is shared with the tech industry so that the sites can be blocked and people prevented from accessing or seeing them.
The AI images are also hashed – given a special unique code like a digital fingerprint – and tagged as AI on a Hash List of more than two million images which can be used by law enforcement in their investigations.
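The idea of a hash as a “digital fingerprint” can be sketched in miniature. This is a minimal illustration only, and assumes nothing about the IWF’s actual pipeline: hash lists for imagery in production typically use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, whereas the cryptographic hash below simply shows how a file’s bytes map to one short, unique code.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a short hex digest acting as a unique 'digital fingerprint'.

    A cryptographic hash (SHA-256 here) gives identical output for
    identical input, and a completely different output if even one
    byte of the input changes.
    """
    return hashlib.sha256(data).hexdigest()

# Hypothetical image bytes, for illustration only.
image_a = b"\x89PNG example bytes"
image_b = b"\x89PNG example bytes"      # an exact copy of image_a
image_c = b"\x89PNG different bytes"    # a different file

assert fingerprint(image_a) == fingerprint(image_b)  # exact copies match
assert fingerprint(image_a) != fingerprint(image_c)  # any change alters the hash
```

A list of known fingerprints can then be checked against newly seen files without storing or redistributing the images themselves, which is the general principle behind hash-list matching.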
Jeff, a Senior Internet Content Analyst at the IWF, said: “This criminal content is not confined to mysterious places on the dark web. Nearly all of the reports or URLs that we’ve dealt with that contained AI generated child sexual abuse material were found on the clear web.
“I find it really chilling, as it feels like we are at a tipping point and the potential is there for organisations like ourselves and the police to be overwhelmed by hundreds and hundreds of new images, where we don’t always know if there is a real child that needs help.”
Sadly, images and videos of real victims are being used by perpetrators to generate some of the imagery, as the AI technology allows any imagined scenario to be brought to life.
IWF analysts are continuing to see imagery of existing child victims used in new AI images and new ‘deepfake’ videos of abuse, as previously reported by the IWF. Many of these are adult pornography videos which have been manipulated with AI tools.
Perpetrators are also using AI to turn known, less explicit ‘softcore’ sets of child sexual abuse imagery into new, more extreme examples of content.
Survivors have told the IWF how traumatising it is for images of their abuse to continue to be circulated and used online, as it impacts on their ability to heal and move on from their ordeal.
Derek Ray-Hill, Interim Chief Executive Officer at the IWF, said: “People can be under no illusion that AI generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online.
“To create the level of sophistication seen in the AI imagery, the software used has also had to be trained on existing sexual abuse images and videos of real child victims shared and distributed on the internet.
“The protection of children and the prevention of AI abuse imagery must be prioritised by legislators and the tech industry above any thought of profit. Recent months show that this problem is not going away and is in fact getting worse. We urgently need to bring laws up to speed for the digital age, and see tangible measures being put in place that address potential risks.”
Assistant Chief Constable Becky Riggs, Child Protection and Abuse Investigation Lead at the National Police Chiefs’ Council, said: “The scale of online child sexual abuse and imagery is frightening, and we know that the increased use of artificial intelligence to generate abusive images poses a real-life threat to children.
“Law enforcement is committed to finding and prosecuting online child abusers, wherever they are. Policing continues to work proactively to pursue offenders, including through our specialist undercover units, who disrupt child abusers online every day, and this is no different for AI generated imagery.
“While we will continue to relentlessly pursue these predators and safeguard victims, we must see action from tech companies to do more under the Online Safety Act to make their platforms safe places for children and young people. This includes and brings into sharp focus those companies responsible for the developing use of AI and the necessary safeguards required to prevent it being used at scale, as we are now seeing.
“We continue to work closely with the National Crime Agency, government and industry to harness technology which will help us to fight online child sexual abuse and exploitation.”