Public exposure to ‘chilling’ AI child sexual abuse images and videos increases

Published: Fri 18 Oct 2024

AI-generated child sexual abuse content is increasingly being found on publicly accessible areas of the internet, exposing even more people to the harmful and horrific imagery, says the Internet Watch Foundation (IWF).

Many of the images and videos of children being hurt and abused are so realistic that they can be very difficult to tell apart from imagery of real children, and they are regarded as criminal content under UK law in much the same way as ‘traditional’ child sexual abuse material would be [1].

In the past six months alone, analysts at the IWF have seen a 6% increase in confirmed reports containing AI-generated child sexual abuse material compared with the whole of the preceding 12 months [2].

The IWF, Europe’s largest hotline dedicated to finding and removing child sexual abuse imagery from the internet, is warning that almost all the content (99%) [3] was found on publicly available areas of the internet and was not hidden on the dark web.

Most of the reports have come from members of the public (78%) [4] who have stumbled across the criminal imagery on sites such as forums or AI galleries. The remainder were actioned by IWF analysts through proactive searching.

Analysts say that viewing AI generated content of children being sexually abused can be as distressing as seeing real children in abuse imagery if a person is not prepared or trained to cope with seeing such material. 

Some AI child sexual abuse material is classed as non-photographic imagery, such as cartoons, which is also regarded as harmful to view and is assessed accordingly by IWF analysts [1].

The IWF traces where child sexual abuse content is hosted so that analysts can act to get it swiftly removed. 

More than half of the AI-generated content found in the past six months was hosted on servers in two countries, the Russian Federation (36%) and the United States (22%), with Japan and the Netherlands following at 11% and 8% respectively [5].

Addresses of webpages containing AI-generated child sexual abuse images are uploaded on to the IWF’s URL List, which is shared with the tech industry to block the sites and prevent people from being able to access or see them.

The AI images are also hashed – given a special unique code like a digital fingerprint – and tagged as AI on a Hash List of more than two million images, which can be used by law enforcement in their investigations.

Jeff [6], a Senior Internet Content Analyst at the IWF, said: “This criminal content is not confined to mysterious places on the dark web. Nearly all of the reports or URLs that we’ve dealt with that contained AI-generated child sexual abuse material were found on the clear web.

“I find it really chilling, as it feels like we are at a tipping point and the potential is there for organisations like ourselves and the police to be overwhelmed by hundreds and hundreds of new images, where we don’t always know if there is a real child that needs help.”  

Sadly, perpetrators are using images and videos of real victims to generate some of this imagery, as AI technology allows any imagined scenario to be brought to life.

IWF analysts are continuing to see imagery of existing child victims used in new AI images and new ‘deepfake’ videos of abuse, as previously reported by the IWF [7]. Many of these are adult pornography videos which have been manipulated with AI tools.

Perpetrators are also using AI to turn known ‘softcore’ sets of child sexual abuse imagery, which is less explicit, into new, more extreme examples of content. 

Survivors have told the IWF how traumatising it is for images of their abuse to continue to be circulated and used online, as it impacts on their ability to heal and move on from their ordeal.  

Derek Ray-Hill, Interim Chief Executive Officer at the IWF, said: “People can be under no illusion that AI generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online. 

“To create the level of sophistication seen in the AI imagery, the software used has also had to be trained on existing sexual abuse images and videos of real child victims shared and distributed on the internet.  

“The protection of children and the prevention of AI abuse imagery must be prioritised by legislators and the tech industry above any thought of profit. Recent months show that this problem is not going away and is in fact getting worse. We urgently need to bring laws up to speed for the digital age, and see tangible measures being put in place that address potential risks.” 

Assistant Chief Constable Becky Riggs, Child Protection and Abuse Investigation Lead at the National Police Chiefs’ Council, said: “The scale of online child sexual abuse and imagery is frightening, and we know that the increased use of artificial intelligence to generate abusive images poses a real-life threat to children. 

“Law enforcement is committed to finding and prosecuting online child abusers, wherever they are. Policing continues to work proactively to pursue offenders, including through our specialist undercover units, who disrupt child abusers online every day, and this is no different for AI generated imagery.

“While we will continue to relentlessly pursue these predators and safeguard victims, we must see action from tech companies to do more under the Online Safety Act to make their platforms safe places for children and young people. This includes and brings into sharp focus those companies responsible for the developing use of AI and the necessary safeguards required to prevent it being used at scale, as we are now seeing. 

 “We continue to work closely with the National Crime Agency, government and industry to harness technology which will help us to fight online child sexual abuse and exploitation.” 

 

Editors’ notes: 

[1] AI-generated child sexual abuse material in the UK falls under two different laws:

The Protection of Children Act 1978 (as amended by the Criminal Justice and Public Order Act 1994). This law criminalises the taking, distribution and possession of an “indecent photograph or pseudo-photograph of a child”.  

The Coroners and Justice Act 2009. This law criminalises the possession of “a prohibited image of a child”. These are non-photographic – generally cartoons, drawings, animations or similar. 

[2] From April 2023 to March 2024, 70 reports of AI CSAM were actioned. From April 2024 up to the end of September 2024, 74 reports were actioned.

[3] Out of 74 reports from April 2024 to the end of September 2024, 73 were found on the clear web.

[4] From April 2024 to the end of September 2024, 58 reports of AI-generated CSAM actioned by the IWF came from the public. IWF analysts actioned 16 reports through proactive work.

[5] Global hosting figures from April 2024 until the end of September 2024.

Apr 24 to Sept 24

Country                        Number of sites with AI CSAM/Prohibited imagery
Russian Federation             27
United States                  16
Japan                           8
Netherlands                     6
Sweden                          7
Germany                         1
France                          3
Ukraine                         1
Hong Kong                       1
Indonesia                       1
Latvia                          1
Onion URL (Hidden Service)      1
Luxembourg                      1
Total                          74


[6] Jeff is a pseudonym used to protect the analyst’s privacy.

[7] What Has Changed in the AI CSAM Landscape: AI CSAM Report Update, Internet Watch Foundation, July 2024: https://admin.iwf.org.uk/media/nadlcb1z/iwf-ai-csam-report_update-public-jul24v13.pdf
