AI-generated child sexual abuse

Report analysis

AI-generated images of child sexual abuse can look like ‘real’ images of child sexual abuse, or they can look like non-photographic images (NPI) of child sexual abuse.

If they look like ‘real’ images of child sexual abuse, under UK law they are treated exactly like ‘real’ child sexual abuse images. These are added to the IWF URL List and IWF Hash List, and removal of the content is pursued.

If they look like non-photographic images of child sexual abuse, for example computer-generated or cartoon-like images, they are treated as prohibited imagery. These are added to the IWF NPI List.

In both cases, the content is criminal under UK law.

AI-generated sexual images and videos of children are extremely harmful, particularly as the creation of this imagery uses real children’s faces or bodies, whether directly within the imagery itself or used behind the scenes as a training tool for the AI generators. Life-like images of children can then be created by either manipulating known child sexual abuse material or by using prompts within the AI tool to generate new content of child sexual abuse within just a few clicks. This technology can create child sexual abuse imagery on a vast scale and at speed.

The table below provides a breakdown of the 2024 reports where an IWF analyst has identified and ‘tagged’ AI-generated criminal content.


Category under the law          Actioned AI child sexual abuse reports
Child sexual abuse imagery      193
Prohibited imagery              52
Total                           245
  • In total, 245 reports processed in 2024 contained actionable AI-generated images of child sexual abuse. This is a 380% increase on 2023, when just 51 reports contained actionable AI-generated images of child sexual abuse.
  • 193 of these reports contained images that looked ‘real’, so were processed as such. In 2023, we saw a total of 42.

AI-generated child sexual abuse is a growing concern and is being monitored closely by the IWF, especially as AI image-generation tools are evolving rapidly and becoming easier to use. 

Images and video analysis


The 245 reports found to display criminal AI imagery of child sexual abuse equated to 7,644 images and a small number of videos. This illustrates how a single actioned URL can contain multiple different criminal images and videos.

AI images and videos actioned – by severity

  • Category A: Images involving penetrative sexual activity; images involving sexual activity with an animal; or sadism.
  • Category B: Images involving non-penetrative sexual activity.
  • Category C: Other indecent images not falling within categories A or B.

7,063 of these images and videos were realistic enough to be assessed and actioned in the same way as ‘real’ imagery; only a small proportion of what we actioned was recorded as prohibited imagery (581).

The most convincing AI-generated child sexual abuse imagery can be visually indistinguishable from real images and videos, even for trained IWF analysts. As AI technology advances, it is increasingly likely that AI imagery will be unknowingly assessed as being real. Our analysts currently record imagery as being AI-generated only if its source or metadata suggests this. The true number of AI images and videos we have seen could be even higher.



AI images and videos actioned – by sex

Of all the AI imagery seen this year, 98% (6,945 images and videos) showed the sexual abuse of girls. There are an additional 41 composite images or videos where sex is not recorded.


This type of imagery is not just available on the ‘dark web’ but increasingly appears across a range of sites on the open internet. The IWF has seen an escalation in the creation of AI-generated child sexual abuse since 2023: more Category A (depicting the most severe abuse) imagery is being produced, and AI models are used to generate images and videos of known victims ‘on demand’. In 2024, the quality of AI-generated videos improved dramatically, and all types of AI imagery assessed appeared significantly more realistic as the technology developed.

IWF snapshot study of AI-generated child sexual abuse imagery posted on a dark web forum

You can read more in our AI child sexual abuse report, released in October 2023 and updated in July 2024. It includes a snapshot study of AI-generated child sexual abuse imagery posted on a dark web forum, which is not reflected in the URL figures above.