AI-generated videos of child sexual abuse a ‘stark vision of the future’

Published: Mon 22 Jul 2024

  • Real victims’ imagery used in highly realistic ‘deepfake’ AI-generated films
  • First fully synthetic child sexual abuse videos identified
  • Offenders share AI models for more than 100 child sexual abuse victims
  • Instances of more extreme AI-generated images on the rise

AI-generated imagery of child sexual abuse has progressed at such a “frightening” rate that the Internet Watch Foundation (IWF) is now seeing the first convincing examples of AI videos depicting the sexual abuse of children.

These incredibly realistic ‘deepfake’, or partially synthetic, videos of child rape and torture are made by offenders using AI tools that add the face or likeness of another person to a real video.

The IWF, the UK’s front line against online child sexual abuse, was among the first to raise the alarm last year over how realistic AI-generated images of child sexual abuse have become, and over the threat that misuse of the technology poses both to existing victims of child sexual abuse and to potential new victims.

Now, a new report update from the IWF shows how the pace of AI development has not slowed as offenders are using better, faster and more accessible tools to generate new criminal images and videos, some of which are being found on the clear web.

Disturbingly, the ability to make any scenario a visual reality is welcomed by offenders, who crow in one dark web forum about potentially being able to “…create any child porn[1] we desire… in high definition”.

In a snapshot study between March and April this year, the IWF identified nine deepfake videos on just one dark web forum dedicated to child sexual abuse material (CSAM) – none had been previously found when IWF analysts investigated the forum in October.

Some of the deepfake videos feature adult pornography which is altered to show a child’s face. Others are existing videos of child sexual abuse which have had another child’s face superimposed.

Because the original videos of sexual abuse are of real children, IWF analysts say the deepfakes are especially convincing.

Free, open-source AI software appears to be behind many of the deepfake videos seen by the IWF. The methods shared by offenders on the dark web are similar to those used to generate deepfake adult pornography.

The report also underscores how fast the technology is improving in its ability to generate fully synthetic AI videos of CSAM. One “shocking” 18-second fully AI-generated video, found by IWF analysts on the clear web, shows an adult male raping a girl who appears about 10 years old. The video flickers and glitches but IWF analysts describe the activity as clear and continuous.

While these types of videos are not yet sophisticated enough to pass for real videos of child sexual abuse, analysts say this is the ‘worst’ that fully synthetic video will ever be. Advances in AI will soon render more lifelike videos in the same way that still images have become photo-realistic.

Since April last year, the IWF has seen a steady increase in the number of reports of generative AI content. Analysts assessed 375 reports over a 12-month period, 70 of which were found to contain criminal AI-generated images of the sexual abuse of children[2]. These reports came almost exclusively from the clear web.

Many of the images were being sold by offenders on the clear web in place of ‘real’ CSAM. These included dedicated commercial sites and forums containing links to subscription-based file-hosting services.

Susie Hargreaves OBE, IWF CEO

Internet Watch Foundation CEO Susie Hargreaves OBE said: “Generative AI technology is evolving at such a pace that the ability for offenders to now produce, at will, graphic videos showing the sexual abuse of children is quite frightening.

“The fact that some of these videos are manipulating the imagery of known victims is even more horrifying. Survivors of some of the worst kinds of trauma now have no respite, knowing that offenders can use images of their suffering to create any abuse scenario they want.

“Without proper controls, generative AI tools provide a playground for online predators to realise their most perverse and sickening fantasies. Even now, the IWF is starting to see more of this type of material being shared and sold on commercial child sexual abuse websites on the internet.

“The right decisions made now – by government, the tech industry and civil society – to ensure that the safety of children is a priority in the design of AI tools could stave off the devastating impact that misuse of this technology will have on global child protection.” 

The snapshot study also assessed more than 12,000 new AI-generated images posted to a dark web forum over a month. Most of these (90%) were so convincing that they could be assessed under the same law as real CSAM[3]. Analysts confirmed that more than 3,500 images were criminal and depicted the sexual abuse of children, most of them girls.

Analysts further found that the severity of the AI-generated child sexual abuse had worsened since they first investigated the dark web forum. More than 955 pseudo-photographs (32%) were graded as Category A – depicting penetrative sex, bestiality or sadism – an increase of 10 percentage points[4].

IWF Internet Content Analyst Alex[5] said: “This is an indication of rapid advances in technology and expertise. Previously, some AI-generated scenarios were difficult to portray with much realism, but now we are seeing offenders succeeding in generating more extreme, ‘hardcore’, pseudo-photographs of child sexual abuse. These complex scenes usually show sexual penetration and involve more than one person.

“So-called deepfake videos, which now use AI technology to alter existing imagery, can also be very difficult to separate from ‘real’ child sexual abuse material, even to an expert analyst’s eye. Sometimes it is because we already recognise the victim that we can determine the difference.

“It doesn’t matter that the fully AI-generated videos we are now seeing are at a rudimentary stage; they are a stark vision of the future, and they are still criminal and shocking to view.”

Deborah Denis, CEO of Lucy Faithfull Foundation

Deborah Denis, CEO of Lucy Faithfull Foundation, said: “Adults viewing and sharing sexual images of children is a major problem, and one that AI is making worse. AI and its capabilities are rapidly evolving, and there is an unacceptable lack of safeguards within the technology, which allows online child sex offenders to exploit it every day. It’s vital that tech companies and politicians do more to address these dangers as a matter of urgency.

“Our research shows there are serious knowledge gaps amongst the public regarding AI – specifically its ability to cause harm to children. The reality is that people are using this new, and unregulated, technology to create some of the worst sexual images of children, as well as so-called ‘nudified’ images of real children, including children who have been abused.

“People must know that AI is not an emerging threat – it’s here, now. We need the public to be absolutely clear that making and viewing sexual images of under-18s, whether AI-generated or not, is illegal and causes very serious harm to real children across the world.”

Victoria Green, Marie Collins Foundation Chief Executive

Marie Collins Foundation Chief Executive Victoria Green said: “The impact of AI-generated child sexual abuse images on victims and survivors cannot be overstated. When images or videos of child sexual abuse are created, the permanency and lack of control over who sees them create significant and long-term impacts for those with lived experience. They are revictimised every time these images are viewed, and this is no different with AI images.

“To know that offenders can now use easily available AI technology to create and distribute further content of their abuse is not only sickening for victims and survivors but causes immense anxiety.

“We urgently need big tech and government to take a joint approach to regulate the use of AI tools. Victims and survivors have a right not to live in fear of revictimisation by technology which should be safe by design.”

Offenders on the dark web forum investigated by the IWF openly discussed and shared advice on how to use generative AI technology to develop child sexual abuse imagery.

Step-by-step directions are given for offenders to make their own ‘child porn’[1], and requests are made for fine-tuned CSAM models of particular, named victims or celebrities. IWF analysts have recognised one victim, ‘Olivia’, whose story was told in our 2018 annual report.

One offender shared links to fine-tuned models for 128 different named victims of child sexual abuse.

However, tutorials for generating AI CSAM, as well as the making of fine-tuned AI CSAM models, remain legal at present. The UK prohibition on paedophile manuals does not extend to pseudo-photographs of children, and a gap in UK law means that offenders cannot be prosecuted for making AI models fine-tuned on CSAM.

Notes

[1] The IWF uses the term child sexual abuse to reflect the gravity of the images and videos we deal with. ‘Child pornography’, ‘child porn’ and ‘kiddie porn’ are not acceptable descriptions. A child cannot consent to their own abuse.

[2] Graph shows a month-by-month breakdown of reports assessed by the IWF containing AI-generated material from April 2023 to March 2024.

IWF reports containing AI-generated content, April 2023 to March 2024

[3] AI-generated child sexual abuse material in the UK falls under two different laws:

  • The Protection of Children Act 1978 (as amended by the Criminal Justice and Public Order Act 1994). This law criminalises the taking, distribution and possession of an “indecent photograph or pseudo-photograph of a child”.
  • The Coroners and Justice Act 2009. This law criminalises the possession of “a prohibited image of a child”. These are non-photographic – generally cartoons, drawings, animations or similar.

[4] Graph shows a comparative breakdown of AI-generated child sexual abuse pseudo-photographs found on the same dark web forum in October 2023 and April 2024, by severity of category.

  • Category A: Images depicting penetrative sexual activity; images involving sexual activity with an animal; or sadism.
  • Category B: Images depicting non-penetrative sexual activity.
  • Category C: Other indecent images not falling within categories A or B.

AI CSAM pseudo-photographs by severity

[5] Alex is a pseudonym used to protect the analyst’s privacy.
