Prime Minister must act on threat of AI as IWF ‘sounds alarm’ on first confirmed AI-generated images of child sexual abuse

Published:  Tue 18 Jul 2023

‘My worry is if this material becomes more widely and easily available, and can be produced at will – at the click of a button – we are heading for a very dangerous place in the future.’

  • In just five weeks*, the IWF investigated 29 reports of URLs containing suspected AI-generated child sexual abuse imagery. Of these, the IWF confirmed seven URLs containing AI-generated child sexual abuse imagery.
  • The pages removed by the IWF included Category A and Category B material depicting children as young as three to six years old. Both female and male children were depicted.
  • IWF analysts have also discovered an online “manual” dedicated to helping offenders refine their prompts and train AI to return more and more realistic results.
  • Calls for AI companies and politicians to do more to prevent the abuse of AI tools, and to protect users from the spread of AI-generated child sexual abuse imagery.

The Prime Minister must prioritise the threat of AI-generated child sexual abuse imagery, as “astoundingly realistic” AI-generated imagery of children as young as three is now being discovered online.

The Internet Watch Foundation (IWF) has confirmed it has begun to see AI-generated imagery of child sexual abuse being shared online, with some examples being so realistic they would be indistinguishable from real imagery to most people.

The IWF is the UK body responsible for finding and removing child sexual abuse material from the internet.

In a five week period*, the IWF investigated 29 reports of URLs containing suspected AI-generated child sexual abuse imagery. These included reports from members of the public.

Of these, the IWF was able to confirm seven URLs contained AI-generated child sexual abuse imagery. These URLs can contain multiple images, which can be exclusively AI-generated or a mixture of AI-generated and real images.

Susie Hargreaves OBE, Chief Executive of the IWF, said: “AI is getting more sophisticated all the time. We are sounding the alarm and saying the Prime Minister needs to treat the serious threat it poses as the top priority when he hosts the first global AI summit later this year.

“We are not currently seeing these images in huge numbers, but it is clear to us the potential exists for criminals to produce unprecedented quantities of life-like child sexual abuse imagery. This would be potentially devastating for internet safety and for the safety of children online. We have a chance, now, to get ahead of this emerging technology, but legislation needs to be taking this into account, and must be fit for purpose in the light of this new threat.

“Offenders are now using AI image generators to produce sometimes astoundingly realistic images of children suffering sexual abuse.

“For members of the public – some of this material would be utterly indistinguishable from a real image of a child being sexually abused. Having more of this material online makes the internet a more dangerous place.”

Susie Hargreaves OBE, IWF CEO

The IWF undertook a five week study to better understand what threats may be posed by emerging AI technology.

  • In May, the Internet Watch Foundation (IWF) began recording instances of AI-generated child sexual abuse material reported to its hotline for the first time.
  • Between May 24 and June 30, the IWF investigated 29 reports of URLs containing suspected AI-generated child sexual abuse imagery. These included reports from members of the public.
  • Of these, the IWF was able to confirm seven URLs contained AI-generated child sexual abuse imagery. These URLs can contain multiple images, which can be exclusively AI-generated or a mixture of AI-generated and real images.
  • The URLs actioned by the IWF included sites containing Category A and Category B material depicting children as young as three to six years old. Both female and male children were depicted.

AI-generated images of child sexual abuse are illegal in the UK. Far from being a victimless crime, this imagery can normalise and ingrain the sexual abuse of children. It can also make it harder to spot when real children may be in danger.

Dan Sexton, Chief Technical Officer at the IWF, said: "The IWF’s primary mission is to protect children. If a child can be identified and safeguarded, that is always the number one priority for analysts.

“Our worry is that, if AI imagery of child sexual abuse becomes indistinguishable from real imagery, there is a danger that IWF analysts could waste precious time attempting to identify and help law enforcement protect children that do not exist.

“This would mean real victims could fall between the cracks, and opportunities to prevent real life abuse could be missed."

Dan Sexton, IWF CTO

Chris Farrimond, Director of Threat Leadership at the National Crime Agency, said: “AI systems, as they become more widely used, will potentially make it easier for abusers to commit a range of child sexual abuse offences.

“As the IWF study has identified, we are currently seeing AI-generated content feature in a handful of cases, but the risk from this is increasing and we are taking it extremely seriously. 

“The creation or possession of pseudo-images – images created using AI or other technology – is an offence in the UK. As with other such child sexual abuse material viewed and shared online, pseudo-images also play a role in the normalisation and escalation of abuse among offenders.

“Tackling child sexual abuse in all its forms is a priority for the NCA and alongside our policing partners, we are currently arresting around 800 people and safeguarding around 1,200 children every month. We will investigate individuals who create, share, possess, access or view a pseudo-image in the same way as if the image is of a real child.

“There is a very real possibility that if the volume of AI-generated material increases, this could greatly impact on law enforcement resources, increasing the time it takes for us to identify real children in need of protection.

“Alongside partners, we are working with industry on two key outcomes: mitigating future offending by developing AI models which cannot be used to generate CSA material, and ensuring our ability to identify illegal activity keeps up with technology.”

As well as finding and removing instances of AI-generated child sexual abuse material, IWF analysts have discovered an online “manual” written by offenders with the aim of helping other criminals train the AI and refine their prompts to return ever more realistic results.

Mr Sexton said IWF analysts have seen evidence online that offenders are circumventing safety measures put in place on AI image generators to have them produce increasingly realistic sexual imagery of children.

Ms Hargreaves said the threat posed by these images is real, and called on AI companies to help stop the abuse of their platforms.

She said: “We see online communities of offenders discussing ways they can get around the safety controls set up to prevent the abuse of these tools.

“We know criminals can and do exploit any new technology, and we are at a crossroads with AI. The continued abuse of this technology could have profoundly dark consequences – and could see more and more people exposed to this harmful content.

“Depictions of child sexual abuse, even artificial ones, normalise sexual violence against children. We know there is a link between viewing child sexual abuse imagery and going on to commit contact offences against children.

“My worry is if this material becomes more widely and easily available, and can be produced at will – at the click of a button – we are heading for a very dangerous place in the future.”

Information on how companies can work with the IWF to help safeguard against the spread of child sexual abuse imagery can be found here.
