White House roundtable is 'important moment' in recognising threat of AI child sexual abuse imagery

Published: Mon 13 Nov 2023

AI-generated child sexual abuse is on the agenda at the White House as Internet Watch Foundation CEO Susie Hargreaves flies to Washington to discuss how to address the rising threat.

Today (November 13), Ms Hargreaves attended the White House Roundtable on Preventing AI-Generated Image-Based Sexual Abuse.

The White House convened the event as a follow-up to the U.K.’s Global AI Safety Summit and the release of the Biden-Harris Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI.

The roundtable brought together experts from the U.S. and U.K., global civil society advocates, survivors, and researchers to discuss policy and technology-based recommendations for preventing and addressing AI-generated image-based sexual abuse.

The event was chaired jointly by Rachel Vogelstein, Deputy Director and Special Assistant to the President at the White House Gender Policy Council and Special Advisor on Gender at the White House National Security Council, and Michelle Donelan, the UK’s Secretary of State for Science, Innovation and Technology.

Ms Hargreaves said: “AI-generated child sexual abuse imagery is a very real threat we are facing right now. Putting this incredibly powerful technology in the hands of sexual predators and people wanting to create harmful material has terrifying potential to flood the internet with a tsunami of abuse imagery.

“This will normalise the sexual abuse of children, and undermine our efforts to make the internet a safer place, and to identify and protect real victims.

“I am pleased this threat is being taken seriously – and being invited to talk about the dangers at the White House is an important moment. We need to see world governments working in cooperation to get a grip on this threat now, before it really is too late.”

IWF CEO Susie Hargreaves, third from right, joined a select group of experts for a roundtable at the White House, including (l-r) Dr Elissa Redmiles, Georgetown University; David Wright, South West Grid for Learning; Rachel Vogelstein, Special Assistant to the President and Deputy Director, White House Gender Policy Council; Michelle Donelan MP, Secretary of State for the UK’s DSIT; NCMEC CEO Michelle DeLaune; & Dr Rebecca Portnoff, Head of Data Science at Thorn.

Last month, the IWF published a major study into the abuse of AI image generators, which criminals are using to produce life-like child sexual abuse imagery.

The study focused on a single dark web forum dedicated to child sexual abuse imagery.

In a single month (September 1 – September 30, 2023):

  • The IWF investigated 11,108 AI images which had been shared on a dark web child abuse forum.
  • Of these, 2,978 were confirmed as images which breach UK law – meaning they depicted child sexual abuse.
  • Of these images, 2,562 were so realistic, the law would need to treat them the same as if they had been real abuse images.
  • More than one in five of these images (564) were classified as Category A, the most serious kind of imagery which can depict rape, sexual torture, and bestiality.
  • More than half (1,372) of these images depicted primary school-aged children (seven to 10 years old).
  • As well as this, 143 images depicted children aged three to six, while two images depicted babies (under two years old).

The UK has been quick to spot the dangers of AI-generated child sexual abuse imagery. In October, the IWF and the Home Office held an event in the lead up to the UK government’s AI Safety Summit.

The event saw 27 organisations, including the IWF, TikTok, Snapchat, Stability AI, and the governments of the US and Australia, sign a pledge to tackle the threat of AI-generated child abuse imagery.

Signatories to the joint statement pledged to sustain “technical innovation around tackling child sexual abuse in the age of AI”.

The statement affirms that AI must be developed in “a way that is for the common good of protecting children from sexual abuse across all nations”.
