Online child sex abuse material, boosted by AI, is outpacing Big Tech's regulation

Published: Mon 22 Jul 2024

Generative AI is exacerbating the problem of online child sexual abuse materials (CSAM), as watchdogs report a proliferation of deepfake content featuring real victims' imagery.

Published by the UK's Internet Watch Foundation (IWF), the report documents a significant increase in digitally altered or completely synthetic images depicting children in explicit scenarios, with one forum sharing 3,512 images and videos over a 30-day period. The majority were of young girls. Offenders were also documented sharing advice, and even trading AI models trained on images of real victims, with one another.

"Without proper controls, generative AI tools provide a playground for online predators to realize their most perverse and sickening fantasies," wrote IWF CEO Susie Hargreaves OBE. "Even now, the IWF is starting to see more of this type of material being shared and sold on commercial child sexual abuse websites on the internet."

Read the full article at Mashable.

AI tools have put child sexual abuse ‘on steroids’, Home Secretary warns

The Home Office said fake images are being used to blackmail children and force them to livestream further abuse.

2 February 2025 IWF In The News
UK makes use of AI tools to create child abuse material a crime

Britain will make it illegal to use artificial intelligence tools that create child sexual abuse images.

1 February 2025 IWF In The News
Charity finds more than 500,000 child abuse victims

An analyst who removes child sexual abuse content from the internet says she is always trying to stay "one step ahead" of the "bad guys".

8 December 2024 IWF In The News