
AI tools have put child sexual abuse ‘on steroids’, Home Secretary warns
The Home Office said fake images are being used to blackmail children and force them to livestream further abuse.
Published: Mon 22 Jul 2024
The amount of AI-generated child sexual abuse material (CSAM) posted online is increasing, a report published Monday found.
The report, by the U.K.-based Internet Watch Foundation (IWF), highlights one of the darkest results of the proliferation of AI technology, which allows anyone with a computer and a little tech savvy to generate convincing deepfake videos. The term "deepfake" typically refers to misleading digital media created with artificial intelligence tools, such as AI models and applications that allow users to "face-swap" a target's face onto a person in a different video. Online, a subculture and marketplace has grown up around the creation of pornographic deepfakes.
In a 30-day review this spring of a dark web forum used to share CSAM, the IWF found a total of 3,512 CSAM images and videos created with artificial intelligence, most of them realistic. That figure represents a 17% increase over a similar review conducted in fall 2023.
Read the full article at NBC News.