The IWF, the UK’s front line against online child sexual abuse, was among the first to raise the alarm last year over how realistic AI-generated images of child sexual abuse have become, and over the threat that misuse of the technology poses both to existing victims and to potential new victims.
Now, a new report update from the IWF shows that the pace of AI development has not slowed, with offenders using better, faster and more accessible tools to generate new criminal images and videos, some of which are being found on the clear web.
Disturbingly, the ability to make any scenario a visual reality is welcomed by offenders, who crow in one dark web forum about potentially being able to “…create any child porn we desire… in high definition”.
In a snapshot study between March and April this year, the IWF identified nine deepfake videos on just one dark web forum dedicated to child sexual abuse material (CSAM) – none had been found when IWF analysts previously investigated the forum in October.
Some of the deepfake videos feature adult pornography which is altered to show a child’s face. Others are existing videos of child sexual abuse which have had another child’s face superimposed.
Because the original videos of sexual abuse are of real children, IWF analysts say the deepfakes are especially convincing.
Free, open-source AI software appears to be behind many of the deepfake videos seen by the IWF. The methods shared by offenders on the dark web are similar to those used to generate deepfake adult pornography.
The report also underscores how fast the technology is improving in its ability to generate fully synthetic AI videos of CSAM. One “shocking” 18-second fully AI-generated video, found by IWF analysts on the clear web, shows an adult male raping a girl who appears about 10 years old. The video flickers and glitches but IWF analysts describe the activity as clear and continuous.
While these types of videos are not yet sophisticated enough to pass for real videos of child sexual abuse, analysts say this is the ‘worst’ that fully synthetic video will ever be. Advances in AI will soon render more lifelike videos in the same way that still images have become photo-realistic.
Since April last year, the IWF has seen a steady increase in the number of reports of generative AI content. Analysts assessed 375 reports over a 12-month period, 70 of which were found to contain criminal AI-generated images of the sexual abuse of children. These reports came almost exclusively from the clear web.
Many of the images were being sold by offenders on the clear web in place of ‘real’ CSAM, including on dedicated commercial sites and forums with links to subscription-based file-hosting services.