
AI tools have put child sexual abuse ‘on steroids’, Home Secretary warns
The Home Office said fake images are being used to blackmail children and force them to livestream further abuse.
Published: Fri 18 Oct 2024
Child sexual abuse imagery generated by artificial intelligence tools is becoming more prevalent on the open web and reaching a “tipping point”, according to a safety watchdog.
The Internet Watch Foundation said the amount of AI-made illegal content it had seen online over the past six months had already exceeded the total for the previous year.
The organisation, which runs a UK hotline but also has a global remit, said almost all the content was found on publicly available areas of the internet and not on the dark web, which must be accessed by specialised browsers.
The IWF’s interim chief executive, Derek Ray-Hill, said the level of sophistication in the images indicated that the AI tools used had been trained on images and videos of real victims. “Recent months show that this problem is not going away and is in fact getting worse,” he said.
Read the full article at The Guardian.