Telegram plans child abuse crackdown following Pavel Durov’s arrest in Paris
Messaging app Telegram will deploy new tools to prevent the spread of images of child sexual abuse after teaming up with the Internet Watch Foundation.
Published: Mon 22 Jul 2024
Generative AI is exacerbating the problem of online child sexual abuse materials (CSAM), as watchdogs report a proliferation of deepfake content featuring real victims' imagery.
Published by the UK's Internet Watch Foundation (IWF), the report documents a significant increase in digitally altered or completely synthetic images depicting children in explicit scenarios, with one forum sharing 3,512 such images and videos over a 30-day period. The majority featured young girls. Offenders were also documented sharing advice with one another, and even exchanging AI models trained on images of real victims.
"Without proper controls, generative AI tools provide a playground for online predators to realize their most perverse and sickening fantasies," wrote IWF CEO Susie Hargreaves OBE. "Even now, the IWF is starting to see more of this type of material being shared and sold on commercial child sexual abuse websites on the internet."
Read the full article at Mashable.
After years of ignoring pleas to sign up to child protection schemes, the controversial messaging app Telegram has agreed to work with an internationally recognised body to stop the spread of child sexual abuse material (CSAM).
The images that Nelson made have been linked back to real children around the world. In some cases, he then went on to encourage his clients to rape and sexually assault the youngsters.