Telegram plans child abuse crackdown following Pavel Durov’s arrest in Paris
Messaging app Telegram will deploy new tools to prevent the spread of images of child sexual abuse after teaming up with the Internet Watch Foundation.
Published: Fri 18 Oct 2024
Child sexual abuse imagery generated by artificial intelligence tools is becoming more prevalent on the open web and reaching a “tipping point”, according to a safety watchdog.
The Internet Watch Foundation said the amount of AI-made illegal content it had seen online over the past six months had already exceeded the total for the previous year.
The organisation, which runs a UK hotline but also has a global remit, said almost all the content was found on publicly available areas of the internet rather than on the dark web, which can only be accessed with specialised browsers.
The IWF’s interim chief executive, Derek Ray-Hill, said the level of sophistication in the images indicated that the AI tools used had been trained on images and videos of real victims. “Recent months show that this problem is not going away and is in fact getting worse,” he said.
Read the full article at The Guardian.
After years of ignoring pleas to sign up to child protection schemes, the controversial messaging app Telegram has agreed to work with an internationally recognised body to stop the spread of child sexual abuse material (CSAM).
One recent case involved Hugh Nelson, who was convicted of using AI tools to create abuse imagery from photographs of real children. The images Nelson made have been linked back to real children around the world, and in some cases he went on to encourage his clients to rape and sexually assault the youngsters.