News
Tighter rules come as the IWF warns that reports of AI-generated child sexual abuse imagery have quadrupled in a year.
This partnership will bolster Hive’s capability to help its customers detect and mitigate CSAM on their platforms through a single, integrated API.
Fears ‘blatant get-out clause’ in safety rules may undermine efforts to crack down on criminal imagery.
Even the smallest platforms can help prevent child abuse imagery online.
Messaging app Telegram will deploy new tools to prevent the spread of images of child sexual abuse after teaming up with the Internet Watch Foundation.
After years of ignoring pleas to sign up to child protection schemes, the controversial messaging app Telegram has agreed to work with an internationally recognised body to stop the spread of child sexual abuse material (CSAM).
The images that Nelson made have been linked back to real children around the world. In some cases, he went on to encourage his clients to rape and sexually assault the children depicted.
Internet Watch Foundation says illegal AI-made content is becoming more prevalent on the open web, with a high level of sophistication.
For press enquiries about our work, or to arrange an interview, please call +44 (0)1223 20 30 30 or 07377 727058 or 07929 553679 or email [email protected].