Our IntelliGrade process makes it easier for companies and law enforcement bodies to use these hashes to help make the internet a safer place.
Technology companies can use our growing list of IntelliGrade hashes to stop the upload, sharing and storage of known child sexual abuse imagery on their platforms.
What you need to know about IntelliGrade, our powerful new tool helping companies and law enforcement bodies to fight back against online child sexual abuse images and videos.
The National Crime Agency estimates that between 550,000 and 850,000 people in the UK pose varying degrees of sexual risk to children.
IWF and NSPCC's Report Remove supports young people in reporting sexual images of themselves shared online and enables them to have the imagery removed if it is illegal.
A new report from the IWF shows that the pace of AI development has not slowed, with offenders using better, faster and more accessible tools to generate new criminal images and videos.
We’re working in partnership with the End Violence Fund and the Lucy Faithfull Foundation to develop an innovative new chatbot that intervenes to stop people looking at child sexual abuse imagery online before they’ve committed a crime.
Dan Sexton joined IWF in February 2021. He is responsible for Information Technology, Cybersecurity and Software Development.
A chilling excerpt from a new IWF report that delves into what analysts at the child protection charity currently see regarding synthetic or AI-generated imagery of child sexual abuse.
Huw Edwards’ offences highlight how WhatsApp can be abused by predators sharing criminal imagery of children, IWF warns. Dan Sexton, Chief Technology Officer at the IWF, appeared on national BBC Breakfast television this week (September 17) to warn Meta is not taking adequate steps to proactively prevent the sharing of child sexual abuse material on the platform.