Technology companies can use our growing list of IntelliGrade hashes to stop the upload, sharing and storage of known child sexual abuse imagery on their platforms.
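In practice, this kind of hash matching works by comparing a fingerprint of each uploaded file against a list of known hashes and blocking the file before it can be stored or shared. The short Python sketch below is a hypothetical illustration of that idea using a plain cryptographic digest and an in-memory hash set; it does not reflect IntelliGrade's actual hash formats, data feeds or interfaces.

```python
import hashlib

# Hypothetical illustration only: a simplified sketch of checking uploads
# against a set of known-image hashes. The hash list name, entries and
# functions here are invented for the example.

# Assume the platform has loaded its known hashes (hex strings) from a
# hash-list provider into a fast in-memory lookup set.
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder entry
}

def is_known_image(file_bytes: bytes) -> bool:
    """Return True if the upload's digest matches a known hash."""
    digest = hashlib.md5(file_bytes).hexdigest()
    return digest in KNOWN_HASHES

def handle_upload(file_bytes: bytes) -> str:
    # Block the upload before it is stored or shared if it matches.
    if is_known_image(file_bytes):
        return "blocked"
    return "accepted"
```

A real deployment would use the hash types supplied with the list (including perceptual hashes that tolerate small image changes) rather than a single cryptographic digest, and would check content at upload, sharing and storage points alike.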
What you need to know about IntelliGrade, our powerful new tool helping companies and law enforcement bodies to fight back against online child sexual abuse images and videos.
Pioneering technology from the Internet Watch Foundation to help the online community rid the internet of child sexual abuse imagery.
The National Crime Agency estimates that between 550,000 and 850,000 people in the UK pose varying forms of sexual risk to children.
IWF and the NSPCC's Report Remove service supports a young person in reporting sexual images shared online and enables them to get the image removed if it is illegal.
We’re working in partnership with the End Violence Fund and the Lucy Faithfull Foundation to develop an innovative new chatbot to intervene and stop people looking at child sexual abuse imagery online before they’ve committed a crime.
A new report from the IWF shows how the pace of AI development has not slowed as offenders are using better, faster and more accessible tools to generate new criminal images and videos.
Dan Sexton joined IWF in February 2021. He is responsible for Information Technology, Cybersecurity and Software Development.
Tech Secretary sees ‘heartbreaking’ scale of online child sexual abuse on IWF hotline visit as ‘transformational’ online safety rules come into effect
A chilling excerpt from a new IWF report that delves into what analysts at the child protection charity currently see regarding synthetic or AI-generated imagery of child sexual abuse.
Huw Edwards’ offences highlight how WhatsApp can be abused by predators sharing criminal imagery of children, IWF warns. Dan Sexton, Chief Technology Officer at the IWF, appeared on national BBC Breakfast television this week (September 17) to warn that Meta is not taking adequate steps to proactively prevent the sharing of child sexual abuse material on the platform.