We don’t buy-in data. We’re the source.

Published:  Fri 23 Jun 2017

As IWF’s Commercial Relationship Manager, I speak to internet service providers, domain registries, hosting companies, and, of course, our Members every day. I recently spoke to a filtering company that asked me where we (the IWF) buy our data from to compile our services. When I explained that we don’t buy in any data, I was left with the distinct impression that they didn’t believe me.

I understand that the nature of our work is unique, and it may be hard to believe that our analysts manually assess and remove thousands of images of child sexual abuse each week. But for the people who do this every day, it’s about creating a solution.

As one of our analysts, Isobel, says in our 2016 annual report: “I know that going to work means that I'm going to remove images of child sexual abuse, I'm going to stop people stumbling across these images, I'm going to disrupt paedophiles from posting or sharing these images, and I'm going to stop these young victims being re-victimised over and over again.”

53% of the images removed last year were of children aged 10 or younger. 2% even showed children as young as 2 being sexually abused.

Our in-house expert analysts assess every single report that comes into our Hotline. If an image is found to be criminal, they make sure it is removed from the internet as quickly as possible.

Each webpage we find showing child sexual abuse imagery is added to the IWF URL List - an encrypted list of individual webpages that contain child sexual abuse material. This allows companies to filter and block these pages while we work internationally behind the scenes to get the imagery removed.

Each image we find to be criminal is added to the IWF Image Hash List. The Hash List is a list of ‘digital fingerprints’ – unique codes – that directly relate to individual child sexual abuse images. When deployed by a company, our ‘hashes’ stop anyone from uploading, downloading, viewing or hosting such images on our Members’ platforms. We currently have more than 247,000 hashed images. This means that our analysts have viewed, assessed and hashed almost a quarter of a million images showing children being sexually abused, tortured, and raped. Again, all done at source.

We’re a not-for-profit body, set up by the online industry to help you keep your networks safe. We’re not only the source: we’re independent from police and government, and we’re the trusted body of the internet industry.
