The Hotline is split into two workstreams as illustrated below: reports assessments and imagery assessments. On this page, we focus on the outcomes of the imagery assessment workflow.
Images and video analysis
We use our bespoke grading and hashing software, IntelliGrade, to perform an image-level assessment of images and videos. IntelliGrade also creates a hash of each image or video. A hash is a type of digital fingerprint, created using a mathematical algorithm, that identifies a picture of confirmed child sexual abuse.
Each hash is unique to its image. Once an image has been hashed, we can share the hash with industry and law enforcement, eliminating the need for them to view the criminal imagery.
Hashes are quickly recognised; this means thousands of criminal pictures can be blocked from ever being uploaded to the internet in the first place.
The IWF began hashing child sexual abuse images in 2015.
In 2021 we launched our Taskforce, a team dedicated to assessing images and videos, which was originally funded by Thorn and subsequently by the Home Office as part of a partnership with the UK Government’s Child Abuse Image Database (CAID).
As well as providing analysis based on ‘reports’ of child sexual abuse, where a report of a single URL may contain one image or many tens, hundreds or even thousands of different images, we also record additional information on individual criminal images. For singular imagery we can record more detailed information, called metadata, such as the age and sex of each child and the type of sexual activity seen.
When our assessors review an image, they identify and record the most severe category of sexual activity and the age and sex of the youngest child subject to this activity. They will then record the age and sex of any additional children seen in the same image. Each image or video will therefore only have one category assessment regardless of any additional sexual activity seen.
For example, in an image showing Category A abuse of an 11-to-13-year-old and where a 0-to-2-year-old is also seen, Category A would be recorded for the image. The 0-to-2-year-old would be recorded as an additional child, but no additional category would be applied.
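The assessment rule described above can be sketched in code. This is a hypothetical illustration only: the category labels and data shapes are assumptions for the example, not IntelliGrade's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative ordering: Category A is the most severe.
SEVERITY_ORDER = {"A": 0, "B": 1, "C": 2}

@dataclass
class ChildSighting:
    age_band: str            # e.g. "0-2", "11-13"
    sex: str
    category: Optional[str]  # severity of the activity this child is subject
                             # to, or None for an additional child with no
                             # category of their own

def assess_image(sightings: list[ChildSighting]) -> dict:
    """Record one category per image (the most severe seen) plus the
    age and sex of every child, mirroring the rule described above."""
    categories = [s.category for s in sightings if s.category is not None]
    return {
        "category": min(categories, key=SEVERITY_ORDER.__getitem__),
        "children": [(s.age_band, s.sex) for s in sightings],
    }

# The worked example from the text: Category A abuse of an
# 11-to-13-year-old, with a 0-to-2-year-old also seen.
result = assess_image([
    ChildSighting("11-13", "female", "A"),
    ChildSighting("0-2", "female", None),
])
# result["category"] == "A"; both children are recorded, but no
# additional category is applied for the second child.
```

The key point the sketch captures is that severity is a property of the image as a whole, while age and sex are recorded per child.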
Videos and collages comprising multiple images, including those that may show several children, add further complexity to assessment; in these cases we record only severity. This contributes to greater efficiency and helps to protect the wellbeing of our image assessors without compromising the accuracy of our work.
Image analysis
Note: Some older hashes are not included in the above table of sexual activity metadata.
In 2024 we assessed 1,264,393 images and videos through our hashing tool IntelliGrade. Below we show a breakdown of the imagery that was assessed to be criminal.
This chart details imagery of child sexual abuse by severity (category). There is only one severity category recorded for each image or video.
As each hash is attributed to a unique image, any copy of an existing hashed image can be identified by the same hash value: identical images or videos share the same digital fingerprint. Once an image or video has been given a hash, further copies do not need to be assessed again. This makes hashing an exceptionally efficient way to find child sexual abuse images and videos and remove them from the internet, as well as blocking their upload.
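The matching process described above can be illustrated with a minimal sketch. IntelliGrade's actual hashing algorithms are not described here, so SHA-256 is used as a stand-in: a cryptographic hash that matches only byte-identical copies (perceptual hashes used in practice can also match re-encoded versions of the same image).

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest acting as the file's digital fingerprint."""
    return hashlib.sha256(data).hexdigest()

original = b"...image bytes..."   # stand-in for a confirmed image's content
exact_copy = bytes(original)      # a byte-identical copy

# Identical content always yields the same hash, so a copy is
# recognised without anyone needing to view the image itself.
assert fingerprint(original) == fingerprint(exact_copy)

# A list of known hashes lets a platform check an upload by lookup alone,
# blocking the image before it ever appears online.
known_hashes = {fingerprint(original)}
blocked = fingerprint(exact_copy) in known_hashes  # True: upload refused
```

The set lookup is what makes the approach scale: checking an upload against millions of known hashes costs roughly the same as checking against one.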
Multichild analysis
In IntelliGrade, images and videos are enriched with contextual metadata, such as the age of the child seen in the image and the severity of the abuse. Previously, if an image featured more than one child, only information about the youngest visible child could be recorded.
However, improvements made to IntelliGrade as a direct result of funding from Nominet’s Countering Online Harms fund have enabled us to record the age and sex of all children seen within singular images for the first time. This Multichild format became effective from 1 January 2024.
The table below shows the number of images processed since the Multichild format was introduced (excluding videos and multi-image collages).
The table shows how many images displayed one child and how many displayed two or more children in the same image. It also breaks down the total number of children seen and whether each was seen alone or with other children.
This highlights that we have been able to provide assessment data on an additional 70,898 children.
Below is a breakdown of the age and sex of all the children recorded.
Please note that severity, age and sex are recorded for all single images; videos and collages comprising multiple images have been excluded from the above chart.
The ‘Unidentified’ bar shows where it was not possible to identify whether the image depicted a boy or a girl.