Analysis by age

Reports analysis

The following analysis is based on what is viewed on a URL or in a report received through a child reporting service. A URL will contain at least one image or video showing child sexual abuse, but can contain tens, hundreds or even thousands. A report made via child reporting services, such as Report Remove or Meri Trustline, can also contain more than one image or video.

Age comparison – past three years

IWF analysts record the age of the youngest child seen in any image on a URL or in imagery reported via child reporting services. This chart provides a three-year view of the number and proportion of the different age groups recorded by IWF analysts.

When 0-to-2-year-olds are seen in criminal imagery, our analysts find that the greatest proportion depicts Category A sexual activity.

In 2024 we observed much the same as in 2023, with children aged 11-13 and 7-10 remaining the most frequently seen age groups. Reports showing an 11-to-13-year-old as the youngest child are the most commonly recorded by our analysts.

We have seen a change in the older age groups in 2024. Reports where 14-to-15-year-olds have been recorded as the youngest child seen have risen 35%, with 5,457 recorded in 2024 compared to 4,056 in 2023.

Of the 5,457 reports containing a 14-to-15-year-old, 34% can be attributed to three image hosts which repeatedly changed where they were physically hosted when we reported the criminal content on their services. Some moved so regularly that we took action on the same URLs up to 13 times, resulting in 1,855 URLs being actioned in relation to those three sites alone. This demonstrates how easily criminal imagery can reappear, and why constant disruption of the distribution models of child sexual abuse material is vital.

The number of reports where 16-to-17-year-olds were the youngest child seen has risen 67%, from 1,202 reports in 2023 to 2,010 in 2024. Due to the difficulty in verifying age within this group, we are unable to confirm whether we are seeing more images and videos of child sexual abuse, or whether intelligence gained from our relationship with law enforcement, which provides age verification, has simply enabled us to take action on more.

Some people approach the IWF directly to report the distribution or ‘leaking’ of their own intimate images. Often, the reporter is now an adult but is aware of images, or ‘sets’ of images and videos, online – usually ‘self-generated’ – that depict them as a child.

Our ability to proactively search the internet for child sexual abuse material means we can locate even more images of age-verified older children that have spread online. Our analysts can find these images on websites that typically host adult pornography, and on websites that promote ‘leaked’ material and offer collections of it for sale.

Higher numbers of actioned reports depicting 14-to-15-year-olds and 16-to-17-year-olds are probably a result of our effective relationship with law enforcement and use of the Government’s Child Abuse Image Database (CAID). This allows us to verify whether a suspected minor is a child.

These partnerships have helped us to overcome the challenges of visually determining the ages of some older teenagers. These collaborations have enabled us to verify, assess and remove a greater number of images that depict children aged 16-17. In many cases, these are ‘self-generated’ intimate images that have been ‘leaked’ without the consent of the child, who may now be an adult. Our analysts often locate these images in ‘sets’ or ‘collections’ curated by offenders and sometimes advertised for sale.

Once we can confidently assess that children with large volumes of widely distributed content are aged under 18, we can take action to remove large numbers of URLs and proactively search for more.  

Key notes: 

When our analysts assess a report, the age classification is based on the youngest child visible in the imagery. For example, a video including a 2-year-old, a 7-year-old and a 13-year-old would be assessed as ‘0-2’ to reflect the age of the youngest child. 

The same approach is applied to severity, with the most severe category of abuse visible in the imagery being recorded. In a composite image or video showing every category of abuse (from A to C), the analyst would log an assessment of Category A.

Where reports include multiple images or videos, the same rules of ‘youngest visible child’ and ‘most severe visible category’ are applied; however, the two classifications may not relate to the same image. Sex can be recorded as Boys, Girls, Both or - in rare cases - Unidentified.

Upon finding criminal imagery, our mission is to disrupt the sites circulating child sexual abuse imagery and to take steps to remove it. When we detect illegal images, we record the youngest child and the worst severity, which validates the illegality of the URL and enables us to take steps to remove this content and add it to the IWF URL List. This is a service for IWF Members that enables them to block these sites, preventing unwanted public exposure to upsetting criminal imagery.

The benefit of this approach is that it enables us to review a far greater number of URLs that, if found to be criminal, we can take action to remove as quickly as possible. It prevents analysts’ time being absorbed, at this stage, in recording the volume of imagery hosted on any one URL and the exact details of every child and the severity seen in each image.

All criminal imagery found through these reports by our analysts is then assessed via our grading tool, IntelliGrade. Our image assessors record details of all the children seen, including their ages and sex, and the severity of abuse depicted in the images. These are then added to the IWF Hash List, a service which enables Members to block the individual images and videos.