As online child sexual abuse soars, we urge companies to bring in additional child protection measures if they intend to fully encrypt their platforms.
Images and videos of children, including toddlers and newborn babies, suffering sexual abuse and rape are proliferating across the internet. The children are real, and their suffering can last a lifetime. This abhorrent content is illegal to share.
The scale of the problem online is becoming clearer all the time. Last year (2022), the IWF investigated a total of 375,230 reports suspected to contain child sexual abuse imagery.
Of these, 255,580 reports were confirmed to contain images or videos of children suffering sexual abuse.
The IWF’s most recent data shows that not only is more child sexual abuse material circulating online than ever before, but it is also becoming increasingly extreme.
In 2022, a record-breaking 51,369 of the webpages the IWF took action to remove or block from the internet contained Category A child sexual abuse material.
This is the most severe kind of imagery. It covers the worst kinds of sexual abuse, including the rape of children, babies, and even newborns, as well as acts of bestiality or sadism.
The amount of this kind of content has doubled since 2020 when the IWF uncovered 25,050 pages containing Category A abuse.
Lawmakers in the UK and EU are now taking steps to address the situation and improve internet safety, with the UK’s Online Safety Bill and the European Commission’s proposed regulations both making it clear companies must detect and prevent the spread of child sexual abuse material online, whether a service is end-to-end encrypted or not.
Despite this, more companies have declared their intention to introduce end-to-end encryption, with no clear indication of how they will ensure their platforms do not become havens for the spread of child sexual abuse imagery.
Standard encryption is widely used across the internet for all kinds of services including banking apps, health records, and messaging services. It is used to keep private information private.
But end-to-end encryption goes even further, meaning even the service providers themselves can’t see what has been shared between two users.
It means that service providers who deploy end-to-end encryption on their platforms and messaging services are actively disabling their ability to detect child sexual abuse imagery. That is, unless they deploy additional safeguards.
People have a right to privacy, but children have a right to grow up in a world free from child sexual abuse and exploitation.
Any move to introduce end-to-end encryption to a messaging platform without bringing in additional safety features to prevent the spread of child sexual abuse material would mean this material can be shared and traded freely. Essentially, a safe space would be created for abusers.
Our vision is an internet that is free of child sexual abuse material, and a big part of how we achieve our mission is providing data on known images and videos to platforms so they can detect and block that content to prevent it being uploaded, saved, shared, and distributed. So far, this is not something any company is doing within end-to-end encrypted services.
When the IWF finds child sexual abuse imagery online, or when a child reports imagery of themselves to us, we want to be able to tell victims their content will be found and blocked wherever it is being shared online.
As things are, we would simply not be able to do this within an end-to-end encrypted service.
The UK’s Online Safety Bill says companies must employ systems and processes to detect child sexual abuse material wherever it occurs.
This means even if a provider has an end-to-end encrypted platform, they will be required to utilise technology to identify child sexual abuse material.
If companies fail to protect their users in these environments, the Bill also enables Ofcom, the regulator, to take enforcement action by issuing Use of Technology Notices requiring companies to comply. The technology will be accredited by Ofcom and will have to meet minimum standards of accuracy.
Similarly, the European Commission has proposed a regulation which will combat the spread of child sexual abuse online.
The proposed legislation has a lot in common with the UK’s plans, and will see a shift from a voluntary detection regime to a mandatory one.
Mandatory detection orders will apply regardless of where child sexual abuse material is shared, meaning companies will need to detect it even within end-to-end encrypted environments.
If companies do not comply, they will be issued an order requiring them to take action.
The IWF urges legislators to hold their nerve in retaining these provisions. We are at a pivotal point and it’s important not to take a step backward where safety is concerned.
Yes! Privacy and safety are not in any way incompatible, and there are already examples of technologies which can be deployed to make sure known child sexual abuse material cannot be shared in end-to-end encrypted environments.
While a service provider may lose the ability to “see” what is passing through their systems, they can still intervene at the point someone uploads content.
Automated tools to detect known images and videos of children suffering sexual abuse are already being used in encrypted spaces (though not yet in end-to-end encrypted spaces).
It is done in a focused, targeted way that does not compromise user privacy.
Here’s how it works:
When a file is uploaded, it is turned into a unique string of numbers, a digital fingerprint known as a “hash”. Companies can compare that hash against the hashes generated from known images and videos of child sexual abuse. This process is known as hashing.
The process is fully automatic, and services do not take any action unless the file being uploaded is something a human analyst has previously seen and found to contain child sexual abuse imagery.
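To make the mechanics concrete, here is a minimal sketch in Python of what such hash-matching could look like. It is illustrative only: it uses a plain SHA-256 cryptographic hash and an empty placeholder hash list, whereas real deployments typically use perceptual hashes such as PhotoDNA (so that resized or re-compressed copies still match) and a hash list supplied by a body like the IWF.

import hashlib

# Placeholder for the list of hashes of known child sexual abuse imagery.
# In practice this would be supplied and kept up to date by a hotline
# such as the IWF; it is left empty here.
KNOWN_ABUSE_HASHES: set[str] = set()

def fingerprint(file_bytes: bytes) -> str:
    """Turn an uploaded file into its unique digital code (its 'hash')."""
    return hashlib.sha256(file_bytes).hexdigest()

def should_block(file_bytes: bytes) -> bool:
    """Block only if the file exactly matches known, human-verified imagery."""
    return fingerprint(file_bytes) in KNOWN_ABUSE_HASHES

Because the comparison is made against a fixed list of fingerprints, the service learns nothing about any file that does not match.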
This principle will also work in end-to-end encrypted environments.
When a file is uploaded to an end-to-end encrypted platform, there is no reason it cannot be checked and blocked if it matches the digital fingerprint or “hash” of known child sexual abuse imagery.
It would be automatic and non-intrusive, and it would actually make the platform safer for users.
Technologies like anti-malware and anti-phishing software already work this way, protecting users unobtrusively and without compromising their privacy.
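Purely as an illustration of where such a check could sit in an end-to-end encrypted app, the sketch below reuses should_block from the earlier example. The comparison runs on the sender’s device before encryption, so the provider still never sees message content; encrypt_for_recipient and transmit are hypothetical stand-ins for a real app’s encryption and networking layers, not any actual API.

def encrypt_for_recipient(file_bytes: bytes, recipient_key: bytes) -> bytes:
    """Hypothetical stand-in for the app's real end-to-end encryption step."""
    raise NotImplementedError

def transmit(ciphertext: bytes) -> None:
    """Hypothetical stand-in for the app's real network send."""
    raise NotImplementedError

def send_attachment(file_bytes: bytes, recipient_key: bytes) -> None:
    # The hash check happens before encryption, on the user's own device,
    # so the end-to-end encryption itself is left untouched.
    if should_block(file_bytes):
        raise PermissionError("File matches known child sexual abuse imagery")
    transmit(encrypt_for_recipient(file_bytes, recipient_key))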
The IWF would support industry in adopting technology which can continue to detect and block known child sexual abuse imagery and keep children safe without compromising users’ privacy.
The IWF urges concerned companies to work with organisations like the IWF to find a solution. There are practical and effective measures that can be deployed now to keep children safe and protect individuals’ privacy.
No! The software already available comes from bona fide safety tech companies. However, tech companies that use end-to-end encryption also have the financial and engineering resources to develop their own technology to support this work. It’s a policy choice as to whether they decide to do this.
No one is suggesting that privacy or encryption should be compromised, only that services offered to UK citizens should not allow sexual abusers to trade and share child sexual abuse images and videos.
The National Crime Agency estimates that the UK has up to 850,000 people who pose a sexual threat to children. The UK has, sadly, a ready-made ‘marketplace’ of people who want to watch images and videos of children being raped and sexually abused.
To play a positive role in UK society, messaging platforms could develop and deploy software to detect and prevent this criminal imagery from entering their services. It’s the right thing to do, and the IWF can help them do it.
No! Technologies already exist which compromise neither privacy nor end-to-end encryption. They merely detect known child sexual abuse imagery and could prevent it from being shared and traded within the messaging app.
Companies have the engineering and financial means to build their own software capable of doing this, as well as the option of using already-proven safety tech. It’s a policy choice as to whether they do that or not.
The IWF is the largest hotline in Europe dedicated to finding and removing child sexual abuse material from the internet.
Contact: Josh Thomas, Press Manager [email protected] +44 (0) 7377 727058
We make the internet a safer place. We help victims of child sexual abuse worldwide by identifying and removing online images and videos of their abuse. We search for child sexual abuse images and videos and offer a place for the public to report them anonymously. We then have them removed. We’re a not-for-profit organisation supported by the global internet industry.
The IWF is part of the UK Safer Internet Centre, working with Childnet International and the South West Grid for Learning to promote the safe and responsible use of technology.
The IWF works globally to stop child sexual abuse imagery on the internet. If you ever stumble across a sexual image or video of someone you think is under 18, please report it to the IWF. Reporting can be done anonymously and confidentially – we don’t need your details, just your help.