Our series of short podcasts features exclusive discussions with leading experts and academics covering a wide variety of topics including tech, encryption, policy and how these impact the criminal circulation of child sexual imagery online.
Join the conversation on social media with the hashtag #InConversationIWF
In Conversation With Tegan Insoll, Head of Research at Suojellaan Lapsia, and Dan Sexton, Chief Technology Officer at the IWF.
New research shows online offenders are choosing end-to-end encrypted messaging apps to contact children and to spread child sexual abuse material, amid renewed calls for Meta to rethink its planned roll-out of end-to-end encryption on Facebook Messenger.
This new episode from the Internet Watch Foundation (IWF) focuses on new research from Finnish child protection agency Suojellaan Lapsia which shows how offenders operate and the methods they use. The full report will be published on February 20.
In Conversation With IWF Senior Analyst ‘Rosa’ and Senior Campaigns & Communications Officer Angela Munoz Aroca
The increase in self-generated child sexual abuse content is alarming. In 2022, more than three quarters (78%) of the webpages IWF identified as containing child sexual abuse material were tech-enabled, i.e. created via smartphones or webcams without the offender being physically present in the room with the child.
As we release the Talk Trust Empower report, this episode delves into how children – many of them of primary school age – are groomed and extorted into producing self-generated imagery, how the IWF is working to raise awareness of the phenomenon, and what parents and carers can do to help children navigate dangers online.
In Conversation With Thorn’s Head of Data Science Rebecca Portnoff and IWF Chief Technology Officer Dan Sexton
Incredibly realistic child sexual abuse imagery can be generated easily, swiftly and in vast amounts by offenders using Artificial Intelligence (AI) technology. Recently, Internet Watch Foundation (IWF) Analysts found more than 10,000 AI-generated images shared on just one dark web child abuse forum. Almost 2,600 of these images were depictions of child sexual abuse indistinguishable from real abuse images.
This episode explores what needs to be done to try to control this explosion in harmful imagery online, and how other AI or machine-learning tools could be used to counter the phenomenon. From responsible tech development, to deployment, to the content the AI is being trained on – all this and more is discussed by Thorn’s Rebecca Portnoff, who has dedicated her career to building technology and driving initiatives to defend children from sexual abuse, and Dan Sexton, who leads the development of world-leading technology to support the work of the IWF Hotline.
In Conversation With former IWF Head of Policy and Public Affairs Mike Tunks, and Natalia Greene, Principal Consultant in PA Consulting's Vulnerabilities account
As the Online Safety Bill becomes the Online Safety Act, the Internet Watch Foundation looks at what is next.
In this podcast, children’s online safety expert Natalia Greene and former IWF Head of Policy and Public Affairs Mike Tunks explain this landmark piece of legislation and the effect it may have on all our lives.
In Conversation With IWF Chief Technology Officer Dan Sexton and former IWF Head of Policy and Public Affairs Mike Tunks
This episode looks at how end-to-end encryption technology works and how its introduction to messaging apps could hinder detection of child sexual abuse imagery.
Speaking on the podcast, our Chief Technology Officer Dan Sexton said: “It is very concerning for the IWF. Our mission, our vision, is a safer internet for all, and that the internet is free of child sexual abuse material. When a child reports content of themselves to us, we want to be able to say to those children, to those victims, that their content will be found, and it will be blocked across the internet. Right now, we can’t do that with end-to-end encrypted services. That is very concerning for us, and very concerning for those children.”
In Conversation With IWF Hotline Manager Tamsin McNally
Our analysts in the Hotline have discovered a disturbing new trend, what they’ve called iCAP sites or “invite child abuse pyramid” sites. These sites encourage users to share links to criminal child sexual abuse material, spamming social media platforms with them and increasing the risk of accidental exposure to this content by the public.
Our Hotline Manager Tamsin McNally shares more details about this trend and warns people to not click on links coming from unknown sources.
Tamsin McNally, Hotline Manager at the IWF, said: “We see the worst of the worst when it comes to child sexual abuse images online. We see images, videos, websites dedicated to hosting and selling this kind of material, and websites that attempt to disguise this material. However, this new trend is something very different. We call them iCAP sites, and this is not something we’ve seen before.”
In Conversation with IWF Internet Content Analysts
Protecting children is at the heart of everything we do. Our team of expert analysts do one of the most difficult, yet crucial, jobs in the world - searching for and seeking the removal of online child sexual abuse imagery.
It’s a tough job. Our Analysts are amongst the best in the world. The children in the pictures are real. Their abuse and suffering are very real. Our experts never forget that.
In our latest episode, we talk to our Analysts Emilia, Mabel and Peter about their everyday work and the impact it has on the lives of many children worldwide.
Wondering if you could do this job too? If you have a good heart and an analytical mind we want to hear from you. Find out more about job roles and how to apply at iwf.org.uk/careers
In Conversation with IWF Senior Analyst, Rosa
Data released by the IWF shows that almost 20,000 webpages identified by our team in the first half of 2022 included 'self-generated' child sexual abuse imagery of 7-to-10-year-old children - a 360% increase on the first half of 2020 when the UK entered its first Covid lockdown.
The rapid growth of this material, showing primary-aged children, is a social and digital emergency which needs a focused and sustained effort from the Government, the tech industry, law enforcement, education and non-profit organisations to combat it.
In this episode, we talk to Rosa, one of our world-class analysts, about the actual images and videos the team see every day and what is happening to children in our homes.
Parents and carers are encouraged to T.A.L.K to their children about online dangers. Visit talk.iwf.org.uk for a parent-and-carer-friendly guide to preventing this type of abuse.
Our new podcast series offers quick dives into topical issues related to the global fight against online child sexual abuse images and videos and issues affecting child safety online.
In conversation with Professor Hany Farid and former IWF Head of Policy & Public Affairs Mike Tunks
“The introduction of end-to-end encryption technologies has led to a debate around the apparent dichotomy of good child safety and good general user privacy and security,” reads a new report by Dr Ian Levy and Crispin Robinson, respectively the technical heads of the UK’s National Cyber Security Centre and GCHQ. The report made headlines last week after suggesting tech companies should move ahead with technology that scans for child abuse imagery directly on users’ own devices.
Speaking exclusively to the IWF as part of our new podcast series, Professor Hany Farid, image analysis expert at the University of California, Berkeley, said the report made it clear that privacy does not have to come at the expense of child protection.