AI-Generated Child Sexual Abuse Imagery Threatens to “Overwhelm” Internet
AI-generated child sexual abuse imagery is on the agenda at the White House as Internet Watch Foundation CEO Susie Hargreaves flies to Washington to discuss how to address the rising threat.
A unique safety tech tool that uses machine learning in real time to detect child sexual abuse images and videos is to be developed by a collaboration of EU and UK experts.
Watch the recording of our 2022 Annual Report Launch.
In the lead-up to the UK Government’s AI Safety Summit, the IWF underlined the capacity for horrific AI-generated child sexual abuse images to be reproduced at scale.
Wednesday’s hearing brings into sharp focus the problems that organisations like ours, the Internet Watch Foundation, are dealing with every day.
The world’s leading independent open-source generative AI company, Stability AI, has partnered with the Internet Watch Foundation to tackle the creation of AI-generated child sexual abuse imagery online.
IWF analysts say ‘insidious’ commercial child sexual abuse sites are driving increasingly extreme content online.
New IWF data shows that three in every five reports of child sexual abuse imagery point to content hosted in an EU member state.
The IWF and NSPCC say tech platforms must do more to protect children online as confirmed sextortion cases soar.
Watch the video of IWF's Annual Report 2023.
A new IWF portal will, for the first time, give people in Tunisia a safe and anonymous place to report illegal videos and images.