Campaigners are warning teenagers and their parents about online grooming and sexual exploitation as schools break up for the summer.
The European Parliament is taking a decisive stand against the rise of AI-generated child sexual abuse material (AI-CSAM), co-hosting a high-level briefing with the Internet Watch Foundation (IWF) to address this urgent threat. Following a 380% increase in reports of AI-CSAM in 2024, the Parliament is pushing for robust legal reforms through the proposed Child Sexual Abuse Directive. Key priorities include criminalising all forms of AI-generated CSAM, removing legal loopholes such as the “personal use” exemption, and enhancing cross-border enforcement. The IWF and the European Child Sexual Abuse Legislation Advocacy Group (ECLAG) urge the Council of the EU to align with Parliament’s strong stance to protect children and support survivors. This article highlights the scale of the threat, the evolving technology behind synthetic abuse imagery, and the critical need for updated EU legislation.
The Internet Watch Foundation is pleased to be among the winners of the Digital Communication Awards 2021.
New IWF data reveals a startling increase in ‘self-generated’ material in which children have been tricked or groomed by predators.
Internet Watch Foundation calls for partnership ahead of landmark Vatican conference.
Learn how IWF assesses and categorises imagery to create hashes that help prevent the spread of child sexual abuse content online.
Explore how IWF identifies and addresses non-photographic child sexual abuse imagery, including drawings and CGI, under UK legislation.
“Imagine your darkest moments exposed to an unknown number of people. Then imagine strangers watching your pain for sexual satisfaction. That’s what happens for some of the children whose abuse images we see online.”
Cambridgeshire mum Lillian* has one of the most unusual, and at times harrowing, jobs in the world.