Digital fingerprints of a million images of child sexual abuse have been created, the Internet Watch Foundation (IWF) has said.
A leading children's charity is calling on Prime Minister Rishi Sunak to tackle AI-generated child sexual abuse imagery when the UK hosts the first global summit on AI safety this autumn.
Sexual predators are grooming children under six into performing “disturbing” acts of sexual abuse via phones or webcams, a charity has warned.
A leading child protection organisation has warned that abuse of AI technology threatens to "overwhelm" the internet.
A new campaign warning children of the dangers of sharing sexually explicit images and videos has been launched, with an appeal for parents and young people to openly discuss these issues.
Young children are now more exposed to being groomed online due to a reliance on tech devices in lockdown, a charity has claimed.
After years of ignoring pleas to sign up to child protection schemes, the controversial messaging app Telegram has agreed to work with an internationally recognised body to stop the spread of child sexual abuse material (CSAM).
In a review of material posted on the dark web, the Internet Watch Foundation found that deepfakes featuring children were becoming more extreme.
Although AI-generated child sexual abuse images are illegal, the tools used to create them remain legal in the UK, the Internet Watch Foundation says.
The girl sent a photo to a boy in her class before the image and her phone number were added to all-male online chat groups - she later began going missing and was abused by "unknown men".
"Law enforcement cannot arrest its way out of this problem."
Sextortion is a form of blackmail in which a child is tricked into sending sexual images of themselves to abusers, who then threaten to share the pictures with friends, family or more widely on the internet if they are not paid money.