Published: Thu 1 Feb 2024
By Susie Hargreaves OBE, Internet Watch Foundation CEO
What we witnessed in the US Congress on Wednesday was nothing short of extraordinary. Bipartisan agreement broke out among senators as, one by one, they took their eight minutes to eviscerate and berate the social media executives they had hauled before them, some of whom attended reluctantly.
The room was packed with family members who had lost loved ones, in part because of the role social media had played in their children’s lives. Boardroom decisions that put profit before all else can have real-life, devastating consequences for the young people who use these platforms. It is time for lawmakers to provide the right kind of motivation for the changes that are so desperately needed.
What came across most?
Anger, frustration, and an overwhelming sense that social media companies have failed to protect children – the controls they have in place simply aren’t good enough.
We also learned political pressure works.
In the week leading up to the hearing, all the major platforms made significant child safety announcements.
Perhaps the most shocking moment of the hearing, though, came courtesy of a line of questioning from Senator Ted Cruz, who asked Meta CEO Mark Zuckerberg why his company had displayed warning pages telling users they might be viewing child sexual abuse content, and then given them the option of either getting help or continuing to view the content anyway.
An exasperated Senator Cruz said: “Mr Zuckerberg, what the hell were you thinking?”
Wednesday’s hearing brings into sharp focus the problems that organisations like ours, the Internet Watch Foundation (IWF), are dealing with every day.
A never-ending queue of reports; more and more child sexual abuse content, year on year; a problem growing ever more complex, partly owing to the decisions these companies are making.
At the IWF we believe that politicians must take their share of responsibility too.
The last significant federal online safety laws in the US were passed in the late 1990s and early 2000s.
At the hearing, senators skirted around key issues and failed to hold the executives before them to account. One executive could claim, “we are industry leaders in the number of reports we refer to NCMEC” (the National Center for Missing & Exploited Children), while at the same time pursuing a policy of end-to-end encryption that will see most of those reports disappear.
There was also little challenge to company executives who claim “we don’t allow children to do such-and-such on our platform” – well, how do you know they are children unless you have age verification in place?
And there was absolutely no mention of the challenges generative AI already poses for law enforcement in telling the difference between a real child who needs safeguarding and one that has been computer generated.
So, what needs to be done?
Well, three actions would be a good start:

1. Ensure end-to-end encryption is not rolled out in a way that makes child sexual abuse material undetectable and unreportable.
2. Require effective age verification, so platforms genuinely know when their users are children.
3. Confront the challenge generative AI poses for law enforcement in separating real children who need safeguarding from computer-generated imagery.
As always, the takeaway is that the law is failing to keep pace with technology.
But Wednesday’s hearing was encouraging: senators across party lines have lost patience with the situation and are urging action. Politics can drive change – now is the time to make those changes happen.