
The Online Safety Act explained

For many years, the IWF has worked with a range of stakeholders, including politicians and campaigners, to advocate for strong regulations combating Child Sexual Abuse Material (CSAM) and improving safety online.

We welcomed the UK Online Safety Act as a pivotal opportunity to safeguard children from the dangers and harms they may encounter in the digital world.

Regulation can help ensure children are consistently provided with age-appropriate experiences that prioritise their safety online. By holding tech companies accountable, it shifts the responsibility on to platforms to minimise harm and deliver more positive outcomes for children and young people. 

What is the Online Safety Act?

The UK Online Safety Act (2023) is a new law that introduces measures to protect children and adults online. The Act requires tech companies that provide user-to-user services or search engines to improve online safety by removing illegal content (which includes CSAM), addressing harmful material for children, and enforcing age limits for adult content.

Under this law, these companies are required to safeguard UK users by evaluating potential risks of harm and taking steps to address them. The new regulation applies to all relevant services with a significant number of UK users, or those targeting the UK market, irrespective of the companies’ locations.  

The Act officially became law in October 2023.

Ofcom, the independent communications regulator in the UK, has since been working to implement the new legislation. In codes of practice, Ofcom has drafted specific steps that providers can take to meet their safety obligations. Under the new regulatory framework, Ofcom possesses the power to assess and enforce compliance among service providers.

With the regulatory regime now in effect, for Ofcom “2025 is the year of action for services. Sites and apps must now act to better protect users online, especially children”.


Why is the Act important?

The Online Safety Act represents a crucial advancement in safeguarding children from online sexual abuse.  

In 2024, the IWF’s team of analysts acted on over 290,000 webpages displaying this illegal content, with each page containing hundreds, if not thousands, of indecent images of children.

This is the highest number of child sexual abuse webpages the IWF has ever discovered in its 29-year history, and represents a five per cent increase on the 275,650 webpages identified in 2023.

To tackle the circulation of CSAM online, the Act establishes safety duties for online services. Platforms are required to conduct risk assessments to evaluate the likelihood and impact of child sexual exploitation and abuse on their sites. These services must then take appropriate measures to mitigate identified risks, as well as actively identify and remove illegal content.

Under the Illegal Harms Codes, for the first time, platforms will be legally required to detect and remove known CSAM. The measures to tackle CSAM include deploying hash-matching technology to detect and remove child sexual abuse material, as well as detecting and removing content matching listed CSAM URLs.
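
As a rough illustration of how hash-list and URL-list matching works in principle, the sketch below checks an uploaded file against a set of known hashes and a submitted link against a set of listed URLs. It is a minimal, hypothetical sketch only: the KNOWN_CSAM_HASHES and LISTED_CSAM_URLS sets are placeholders, and real deployments rely on licensed hash and URL lists (such as those the IWF provides) and perceptual hashing that is robust to re-encoding, rather than the plain SHA-256 comparison shown here.

```python
# Illustrative sketch only. Production systems use licensed hash/URL lists and
# perceptual hashing, not the plain SHA-256 lookup shown here.
import hashlib

# Hypothetical placeholder sets; in practice these would be populated from
# lists supplied by a trusted body such as the IWF.
KNOWN_CSAM_HASHES: set[str] = set()
LISTED_CSAM_URLS: set[str] = set()

def matches_known_hash(file_bytes: bytes) -> bool:
    """Hash the uploaded file and look it up in the known-hash set."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_CSAM_HASHES

def matches_listed_url(url: str) -> bool:
    """Check a user-submitted link against the listed-URL set."""
    return url.strip().lower() in LISTED_CSAM_URLS

# Example: a service would run these checks at upload or share time and
# remove or block anything that matches.
```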

The greater the risk on a service, the more measures and safeguards are needed to keep users safe from harm and prevent the grooming and exploitation of children. With the implementation of the Act, the sharing of CSAM should become significantly more difficult.


What is contained in the Online Safety Act?

The Act requires online user-to-user services and search services to take robust action against illegal content and activity. Platforms will be required to implement measures to reduce the risk that their services are used for illegal offending. They will also need to put in place systems for removing illegal content when it does appear.

Companies that fail to comply with the new regulations face fines of up to £18 million or 10% of their qualifying worldwide revenue, whichever is greater. Criminal action may also be pursued against senior managers if their company does not ensure compliance with information requests from Ofcom. The regulator will also have the authority to hold companies and senior managers criminally liable if they fail to adhere to enforcement notices regarding specific child safety duties related to child sexual abuse and exploitation on their platforms.


Preventing illegal content

In December 2024, Ofcom published its first codes of practice and guidance for services to prevent illegal content on their platforms. For the first time, platforms will be legally required to detect and remove known child sexual abuse material, and the IWF stands ready to support platforms in meeting this obligation.

Search services will also have to take steps to reduce the risk of users encountering illegal content via their services. Companies with websites that are likely to be accessed by children need to take steps to protect children from harmful content (including pornography) and behaviour.

We expect to see a significant increase in the range of providers hash matching for known CSAM. For example, all file-storage and file-sharing services will have to undertake hash matching, regardless of size. This is a welcome step and will help to ensure the UK becomes the safest place to be a child online.

The Codes also include measures to tackle online grooming. For example, children’s profiles and locations will not be visible to other users, and non-connected accounts cannot send them direct messages.  

The deadline for platforms to assess the risk of illegal harms on their services is 16 March 2025. Subject to the Codes completing the Parliamentary process, from 17 March 2025, providers will need to take the safety measures set out in the Codes or use other effective measures to protect users from illegal content and activity.


Private communications

The Act states that Ofcom may not recommend the use of proactive technology to analyse user-generated content communicated privately, or metadata relating to user-generated content communicated privately. This means that Ofcom cannot direct service providers to deploy proactive technology, such as the IWF’s services, in private communications.

Regarding end-to-end encrypted (E2EE) environments, Ofcom has outlined criteria for when an E2EE service would be considered private, and in what circumstances it would be considered public.


New powers for Ofcom

Section 121 of the Act sets out Ofcom’s powers to require services to use accredited technology, including in private messaging, to tackle child sexual exploitation and abuse (as well as terrorism).

The section states that if Ofcom considers that it is necessary and proportionate to do so, the regulator may give a notice to a service provider. The notice requires the provider to use its “best endeavours to develop or source technology for use on or in relation to the service or part of the service”.  

This proactive approach is critical for the detection of CSAM, ensuring that safeguards are in place wherever there is a risk to children or potential exposure to CSAM. By enforcing the requirement for companies to use their best endeavours in detecting CSAM, Ofcom can help ensure that technology is effectively deployed to combat these threats.

It is crucial that Ofcom fully leverages its authority under Section 121 of the Act, compelling tech companies to take all necessary measures to detect, block, and prevent the upload, sharing, and storage of images and videos depicting child sexual abuse.


Age assurance

The Act requires that services hosting pornography or other harmful content implement 'age assurance' measures so that children are not normally able to access such material.

Age assurance methods—such as age verification, age estimation, or a combination of both—must be ‘highly effective’ in accurately determining whether a user is a child.

On 16 January 2025, Ofcom published its Age Assurance and Children’s Access Statement, which sets out the next steps for platforms that publish their own pornographic content (Part 5 services) and for user-to-user services (Part 3 services).

All services that allow pornography must implement highly effective age assurance to ensure that children are not normally able to access pornographic content by July 2025 at the latest.


Age-appropriate experiences

In Spring 2025, Ofcom will release its rules on age assurance for Part 3 services. The rules will determine whether social media platforms will be required to use age assurance measures to ensure that children under 13 do not access their services.

The Act requires social media companies to enforce their age limits consistently and protect their child users. Services with age restrictions will be required to specify in their terms of service what measures they use to prevent underage access and apply these terms consistently.

Services must also assess any risks to children from using their platforms and set appropriate age restrictions, ensuring that child users have age-appropriate experiences and are shielded from harmful content.

The new laws mean social media companies will have to say what technology they are using, if any, and apply these measures consistently. Companies can no longer say their service is for users above a certain age in their terms of service and do nothing to prevent younger children accessing it.


Enforcement of the Online Safety Act

To assess and monitor industry compliance with the illegal content risk assessment duties under the Act, Ofcom has launched an enforcement programme.

Certain large services, as well as small but risky sites, must submit illegal harms risk assessments to the regulator by 31 March 2025.

Providers must determine how likely it is that users could encounter illegal content on their service, or, in the case of user-to-user services, how they could be used to commit or facilitate certain criminal offences. Providers must also make and keep a written record of their risk assessment, including details about how it was carried out and its findings.

Ofcom will scrutinise the compliance of sites and apps that may present particular risks of harm from illegal content due to their size or nature – for example because their users may risk encountering some of the most harmful forms of online content and conduct, including child sexual exploitation.

Providers are required, by law, to respond to any statutory request for information by Ofcom in an accurate, complete and timely way. If any platform does not provide a satisfactory response by the deadline, Ofcom will open investigations into individual service providers.

Ofcom has strong enforcement powers at its disposal, including being able to issue fines of up to 10% of turnover or £18m – whichever is greater – or to apply to a court to block a site in the UK in the most serious cases.


What is next for the Online Safety Act?

As of March 2025, Ofcom has completed three major consultations on illegal harms, protection of children, and its enforcement powers respectively. It issued its illegal harms codes in December 2024 and these, as required by the Act, were laid before Parliament.

It published its child access assessment guidance for Part 3 services and age assurance guidance for Part 5 (pornography service) providers in January 2025, and its child protection code is due to follow in Spring 2025.

Ofcom has also launched an enforcement programme to assess industry compliance with the illegal harms codes. Certain large services, as well as small but risky sites, must submit illegal harms risk assessments to the regulator by 31 March.

The regulator has published consultations on its violence against women and girls (VAWG) guidance, information-gathering powers and its technology notice powers.

For more details of the timeline of the Act’s implementation, see Ofcom’s updated roadmap.


At the IWF, we believe the Online Safety Act has the potential to transform child safety online.

Companies looking to ensure their platforms comply with the provisions set out in the UK Online Safety Act and contribute to making the internet a safer place for all can apply to join the Internet Watch Foundation as Members. 

Members can gain access to a range of cutting-edge datasets and alerts to protect their platforms, brands and customers from known CSAM, as well as early insights into threats and trends.

Find out more here or contact our team directly at [email protected].