The Internet Watch Foundation (IWF) revealed in its 2025 annual report, published this April, that it received 245 reports of AI-generated child sexual abuse imagery that broke UK law in 2024 – an increase of 380% on the 51 reports seen in 2023 – comprising 7,644 images and a small number of videos. The largest proportion of those images was “category A” material – the most extreme type of child sexual abuse content, including penetrative sexual activity or sadism – accounting for 39% of the actionable AI material seen by the IWF. The IWF also reported record levels of webpages hosting child sexual abuse imagery in 2024, with 291,273 reports last year, an increase of 6% on 2023. Furthermore, according to data published by the NSPCC in November 2024, 19% of children aged 10 to 15 had exchanged messages in the past year with someone online whom they had never met before; and over 9,000 child sexual abuse offences recorded in 2022/23 involved an online element.
In April 2025, Ofcom announced new rules for tech firms to keep children safe online. Social media and other internet platforms will be legally required to block children’s access to harmful content from 25th July or face large fines – and, in extreme cases, be shut down – under the UK’s Online Safety Act. Ofcom published over 40 measures covering sites and apps used by children, ranging from social media to search and gaming. Under the measures, the “riskiest” services must use “highly effective” age checks to identify under-18 users; algorithms that recommend content to users must filter out harmful material; all sites and apps must have procedures for taking down dangerous content quickly; and children must have a “straightforward” way to report content. Separately, the IWF has announced that it is making a new safety tool, called Image Intercept, available to smaller websites for free, to help them spot and prevent the spread of abuse material on their platforms.
Meanwhile, the Secretary of State for Science, Innovation and Technology, Peter Kyle, has said that he is considering a social media curfew for children after TikTok’s introduction of a feature that encourages under-16s to switch off the app after 10pm. Kyle has also insisted that the Online Safety Act will not be a bargaining chip in any negotiations with the Trump administration over the threat of tariffs being imposed on British exports to the US, following criticism of the act, on free speech grounds, by US Vice-President JD Vance.
Mark Jones, a partner at the law firm Payne Hicks Beach, argues that the new Ofcom regulations mark a “considerable sea change” in dealing with illegal or harmful content, as they require tech companies to be proactive in identifying and removing dangerous material. However, online safety campaigner Ian Russell has said that the codes were “overly cautious” and put tech company profit ahead of tackling harmful content, stating: “I am dismayed by the lack of ambition in today’s codes. Instead of moving fast to fix things, the painful reality is that Ofcom’s measures will fail to prevent more young deaths like my daughter Molly’s.” Russell’s charity, the Molly Rose Foundation, argues that the codes do not go far enough to moderate suicide and self-harm content or to block dangerous online challenges. The Children’s Commissioner for England, Rachel de Souza, has also criticised the measures and accused Ofcom of prioritising tech companies’ business interests over children’s safety; while the NSPCC wants tougher measures on strongly encrypted messaging services such as WhatsApp, although it describes the measures as a “major step forward”.
This symposium will provide stakeholders, including children’s services, schools, police, central and local government agencies, with an invaluable opportunity to review regulations, legislation and government policy relating to online child safety and to discuss options for better protecting children and reducing risks online. It will also enable delegates to formulate collaborative measures to support victims of online abuse and raise levels of digital literacy to help children thrive online.
Programme
- Learn about and assess Ofcom’s new safety measures for protecting children online and evaluate how they could be strengthened
- Examine UK government policy relating to the protection of children online and develop a comprehensive national strategy for child online safety
- Evaluate the robustness of the Online Safety Act in protecting children and explore avenues for improvement
- Exchange views on whether rules placing restrictions on social media use by children should be introduced
- Promote the role that schools can play in teaching children how to stay safe online
- Discuss the role of multi-agency cooperation in enforcing the duty of care for online platforms, and the role that technology companies can play in strengthening online protections, monitoring and reporting
- Share best practice in promoting digital literacy and awareness of online risks, working collaboratively to protect children from online harms, and supporting young victims among parents and guardians, schools, children’s services, and the police
- Propose measures to strengthen the ability of law enforcement agencies to tackle perpetrators of online harm against children, increase the reporting of online abuse and harmful content to the police, and better support victims
To register for the briefing, please click here.