Debates between Chi Onwurah and Natasha Irons during the 2024 Parliament

Online Harm: Child Protection

Debate between Chi Onwurah and Natasha Irons
Tuesday 24th February 2026


Commons Chamber
Dame Chi Onwurah

I really thank the hon. Member for that intervention, because that is exactly one of the recommendations of the Committee’s inquiry. As he says, the advertisement-based business models of most social media companies mean that they promote addictive content regardless of authenticity. This spills out across the entire internet via the unclear, under-regulated digital advertising market, incentivising the creation of content that will perform well on social media, as we saw during the 2024 unrest following the horrendous Southport attacks.

This is not just a social media problem, though. It is a systemic issue that promotes harmful content and undermines public trust. The Committee identified five key principles that we believe are crucial for building public trust. The first is public safety. Public safety matters; I hope it is not necessary to debate that. The second is free and safe expression, which is also very important. The third is responsibility on the part of the platforms. Right now, they have no legal responsibility for the content they amplify; they just have to follow their own processes in certain specific cases. Our fourth principle involves control, and the fifth and final principle is transparency. We made detailed recommendations on regulating the advertising-based business model so that amplification would not be incentivised in the way that was outlined by the hon. Member for Carshalton and Wallington (Bobby Dean). We also recommended a right to reset—the right of a person to remove their data from any algorithm.

Our report came out not long before the Minister took up his position. The Government accepted all our conclusions but none of our recommendations. I urge them to look again at our recommendations and to consider implementing them, or at least to explain why they will not be implemented. I welcome the Government’s recent actions and their readiness to intervene. As I said, the consultation is critical. I welcome the desire to promote a consensus and to take measures to ensure swift delivery of the consultation conclusions through the Children’s Wellbeing and Schools Bill. The consideration of the inclusion of AI chatbots is important, as is addressing the risky features in certain models and providing support for bereaved parents. The Committee looks forward to working with the Government to try to achieve their aims. We need evidence to drive policy, and regulation based on principles in which the public can have confidence.

Natasha Irons

I wanted to intervene on the point about principles, content and responsibility. I worked for Channel 4 before I came to this place, and we were regulated by Ofcom. Channel 4 did not create its own content, but it was responsible for the editorial curation of that content and was beholden to certain standards. Does she agree that we should hold these media companies—they are no longer “new media” companies, but legacy media companies—just as responsible for the content they put out on their platforms as any broadcaster?

Dame Chi Onwurah

My hon. Friend makes an important point; the insight she brings from her career in the media is critical. For many years, while the platforms were just that—platforms on which other people placed content—there was an argument that they should not be regulated and that they did not have a responsibility for the content on them, but they are at the very least active curators of that content now. Algorithms effectively form digital twins of individuals and then drive individualised content at them. That requires a responsibility. The time is right, as our Committee recommended, to ensure that platforms have responsibility for their content.

The Science, Innovation and Technology Committee will be holding a one-off session on social media age restrictions on 11 March to feed into the Government’s consultation on measures to keep children safe online and to hear from social media companies on their progress in the last year. We will also gauge the strength of the evidence for and against an age-based ban on social media, as well as any evidence relating to proposed alternatives to a ban. In doing so, we will hear from experts and representatives of those with direct experience of harms. We want to hear from both sides of the debate in the UK and will be seeking evidence from Australia on the first few months of the ban that is already in force there. We will be hearing from major social media and technology companies in a follow-up to our algorithms and misinformation inquiry, and we will ask for their views on the proposed age limits.

Finally, the work on social media age restrictions will feed into a larger inquiry on the neuroscience of digital childhood, which we will launch in the coming weeks. We want to find out how spending their formative years online affects young people’s brains, and what the Government should do to protect them from any negative impact. That could cover the impact of social media and other screen time on brain development, behaviour, and physical and mental health, whether positive or negative. It could also cover the physiological impact on eye development, the impact on socialisation and what actions Governments should take. There is a consensus on the need to do something, but not on what needs to be done. That is why we are seeking to provide evidence.

I always say to the platform companies that the alternative to good regulation is not no regulation, but bad regulation. More regulation is coming. Several US states, such as California, have brought in new regulation on big tech. The Spanish Prime Minister has called social media a

“failed state where laws are ignored and crimes are tolerated”.

There is also the increasingly significant issue of technology sovereignty and whether we are too dependent on foreign companies for our online environment. I call myself a tech evangelist, and I am, but I also know how much an engineer costs. The starting salary of an AI engineer—if companies can find one—is well over £100,000 a year. Tech companies are not going to put them to work on protecting and keeping our children safe unless the House puts the right incentives in place. With all due respect to the Minister and the Online Safety Act, which he inherited, they are not in place now.