Online Harms Debate
Commons Chamber
Judith Cummins (Labour - Bradford South)
I call Ian Sollom, who will speak for up to 15 minutes.
Ian Sollom
The text of the motion asks for a review, and that is certainly what I want to see.
I have not come here today to stir up panic or to imply that the wellbeing of our children, or indeed our adults, is doomed. There is hope and we should not have to accept harm as a reality of life on the internet. As the Molly Rose Foundation chief executive officer, Andy Burrows, noted this week after campaigning pushed both TikTok and Meta to row back on plans for end-to-end encryption in direct messaging,
“tech firms are not immune to pressure”.
However, pressure on its own is not enough. The Government must urgently look at strengthening the Online Safety Act to ensure that pressure has robust legislative backing behind it, and that Ofcom actually has the power to enforce the regulations that will protect us all from harm.
Online harm comes in three forms. First, there is harmful content: the outright illegal and the extreme, posted and peddled by bad actors across social media platforms. Then we have harmful interactions with bad actors, including grooming, cyber-bullying and extortion. I am sure that Members across the House will share many stories of the impact of both types of harm today; it is a tragedy just how many there are. I want to focus on the third form of online harm, which is the harm that arises from not just the type of content encountered online, but the intensity with which it is repeatedly pushed on to young people by the platforms themselves.
This week, I was pleased to participate in the Royal Society pairing scheme. I was paired with Dr Lizzy Winstone, a researcher from the University of Bristol whose work focuses on how young people use social media and its impact on their mental health. Her most recent research investigates the algorithmic recommendation of content as one of the primary mechanisms that shapes young people’s digital mental health. She and others have found that a large part of online harm is structural, arising from not just individual bad actors, but business models designed at their very core to maximise attention and to profit from provocation.
Social media is built to be addictive. Hooking users in and keeping them engaged is at the very heart of almost every platform’s business model. Algorithmic models cause harm through both overtly harmful content and content that is harmless on the face of it. There are attention deficit harms caused by passive screen watching and health harms associated with an increasingly sedentary lifestyle. Higher social media use has been directly linked to shorter sleep duration and difficulties with sleep onset. Gambling harm is often overlooked, but a recent Guardian investigation found that Meta AI was pointing vulnerable social media users to illegal online casinos and even suggesting ways to bypass UK gambling safeguards. Regulation is clearly not keeping pace with the evolving digital landscape.
Often, it is the directly harmful, even illegal, content that is caught up in these algorithms. The shock, disgust and strong emotion inevitably caused by this content creates engagement: we watch for longer, we engage more, and the algorithm takes this as permission to show us even more of it to keep us hooked. Endless scrolling functionalities allow already vulnerable users to fall into a world where there is no escape from this cycle. Members will be aware that we Liberal Democrats have long called for platforms to implement built-in caps on social media doomscrolling.
In 2022, a coroner concluded for the first time that content on social media had contributed to the death of a young person: teenager Molly Russell, who tragically took her own life in 2017. Before she died, she had viewed thousands of suicide and self-harm videos and images on Pinterest and Instagram, some of which were pushed to her without her asking to see them. The word used by the coroner was “binge”: Molly was able, even encouraged by platforms, to binge on this content.
The normalisation of these recommendation mechanisms has created an awful, self-perpetuating cycle. One case study from the University of Bristol described a 17-year-old girl who was forcing herself to repeatedly watch graphic content of a gory accident on TikTok to try to desensitise herself to violence. She knew that she would be regularly exposed to this kind of content online and wanted to train herself to be able to watch it and not feel sick. We can only assume that due to her increased attention, she was shown even more of this horrific content.
Recommendation systems in and of themselves are no bad thing. They create a personalised space to explore interests and sometimes do filter out content that a user has no interest in. The problem is that a user’s engagement with content does not always indicate their actual interest in it. Another young person from the University of Bristol study—a trans man—described feeling compelled to intervene in homophobic and transphobic comments sections, to try to support his community and challenge prejudice. He was understood by the platform to have engaged, and subsequently he was bombarded with more and more of the same hateful content. The tension between knowing that his algorithm would register his intervention as interest and wanting to actively challenge hateful views was a constant source of stress online.
Problems also arise from a lack of transparency. Not only are social media platforms under no obligation to publish their algorithms, but with AI increasingly being used to build and continually iterate these algorithms, the platforms themselves are often unaware of the exact mechanisms that shape experience. Harm is occurring as a result of an unaccountable black box. Young people are not entirely passive in this system—they know it is happening—but platform tools provide very limited control over what the algorithm continues to recommend.
Looking at Ofcom’s summary of the protection of children codes of practice, we can see how a weak interpretation of the Online Safety Act is allowing such harm to be perpetuated. Volume 4, section 17 says that platforms must
“Ensure content recommender systems are designed and operated so that content indicated potentially to be PPC”—
primary priority content, which is suicide, self-harm, eating disorders and mental health content—
“is excluded from the recommender feeds of children”.
Research shows that children were most likely to report having seen harmful content through feeds with recommender systems—very few actively seek it out—so the intention behind this measure seems good. But then we see that it applies only to “child-accessible” parts of a service that are
“medium or high risk for one or more specific kinds of PPC”.
In Ofcom’s December review, not a single social media platform rated itself high risk for suicide or self-harm content. There is a clear gap between the intention of the legislation and how it is being implemented. That is because the Online Safety Act and its codes are ultimately built around compliance and not harm reduction. Rules-based legislation means that platforms can happily meet their legal duties if measures in the codes are followed, and they are under no obligation to effectively and proactively address the harms identified in their risk assessments. Putting only a moral duty on platforms to protect young people from harm is not going to work—we have seen for years that it does not work.
How can we expect the very same platforms that have been shown to deliberately and knowingly peddle harmful content to young people to essentially police themselves? Why would they bother when it is so much more profitable to tick already loosely defined boxes? A full review of the current legislation must investigate the barriers that Ofcom says are preventing it from delivering on the intentions of Parliament. That includes the safe harbour principle, which allows platforms to claim compliance and skirt enforcement action on harms of which they are already aware, and the complete lack of any obligation in the Act that platforms take active steps to reduce the risk of harm to users. In practice, that means that a platform can follow Ofcom’s codes to the letter, even while its own risk assessment shows that it is aware of serious ongoing harm, and face no enforcement consequences.
Amendments could be passed within months to introduce the robust, risk-based minimum age limits that we Liberal Democrats have been calling for. Minimum joining ages should be determined by a platform-specific assessment of age appropriateness and risk. That would incentivise the market to adopt lower-risk functionalities if platforms wish to open themselves to a wider pool of users.
We could argue that a review of sorts has already taken place: every coroner’s report, every tragic story told in the Chamber and every investigation by charities and organisations makes up that review. The evidence is plainly there, but the harm is being allowed to continue. We are here as Members of Parliament to scrutinise, and we have done that. There have been 12 debates with the words “online safety” in the title this Parliament and there have been hundreds of references to “online harm”, yet there has been little indication that the Government are addressing the core issues raised in this debate.
I hope that Members will use this debate to raise the full range of harms we hear about in our work. I ask the Minister to respond specifically to these questions: will the Government examine whether the safe harbour principle is serving Parliament’s original intentions or has become a mechanism that platforms use to avoid accountability for harms of which they are already aware? Will the Government commit to ensuring that any new legislation this Parliament brings forward is built around harm reduction and not compliance?
Judith Cummins
I will now introduce a time limit of six minutes.