OFCOM (Duty regarding Prevention of Serious Self-harm and Suicide) Bill [HL] Debate

Department: Department for Digital, Culture, Media & Sport
Moved by
Baroness Finlay of Llandaff

That the Bill be now read a second time.

Baroness Finlay of Llandaff (CB)

My Lords, I am most grateful to the Samaritans for all its help with this Bill, and to Papyrus, YoungMinds, the Mental Health Foundation, the British Psychological Society, If U Care Share and others for their support. I am also grateful to the Library for updating its full briefing.

The original Second Reading of this Bill was cancelled due to the sad death of Her Majesty the Queen. It now falls between Second Reading and Committee of the Government’s Online Safety Bill. In the spirit of co-operation called for by the noble Lord, Lord Stevenson of Balmacara, on Wednesday evening, I hope today’s debate will help identify how the principle of my Bill could improve the Online Safety Bill. My Bill would create a duty on Ofcom that complements the Online Safety Bill. In practice, this means that Ofcom would need to assess how prevalent self-harm and suicide content is online, and whether the legislative regime is well-equipped to protect individuals from being exposed to and fed excessively harmful content.

Why did I table this Bill? In 2021, 5,583 people in England and Wales took their own lives. Suicide is complex, rarely caused by one thing and cuts across all age groups. A University of Bristol study found that participants with severe suicidal thoughts actively used the internet to research an effective method, and often found clear suggestions. We must recognise that the smaller platforms—not just category 1 or 2A platforms—have some of the most explicit and harmful details.

Self-harm signals serious emotional distress and is a strong risk factor for future suicide, although fortunately most people who self-harm will not go on to take their own life. For 20 years, self-harm rates have increased, particularly among young people, and have more than doubled in England since the turn of the millennium. Among those surveyed by Samaritans, three-quarters had harmed themselves more severely after viewing self-harm content online. Some 78% of people with lived experience of suicidality and self-harm want new laws to make online spaces safer. The internet can be an invaluable space for individuals to access support and to express difficult feelings, but its algorithms can also barrage people with content that encourages or exacerbates self-harm and suicidal behaviours.

The Law Commission’s 2021 report on modernising communications recognised the need to tackle “legal but harmful”. The Online Safety Bill as now written contains two cliff edges: one is the chronological age of 18; the other is the point that content is defined as illegal. The latter is not as easy as it might seem. Section 59 of the Coroners and Justice Act 2009 states that a person commits an offence if they intentionally undertake an act

“capable of encouraging or assisting the suicide or attempted suicide of another person”,

yet no prosecution for online encouragement has been brought. Does that relate to the burden of proof required?

In the gap between these two cliff edges of age and illegality sits the thorny issue of “legal but harmful”. My Bill would require Ofcom to establish a unit to advise government on the extent to which social media platforms encourage self-harm or suicide, advise on the effectiveness of current regulations and make recommendations. This would support suicide prevention strategies across public health and education.

Last summer, we heard about ligature challenges so harmful that youngsters died or were brain damaged. Now, the virtual reality environment, the metaverse, simulates a real-world arena for practising offending behind closed doors—a pathway to real-life abuse.

Clause 2 recognises that people react in different ways to what they find online, so what is harmful to one person is not harmful to another. What matters is whether the information is posted or sent with malicious intent, without reasonable excuse. What can be the justification for flooding people with ever more violent, disturbing images, other than profit? No one can pretend that that is providing support.

The Government’s decision to remove regulation of legal but extremely harmful content is a backward step, given that susceptibility to harm does not end when people reach the age of 18. It will leave huge amounts of dangerous content widely available, from instruction on methods to pushed content portraying and romanticising self-harm and suicide as positive and desirable. New research commissioned by the Samaritans found that the Government’s removal of protection of over-18s from damaging content goes directly against what the public want. Four in five—83%—agree that harmful suicide and self-harm content can have a damaging effect on adults, not just children. Fewer than one in six think that access should be restricted only for children. Removing the regulation of legal but extremely harmful content means that platforms will not need to consider risk to adult users or victims. Although platforms will need to provide empowerment tools for such content, these will not protect the vulnerable users who are already drawn to or sucked into damaging content.

The creation of the new offence of encouragement or assistance of serious self-harm should be introduced in time to be listed as priority legal content within the Online Safety Bill. It needs to be drafted narrowly, so that at-risk individuals and charities providing self-harm services are not criminalised. As the noble Lord, Lord Sarfraz, said at Second Reading of the Online Safety Bill,

“we cannot always play catch-up with technology.”—[Official Report, 1/2/23; col. 762.]

Technologies are emerging faster than we can imagine and can assist in plugging the gap of so-called legal but harmful content. That will be the only way to make the internet safer, rather than a playing field for those of mal-intent who profit from exploiting the vulnerabilities of people.

We need completely different approaches from those of film or television classification because material is constantly being posted on the internet, and no human being can keep up with that. Generic approaches must set standards against which monitoring can occur so that risk of harm is minimised. That will involve engaging with highly sophisticated techniques in artificial intelligence, not crude algorithms, while accepting that artificial intelligence will make mistakes just as humans do, and that the accuracy depends on the way that screening mechanisms are trained.

In preparing for the Bill I asked the question: “How could AI filter out harmful content on the internet?” I got the reply that AI can filter out harmful content by using various techniques, such as natural language processing, image recognition, video analysis and machine learning. With this came the statement that

“it is important to note that AI is not perfect and can still make mistakes. It is crucial to have human oversight and review of AI-generated results to ensure the accuracy and fairness of content filtering.”

I then asked: “How accurate is AI? Could it accidentally remove content that is not harmful?”, to which I received the response that the accidental removal of content that is not harmful can happen for several reasons, including bias in training data, ambiguous content and false positives. As well as needing human oversight, I was told that:

“It is also important to continually evaluate and improve AI models to reduce the risk of mistakes.”


It was an AI chatbot that gave me those answers, in seconds.

I also asked the site to write a short speech about my Bill. The result would have been rather good for a school debate—I fear that some of your Lordships might even have thought it better than my speech today. Yesterday’s science fiction is here today. I beg to move.

--- Later in debate ---
Baroness Finlay of Llandaff (CB)

My Lords, I am extremely grateful to everyone who has spoken today. I am most grateful to the Minister for stressing that he is keeping an open mind and has an open door. Of course, a Private Member’s Bill should not conflict in any way with a really major piece of legislation. It has been clear that we all want the same thing: we want to make things safer, not less safe.

I am particularly grateful to the noble Baroness, Lady Smith of Newnham, for having shared with us the real issue of addiction that is behind so many of the behaviours that become harmful and that capture people in extremely destructive patterns. It is that addiction cycle in the brain, born out of childhood trauma, that she illustrated to us so powerfully.

I am also grateful to all who have paid tribute to the parents who, in their pain, have had the courage to say, “We must do something.” They have been named in this Chamber.

The noble Baroness, Lady Blower, with her extensive awareness of education, has rightly highlighted how it is actually the young who move forward. The noble Baroness, Lady Merron, has pointed out that the data does not stop at 18; the tragedies carry on. As has also been pointed out by the noble Baroness, Lady Benjamin, it is students who kill themselves as well. Every university dreads the phone call that one of its students has killed themselves, and every university dreads discovering what it had missed in the antecedents to that disaster.

My noble friend Lady Grey-Thompson pointed out the important work that has come out of Swansea showing how viewing content really escalates the desire to self-harm; it is that hooking in that comes in. I am grateful to the noble Lord, Lord Balfe, for suggesting the wording of “apparent malicious content”, because of course there are people out there of mal-intent, and they will always make some nice wriggly excuse as to why what they are doing is not really harming anyone else.

Before I came into this debate, I had a call with my noble friend Lady Kidron about what is emerging about the metaverse. It is beyond anything that any of us have imagined; it is unbelievably harmful. As the noble Lord, Lord Clement-Jones, said, we must not be playing catch-up. It is the metaverse that will present the greatest threat, because it plays on mental distortion to expand it, and that increases the mental harms to everyone.

I am really grateful that we had this debate today, and I think it was timely that it came in between Second Reading and Committee on the Online Safety Bill. I assure the Minister that I and my noble friends within this Chamber on all Benches will be beating a path to his open door. I do not think he is going to be able to close it, and in fact he will not be able to lock it because we will just break it down. We need to move this forward and get it right. I beg to move.

Bill read a second time and committed to a Committee of the Whole House.