Online Safety Bill Debate
Baroness Finlay of Llandaff (Crossbench - Life peer)
(1 year, 9 months ago)
Lords Chamber
My Lords, I thank my noble friend Lady Kidron for her tenacious moral leadership on this issue. I remind noble Lords that, when we passed the Tobacco Advertising and Promotion Act, none of us predicted tobacco companies’ development and marketing of vapes with higher and more addictive nicotine content than that in cigarettes. It was a simple lesson.
A gap remains in this Bill: the difficult issue of “legal but harmful”. We should not focus on the difficulty of defining this, but rather on the design and standards of the algorithms that internet platforms use to commercial advantage, dodging any responsibility for what happens and blaming the end user.
Before the Government amended Clauses 12 and 13, category 1 service providers would have been forced to risk-assess across their sites and provide information on this in their terms of service, including how harmful content was to be managed. But this is now gone and, as a result, the digital environment will not be detoxified as originally intended. What pressures, if any, were exerted on the Government by commercial and other sources to amend these clauses?
It matters that the Bill now treats people under 18 and over 18 very differently, because the brain’s development and peak addictive potential from puberty do not stop at 18. Those in their 20s are at particular risk.
The social media platforms act commercially, pushing out more content, including online challenges, as their algorithms pick up a keyword—whether spelled correctly or incorrectly—a mouse hovering over an image or a like response. Currently, platforms judge addiction and profit by the time spent on a platform, but that is not how addictions work. Addiction is the reward-reinforcing behaviour that evokes a chemical response in the brain that makes you want more. Hence the alcoholic, the gambling addict, the drug addict and so on keep going back for more; the sex addict requires ever more extreme images to gain stimulation; the user will not switch off access.
Those whose emotional expression is through abuse and violent behaviour find more ways to abuse to meet their urge to control and vent feelings, often when adverse childhood experiences were the antecedent to disastrous destructive behaviour. The unhappy young adult becomes hooked in by the images pushed to them after an internet search about depression, anorexia, suicidal ideation and so on. The algorithm-pushed images become compulsive viewing, as ever more are pushed out, unasked for and unsearched for, entrapping them into escalating harms.
Now, the duties in Clause 12 are too vague to protect wider society. The user should be required to opt in to content so that it can be followed, not opt out. The people controlling all this are the platform companies. They commission the algorithms that push content out. These could be written completely differently: they could push sources of support in response to searches for gambling, eating disorders, suicidal ideation, dangerously extreme sex and so on. Amending the Bill to avoid escalating harms is essential. Some of the harms are ones we have not yet imagined.
The platform companies are responsible for their algorithms. They must be made responsible for taking a more sophisticated, balanced-risk approach: the new technology of artificial intelligence could detect those users of their platforms who are at particular risk. In daily life offline, we weigh up risk, assessing harms and benefits in everything, filtering what we say or do. Risk assessment is part of life. That does not threaten freedom of speech, but it would allow “legal but harmful” to be addressed.
The Bill presents a fantastic opportunity. We must not throw it away.
Online Safety Bill Debate
Baroness Finlay of Llandaff (Crossbench - Life peer)
(1 year, 6 months ago)
Lords Chamber
Far be it from me to suggest that all the amendments tabled by the noble Lord, Lord Moylan, are in the wrong place, but I think that Amendment 26 might have been better debated with the other amendments on age assurance.
On community moderation, I underscore the point that Ofcom must have a risk profile as part of its operations. When we get to that subject, let us understand what Ofcom intends to do with it—maybe we should instruct Ofcom a little about what we would like it to do with it for community moderation. I have a lot of sympathy—but do not think it is a get-out clause—with seeing some spaces as less risky, or, at least, for determining what risky looks like in online spaces, which is a different question. This issue belongs in the risk profile: it is not about taking things out; we have to build it into the Bill we have.
On age assurance and AV, I do not think that today is the day to discuss it in full. I disagree with the point that, because we are checking kids, we have to check ourselves—that is not where the technology is. Without descending into technical arguments, as the noble Lord, Lord Moylan, asked us not to, we will bring some of those issues forward.
The noble Lords, Lord Bethell and Lord Stevenson, and the right reverend Prelate the Bishop of Oxford have a package of amendments which are very widely supported across the Committee. They have put forward a schedule of age assurance that says what the rules of the road are. We must stop pretending that age assurance is something that is being invented now in this Bill. If you log into a website with your Facebook login, it shares your age—and that is used by 42% of people online. However, if you use an Apple login, it does not share your age, so I recommend using Apple—but, interestingly, it is harder to find that option on websites, because websites want to know your age.
So, first, we must not treat age assurance as if it has just been invented. Secondly, we need to start to have rules of the road, and ask what is acceptable, what is proportionate, and when we will have zero tolerance. Watching faces around the Committee, I say that I will accept zero tolerance for pornography and some other major subjects, but, for the most part, age assurance is something that we need to have regulated. Currently, it is being done to us rather than in any way that is transparent or agreed, and that is very problematic.
My Lords, I hesitated to speak to the previous group of amendments, but I want to speak in support of the issue of risk that my noble friend Lady Kidron raised again in this group of amendments. I do not believe that noble Lords in the Committee want to cut down the amount of information and the ability to obtain information online. Rather, we came to the Bill wanting to avoid some of the really terrible harms promoted by some websites which hook into people’s vulnerability to becoming addicted to extremely harmful behaviours, which are harmful not only to themselves but to other people and, in particular, to children, who have no voice at all. I also have a concern about vulnerable people over the age of 18, and that may be something we will come to later in our discussions on the Bill.
May I intervene, because I have also been named in the noble Lord’s response? My concern is about the most extreme, most violent, most harmful and destructive things. There are some terrible things posted online. You would not run an open meeting on how to mutilate a child, or how to stab somebody most effectively to do the most harm. It is at this extreme end that I cannot see anyone in society in the offline world promoting classes for any of these terrible activities. Therefore, there is a sense that exposure to these things is of no benefit but promotes intense harm. People who are particularly vulnerable at a formative age in their development should not be exposed to them, because they would not be exposed to them elsewhere. I am speaking personally, not for anybody else, but I stress that this is the level at which the tolerance should be set to zero because we set it to zero in the rest of our lives.
Everything the noble Baroness has said is absolutely right, and I completely agree with her. The point I simply want to make is that no form of risk-based assessment will achieve a zero-tolerance outcome, but—
Online Safety Bill Debate
Baroness Finlay of Llandaff (Crossbench - Life peer)
(1 year, 6 months ago)
Lords Chamber
And it has been highly contentious whether the right to vote gives them independence. For example, you would still be accused of child exploitation if you did anything to a person under 18 in Scotland or Wales. In fact, if you were to tap someone and it was seen as slapping in Scotland and they were 17, you would be in trouble. Anyway, it should not be in this Bill. That is my point.
My Lords, perhaps I may intervene briefly, because Scotland and Wales have already been mentioned. My perception of the Bill is that we are trying to build something fit for the future, and therefore we need some broad underlying principles. I remind the Committee that the Well-being of Future Generations (Wales) Act set a tone, and that tone has run through all aspects of society even more extensively than people imagined in protecting the next generation. As I have read them, these amendments set such a tone, and I find it difficult to understand why anyone would object to that, given that, as I understood it, building in future-proofing that will protect children, among others, is a core principle behind the Bill.
My Lords, I support the amendments in the name of the noble Lord, Lord Russell, to require regulated services to have regard to the UN Convention on the Rights of the Child. As we continue to attempt to strengthen the Bill by ensuring that the UK will be the safest place for children to be online, there is a danger that platforms may take the easy way out in complying with the new legislation and just block children entirely from their sites. Services must not shut children out of digital spaces altogether to avoid compliance with the child safety duties; rather, they should design services with children’s safety in mind. Children have rights and, as the UN convention makes clear, they must be treated according to their evolving capacities and in their best interests, in consideration of their well-being.
Being online is now an essential right, not an option, to access education, entertainment and friendship, but we must try to ensure that it is a safe space. As the 5Rights Foundation points out, the Bill risks infringing children’s rights online, including their rights to information and participation in the digital world, by mandating that services prevent children from encountering harmful content, rather than ensuring services are made age appropriate for children and safe by design, as we discussed earlier. As risk assessments for adults have been stripped from the Bill, this has had the unintended consequence of making a child user even more costly to serve, relative to an adult user, as services will have substantial safety duties to comply with to protect children. 5Rights Foundation warns that this will lead services to determine that it is not worth designing services with children’s safety in mind and that it could be more cost-effective to lock them out entirely.
Ofcom must have a duty to have regard for the UNCRC in its risk assessments. Amendment 196 would ensure that children’s rights are reflected in Ofcom’s assessment of risks, so that Ofcom must have regard for children’s rights in balancing their rights to be safe against their rights to access age-appropriate digital spaces. This would ensure compliance with general comment No. 25, as the noble Lord, Lord Russell, mentioned, passed in 2021, to protect children’s rights to freedom of expression and privacy. I urge the Ministers to accept these amendments to ensure that the UK will be not only the safest place for children to be online but the best place too, by respecting and protecting their rights.
Online Safety Bill Debate
Baroness Finlay of Llandaff (Crossbench - Life peer)
(1 year, 5 months ago)
Lords Chamber
I am particularly grateful to the noble Lords who co-signed Amendments 96, 240 and 296 in this group. Amendment 225 is also important and warrants careful consideration, as it explicitly includes eating disorders. These amendments have strong support from Samaritans, which has helped me in drafting them, and from the Mental Health Foundation and the BMA. I declare that I am an elected member of the BMA ethics committee.
We have heard much in Committee about the need to protect children online even more effectively than the Bill already does. On Tuesday the noble Baroness, Lady Morgan of Cotes, made a powerful speech acknowledging that vulnerability does not stop at the age of 18 and that the Bill currently creates a cliff edge whereby there is protection from harmful content for those under 18 but not for those over 18. The empowerment tools will be futile for those seriously contemplating suicide and self-harm. No one should underestimate the power of suicide contagion and the addictive nature of the content that is currently pushed out to people, goading them into such actions and drawing them into repeated viewings.
Amendment 96 seeks to redress that. It incorporates a stand-alone provision, creating a duty for providers of user-to-user services to manage harmful content about suicide or self-harm. This provision would operate as a specific category, relevant to all regulated services and applicable to both children and adults. Amendment 296 defines harmful suicide or self-harm content. It is important that we define that to avoid organisations such as Samaritans, which provide suicide prevention support, being inadvertently caught up in clumsy, simplistic search engine categorisation.
Suicide and self-harm content affects people of all ages. Adults in distress search the internet, and children easily bypass age-verification measures and parental controls even when they have been switched on. The Samaritans Lived Experience Panel reported that 82% of people who died by suicide, having visited websites that encouraged suicide and/or methods of self-harm, were over the age of 25.
Samaritans considers that the types of suicide and self-harm content that are legal but unequivocally harmful include, but are not limited to, information, depictions, instructions and advice on methods of self-harm and suicide; content that portrays self-harm and suicide as positive or desirable; and graphic descriptions or depictions of self-harm and suicide. As the Bill stands, platforms will not even need to consider the risk that such content could pose to adults. This will leave all that dangerous online content widely available and undermines the Bill’s intention from the outset.
Last month, other parliamentarians and I met Melanie, whose relative Jo died by suicide in 2020. He was just 23. He had accessed suicide-promoting content online, and his family are speaking out to ensure that the Bill works to avoid future tragedies. A University of Bristol study reported that those with severe suicidal thoughts actively use the internet to research effective methods and often find clear suggestions. Swansea University reported that three quarters of its research participants had harmed themselves more severely after viewing self-harm content online.
Amendment 240 complements the other amendments in this group, although it would not rely on them to be effective. It would establish a specific unit in Ofcom to monitor the prevalence of harmful suicide and self-harm content online. I should declare that this is in line with the Private Member’s Bill I have introduced. In practice, that means that Ofcom would need to assess the efficacy of the legislation as it operates. It would require Ofcom to investigate the content and the algorithms that push such content out to individuals at an alarming rate.
Researchers at the Center for Countering Digital Hate set up new accounts in the USA, UK, Canada and Australia at the minimum age TikTok allows, which is 13. These accounts paused briefly on videos about body image and mental health, and “liked” them. Within 2.6 minutes, TikTok recommended suicide content, and it sent content on eating disorders within eight minutes.
Ofcom’s responsibility for ongoing review and data collection, reported to Parliament, would take a future-facing approach covering new technologies. New communications and internet technologies are being developed at pace in ways we cannot imagine. The term
“in a way equivalent … to”
in Amendment 240 is specifically designed to include the metaverse, where interactions are instantaneous, virtual and able to incite, encourage or provoke serious harm to others.
We increasingly live our lives online. Social media is expanding, while user-to-user sites are now shopping platforms for over 70% of UK consumers. However, the internet is also being used to sell suicide kits or lethal substances, as recently covered in the press. It is important that someone holds the responsibility for reporting on dangers in the online world. A systematic review found that harmful suicide content—methods and encouragement—is massed on sites with low levels of moderation and easy search functions for images. Some 78% of people with lived experience of suicidality and self-harm surveyed by Samaritans agree that new laws are needed to make online spaces safer.
I urge noble Lords to support my amendments, which aim to ensure that self-harm, suicide and seriously harmful content is addressed across all platforms in all categories as well as search engines, regardless of their functionality or reach, and for all persons, regardless of age. Polling by Samaritans has shown high support for this: four out of five agree that harmful suicide and self-harm content can damage adults as well as children, while three-quarters agree that tech companies should by law prevent such content being shown to users of all ages.
If the Government are not minded to adopt these amendments, can the Minister tell us specifically how the Bill will take a comprehensive approach to placing duties on all platforms to reduce dangerous content promoting suicide and self-harm? Can the Government confirm that smaller sites, such as forums that encourage suicide, will need to remove priority illegal content, whatever the level of detail in their risk assessment? Lastly—I will give the Minister a moment to note my questions—do the Government recognise that we need an amendment on Report to create a new offence of assisting or encouraging suicide and serious self-harm? I beg to move.
My Lords, I particularly support Amendment 96, to which I have added my name; it is a privilege to do so. I also support Amendment 296 and I cannot quite work out why I have not added my name to it, because I wholeheartedly agree with it, but I declare my support now.
I want to talk again about an issue that the noble Baroness, Lady Finlay, set out so well and that we also touched on last week, about the regulation of suicide and self-harm content. We have all heard of the tragic case of Molly Russell, but a name that is often forgotten in this discussion is Frankie Thomas. Frankie was a vulnerable teenager with childhood trauma, high-functioning autism and impulsivity. After reading a story about self-harm on the app Wattpad, according to the coroner’s inquest, she went home and undertook
“a similar act, resulting in her death”.
I do not need to repeat the many tragic examples that have already been shared in this House, but I want to reiterate the point already made by the BMA in its very helpful briefing on these amendments: viewing self-harm and suicide content online can severely harm the user offline. As I said last week when we were debating the user empowerment tools, this type of content literally has life or death repercussions. It is therefore essential that the Bill takes this sort of content more seriously and creates specific duties for services to adhere to.
We will, at some point this evening—I hope—come on to debate the next group of amendments. The question for Ministers to answer on this group, the next one and others that we will be debating is, where we know that content is harmful to society—to individuals but also to broader society—why the Government do not want to take the step of setting out how that content should be properly regulated. I think it all comes from their desire to draw a distinction between content that is illegal and content that is not illegal but is undoubtedly, in the eyes of pretty well every citizen, deeply harmful. As we have already heard from the noble Baroness, and as we heard last week, adults do not become immune to suicide and self-harm content the minute they turn 18. In fact, I would argue that no adult is immune to the negative effects of viewing this type of content online.
This amendment, therefore, is very important, as it would create a duty for providers of regulated user-to-user services and search engines to manage harmful suicide or self-harm content applicable to both children and adults, recognising this cliff edge otherwise in the Bill, which we have already talked about. I strongly urge noble Lords, particularly the Minister, to agree that protecting users from this content is one of the most important things that the Bill can do. People outside this House are looking to us to do this, so I urge the Government to support this amendment today.
My Lords, I am extremely grateful to everyone who has contributed to this debate. It has been a very rich debate, full of information; my notes have become extensive during it.
There are a few things that I would like to know more about: for example, how self-harm, which has been mentioned by the Minister, is being defined, given the debate we have had about how to define self-harm. I thought of self-harm as something that does lasting and potentially life-threatening damage. There are an awful lot of things that people do to themselves that others might not like them doing but that do not fall into that category. However, the point about suicide and serious self-harm is that when you are dead, that is irreversible. You cannot talk about healing, because the person has now disposed of their life, one way or another.
I am really grateful to the noble Baroness, Lady Healy, for highlighting how complex suicide is. Of course, one of the dangers with all that is on the internet is that the impulsive person gets caught up rapidly, so what would have been a short thought becomes an overwhelming action leading to their death.
Having listened to the previous debate, I certainly do not understand how Ofcom can have the flexibility to really know what is happening and how the terms of service are being implemented without a complaints system. I echo the really important phrase from the noble Lord, Lord Stevenson of Balmacara: if it is illegal in the real world, why are we leaving it on the internet?
Many times during our debates, the noble Baroness, Lady Kidron, has pushed safety by design. In many other things, we have defaults. My amendments were not trying to provide censorship but simply trying to provide a default, a safety stop, to stop things escalating, because we know that they are escalating at the moment. The noble Lord, Lord Stevenson of Balmacara, asked whether it was an amplification or a reach issue. I add, “or is it both?”. From all the evidence we have before us, it appears to be.
I am very grateful to the noble Lord, Lord Clement-Jones, for pressing that we must learn from experience and that user empowerment to switch off simply does not go far enough: people who are searching for this and already have suicidal ideation will not switch it off because they have started searching. There is no way that could be viewed as a safety feature in the Bill, and it concerns me.
Although I will withdraw my amendment today, of course, I really feel that we will have to return to this on Report. I would very much appreciate the wisdom of other noble Lords who know far more about working on the internet and all the other aspects than I do. I am begging for assistance in trying to get the amendments right. If not, the catalogue of deaths will mount up. This is literally a once-in-a-lifetime opportunity. For the moment, I beg leave to withdraw.
Online Safety Bill Debate
Baroness Finlay of Llandaff (Crossbench - Life peer)
(1 year, 5 months ago)
Lords Chamber
My Lords, I am very grateful to the noble Baroness, Lady Harding, for the way she introduced this group of amendments. I have added my name to Amendment 125 and have tabled probing Amendments 241 and 301 in an attempt to future-proof the Bill. As the noble Baroness has said, this is not the future but today, tomorrow and forever, going forwards.
I hope that there are no children in the Public Gallery, but from my position I cannot see.
There are some children in the Public Gallery.
Then I shall slightly modify some of the things I was going to say.
When this Bill was conceived, the online world was very different from how it is today. It is hard to imagine how it will look in the future. I am very grateful to the noble Baroness, Lady Berridge, and the Dawes Centre for Future Crime at UCL, for information that they have given to me. I am also grateful to my noble friend Lady Kidron, and the enforcement officers who have shared with us images which are so horrific that I wish that I had never seen them—but you cannot unsee what you have seen. I admire how they have kept going and maintained a moral compass in their work.
The metaverse is already disrupting the online world as we know it. By 2024, it is estimated that there will be 1.7 billion mobile augmented-reality user devices worldwide. More than one-fifth of five to 10 year-olds already have a virtual reality headset of their own, or have asked for similar technology as a gift. The AI models are also developing quickly. My Amendment 241 would require Ofcom to be alert to the ways in which emerging technologies allow for activities that are illegal in the real world to be carried out online, to identify the places where the law is not keeping up to date with technological developments.
The metaverse seems to have 10 attributes. It is multiuser and multipurpose, content is user-generated, it is immersive, and spatial interactions occur in virtual reality or in physical environments enhanced by augmented reality. Its digital aspects do not expire when the experience ends, and it is multiplatform and interoperable, as users move between platforms. Avatars are involved, and in the metaverse there is ownership of the avatars or other assets such as virtual property, cryptocurrency et cetera. These attributes allow it to be used to run training scenarios for complex situations, such as surgical training for keyhole surgery, where it can improve accuracy rapidly. On the horizon are brain/computer interfaces, which may be very helpful in rehabilitative adaptation after severe neurological damage.
These developments have great potential. However, dangers arise when virtual and augmented reality devices are linked to such things as wearable haptic suits, which allow the user to feel interactions through physical sensation, and teledildonics, which are electronic devices that simulate sexual interaction.
With the development of deep-fake imagery, it is now possible for an individual to order a VR experience of abusing the image of a child whom they know. The computer-generated images are so realistic that they are almost impossible to distinguish from real images, quite unlike those that are cartoon-generated. An avatar can sexually assault the avatar of a minor, and such an avatar of the minor can be personalised. Worryingly, there have been growing reports of these assaults and rapes happening. Since the intention of VR is to trick the human nervous system into experiencing perceptual and bodily reactions, while such a virtual assault may not involve physical touching, the psychological, neurological and emotional experience can be similar to a physical assault.
This fuels sex addiction and violence addiction, and is altering the offender pathway: once the offender has engaged with VR abuse material, there is no desire to go back to 2D material. Offenders report that they want more: in the case of VR, that would be moving to live abuse, as has been said. The time from the development of abnormal sexual desires to real offending is shortened as the offender seeks ever-increasing and diverse stimulation to achieve the same reward. Through Amendment 125, such content would be regarded as user-generated.
Under Amendment 241, Ofcom could suggest ways in which Parliament may want to update the current law on child pornography to catch such deep-fake imagery, as these problematic behaviours are illegal in the real world but do not appear to be illegal online or in the virtual world.
Difficulties also arise over aspects of terrorism. It is currently a criminal offence to attend a terrorist training ground. Can the Minister confirm that Amendment 136C, which we have debated and which will be moved in a later group, would make attending a virtual training ground illegal? How will Ofcom be placed to identify and close any loopholes?
The Dawes Centre for Future Crime has identified 31 unique crime threats or offences which are risks in the metaverse, particularly relating to child sexual abuse material, child grooming, investment scams, hate crime, harassment and radicalisation.
I hope the Minister can confirm that the Bill already applies to the metaverse, with its definition of user-to-user services and technology-neutral terminology, and that its broad definition of “encountering” includes experiencing content such as haptic suits or virtual or augmented reality through the technology-neutral expression “or other automated tool”. Can the Minister also confirm that the changes made in the other place in Clause 85 require providers of metaverse services to consider the level of risk of the service being used for the commission or facilitation of a priority offence?
The welcome addition to the Bill of a risk assessment duty, however, should be broadened to include offences which are not only priority offences. I ask the Minister: will the list of offences in Schedules 5 to 7 to the Bill be amended so that other harmful offences can be added, such as sexual offences against adults, impersonation scams, and cyber-physical attacks such as cyber burglary, which can lead to planned burglary, attacks on key infrastructure and assault?
The ability to expand the risk assessment criteria could future-proof the Bill against such offences by keeping the list open, rather than closed as it is at the moment, to other serious offences committed in user-to-user or combined service providers. Such duties should apply across all services, not only those in category 1, because the smaller platforms, which are not covered by empowerment duties, may present a particularly high risk of illegal content and harmful behaviours.
Can the Minister therefore please tell us how content that is illegal in the real world will be reported, and how complaints can be made when it is encountered, if it is not a listed priority offence in the Bill? Will the Government expand the scope to cover not only illegal content, as defined in Clauses 207 and 53, but complex activities and interactions that are possible in the metaverse? How will the list of priority offences be expanded? Will the Government amend the Bill to enable Ofcom to take a risk-based approach to identifying who becomes classified as a category 1 provider?
I could go on to list many other ways in which our current laws will struggle to remain relevant against the emerging technologies. The list’s length shows the need for Ofcom to be able to act and report on such areas—and that Parliament must be alive to the need to stay up to date.
My Lords, I am grateful to the noble Baroness, Lady Finlay of Llandaff, for tempering her remarks. On tempering speeches and things like that, I can inform noble Lords that the current school group have been escorted from the Chamber, and no further school groups will enter for the duration of the debate on this group of amendments.
My Lords, I apologise to my noble friend. I ask that we pause the debate to ask this school group to exit the Chamber. We do not think that the subject matter and content will be suitable for that audience. I am very sorry. The House is pausing.
In this moment while we pause, I congratulate the noble Lord, the Government Whip, on being so vigilant: some of us in the Chamber cannot see the whole Gallery. It is appreciated.
I, too, thank my noble friend the Government Whip. I apologise too if I have shown any discourtesy to the Committee: I was not sure whose name was on which amendment, so I will continue.
Physically, I am, of course, working in my home. If that behaviour had happened in the office, it would be an offence, an assault: “intentional or reckless application of unlawful force to another person”. It will not be an offence in the metaverse and it is probably not harassment because it is not a course of conduct.
Although the basic definition of user-to-user content covers the metaverse, as does encountering, as has been mentioned in relation to content under Clause 207, which is broad enough to cover the haptic suits, the restriction to illegal content could be problematic, as the metaverse is a complex of live interactions that mimics real life and its behaviours, including criminal ones. Also, the avatar of an adult could sexually assault the avatar of a child in the metaverse, and with haptic technologies this would not be just a virtual experience. Potentially even more fundamentally than Amendment 125, the Bill is premised on the internet being a solely virtual environment when it comes to content that can harm. But what I am seeking to outline is that conduct can also harm.
I recognise that we cannot catch everything in this Bill at this moment. This research is literally hot off the press; it is only a few weeks old. At the very least, it highlights the need for future-proofing. I am aware that some of the issues I have highlighted about the fundamental difference between conduct and content refer to clauses noble Lords may already have debated; it is just happenstance that the research came out when it did. However, I believe that these points are significant. I would be grateful if the Minister would meet the Dawes Centre urgently to consider whether there are further changes the Government need to make to the Bill to ensure that it covers the harms I have outlined.
This is a very interesting discussion; the noble Lord, Lord Knight, has hit on something really important. When somebody does an activity that we believe is criminal, we can interrogate them and ask how they came to do it and got to the conclusion that they did. The difficulty is that those of us who are not super-techy do not understand how you can interrogate a bot or an AI which appears to be out of control on how it got to the conclusion that it did. It may be drawing from lots of different places and there may be ownership of lots of different sources of information. I wonder whether that is why we are finding how this will be monitored in future so concerning. I am reassured that the noble Lord, Lord Knight of Weymouth, is nodding; does the Minister concur that this may be a looming problem for us?
I certainly concur that we should discuss the issue in greater detail. I am very happy to do so with the noble Lord, the noble Baroness and others who want to do so, along with officials. If we can bring some worked examples of what “in control” and “out of control” bots may be, that would be helpful.
I hope the points I have set out in relation to the other issues raised in this group and the amendments before us are satisfactory to noble Lords and that they will at this point be content not to press their amendments.
Online Safety Bill Debate
Baroness Finlay of Llandaff (Crossbench - Life peer)
(1 year, 4 months ago)
Lords Chamber
My Lords, I will address my remarks to government Amendment 268AZA and its consequential amendments. I rather hope that we will get some reassurance from the Minister on these amendments, about which I wrote to him just before the debate. I hope that that was helpful; it was meant to be constructive. I also had a helpful discussion with the noble Lord, Lord Allan.
As has already been said, the real question relates to the threshold and the point at which this measure will kick in. I am glad that the Government have recognised the importance of the dangers of encouraging or assisting serious self-harm. I am also grateful for the way in which they have defined it in the amendment, relating it to grievous bodily harm and severe injury. The amendment says that this also
“includes successive acts of self-harm which cumulatively reach that threshold”.
That is important; it means, rather than just one act, a series of them.
However, I have a question about subsection (10), which states that:
“A provider of an internet service by means of which a communication is sent, transmitted or published is not to be regarded as a person who sends, transmits or publishes it”.
We know from bereaved parents that algorithms have been set up which relay this ghastly, horrible and inciteful material that encourages and instructs. That is completely different from those organisations that are trying to provide support.
I am grateful to Samaritans for all its help with my Private Member’s Bill, and for the briefing that it provided in relation to this amendment. As it points out, over 5,500 people in England and Wales took their own lives in 2021 and self-harm is
“a strong risk factor for future suicide”.
Interestingly, two-thirds of those taking part in a Samaritans research project said that
“online forums and advice were helpful to them”.
It is important that there is clarity around providing support and not encouraging and goading people into activity which makes their self-harming worse and drags them down to eventually ending their own lives. Three-quarters of people who took part in that Samaritans research said that they had
“harmed themselves more severely after viewing self-harm content online”.
It is difficult to know exactly where this offence sits and whether it is sufficiently narrowly drawn.
I am grateful to the Minister for arranging for me to meet the Bill team to discuss this amendment. When I asked how it was going to work, I was somewhat concerned because, as far as I understand it, the mechanism is based on the Suicide Act, as amended, which talks about the offence of encouraging or assisting suicide. The problem as I see it is that, as far as I am aware, there has not been a string of prosecutions following the suicide of many young people. We have met their families and they have been absolutely clear about how their dead child or sibling—whether a child or a young adult—was goaded, pushed and prompted. I recently had experience, outside this House, of a similar situation, which fortunately did not result in a death.
The noble Lord, Lord Allan, has already addressed some of the issues around this, and I would not want the amendment not to be there because we must address this problem. However, if we are to have an offence here, with a threshold that the Government have tried to define, we must understand why, if assisting and encouraging suicide on the internet is already a criminal offence, nothing has happened and there have been no prosecutions.
Why is subsection (10) in there? It seems to negate the whole problem of harmful content being forwarded on through dangerous algorithms. We know that a lot of the people who are mounting this are not in the UK, and therefore will be difficult to catch. It is the onward forwarding through algorithms that increases the volume of messaging to the vulnerable person and drives them further into the downward spiral that they find themselves in—which is perhaps why they originally went to the internet.
I look forward to hearing the Government’s response, and to hearing how this will work.
My Lords, this group relates to communications offences. I will speak in support of Amendment 265, tabled by the noble Lord, Lord Moylan, and in support of his opposition to Clause 160 standing part of the Bill. I also have concerns about Amendments 267AA and 267AB, in the name of the noble Baroness, Lady Kennedy. Having heard her explanation, perhaps she can come back and give clarification regarding some of my concerns.
On Clause 160 and the false communications offence, unlike the noble Lord, Lord Moylan, I want to focus on psychological harm and the challenge this poses for freedom of expression. I know we have debated it before but, in the context of the criminal law, it matters in a different way. It is worth us dwelling on at least some aspects of this.
The offence refers to what is described as causing
“non-trivial psychological or physical harm to a likely audience”.
As I understand it—maybe I want some clarity here—it is not necessary for the person sending the message to have intended to cause harm, yet there is a maximum sentence of 51 weeks in prison, a fine, or both. We need to have the context of a huge cultural shift when we consider the nature of the harm we are talking about.
J.S. Mill’s harm principle has now been expanded, as previously discussed, to include traumatic harm caused by words. Speakers are regularly no-platformed for ideas that we are told cause psychological harm, at universities and more broadly as part of the whole cancel culture discussion. Over the last decade, harm and safety have come no longer to refer just to physical safety; the physical and the psychological have been conflated. Historically, we understood the distinction between physical threats and violence as distinct from speech, however aggressive or incendiary that speech was; we did not say that speech was the same as or interchangeable with bullets or knives or violence—and now we do. I want us to at least pause here.
What counts as psychological harm is not a settled question. The worry is that we have an inability to ascertain objectively what psychological harm has occurred. This will inevitably lead to endless interpretation controversies and/or subjective claims-making, at least some of which could be in bad faith. There is no median with respect to how humans view or experience controversial content. There are wildly divergent sensibilities about what is psychologically harmful. The social media lawyer Graham Smith made a really good point when he said that speech is not a physical risk,
“a tripping hazard … a projecting nail … that will foreseeably cause injury … Speech is nuanced, subjectively perceived and capable of being reacted to in as many different ways as there are people.”
That is true.
We have seen an example of the potential disputes over what creates psychological harm in a case in the public realm over the past week. The former Culture Secretary, Nadine Dorries, who indeed oversaw much of this Bill in the other place, had her bullying claims against the SNP’s John Nicolson MP overturned by the standards watchdog. Her complaints had previously been upheld by the standards commissioner. John Nicolson tweeted, liked and retweeted offensive and disparaging material about Ms Dorries 168 times over 24 hours—which, as they say, is a bit OTT. He “liked” tweets describing Ms Dorries as grotesque, a “vacuous goon” and much worse. It was no doubt very unpleasant for her and certainly a personalised pile-on—the kind of thing the noble Baroness, Lady Kennedy, just talked about—and Ms Dorries would say it was psychologically harmful. But her complaint was overturned by new evidence that led to the bullying claim being turned down. What was this evidence? Ms Dorries herself was a frequent and aggressive tweeter. So, somebody is a recipient of something they say causes them psychological harm, and it has now been said that it does not matter because they are the kind of person who causes psychological harm to other people. My concern about turning this into a criminal offence is that the courts will be full of those kinds of arguments, which I do not think we want.
I will follow up in writing on that point.
Before I conclude, I will mention briefly the further government amendments in my name, which make technical and consequential amendments to ensure that the communications offences, including the self-harm offence, have the appropriate territorial extent. They also set out the respective penalties for the communications offences in Northern Ireland, alongside a minor adjustment to the epilepsy trolling offence, to ensure that its description is more accurate.
I hope that noble Lords will agree that the new criminal laws that we will make through this Bill are a marked improvement on the status quo. I hope that they will continue to support the government amendments. I express my gratitude to the Law Commission and to all noble Lords—
Just before the Minister sits down—I assume that he has finished his brief on the self-harm amendments; I have been waiting—I have two questions relating to what he said. First, if I heard him right, he said that the person forwarding on is also committing an offence. Does that also apply to those who set up algorithms that disseminate, as opposed to one individual forwarding on to another individual? Those are two very different scenarios. We can see how one individual forwarding to another could be quite targeted and malicious, and we can see how disseminating through an algorithm could have very widespread harms across a lot of people in a lot of different groups—all types of groups—but I am not clear from what he said that that has been caught in his wording.
Secondly—I will ask both questions while I can—I asked the Minister previously why there have been no prosecutions under the Suicide Act. I understood from officials that this amendment creating an offence was to reflect the Suicide Act and that suicide was not included in the Bill because it was already covered as an offence by the Suicide Act. Yet there have been no prosecutions and we have had deaths, so I do not quite understand why I have not had an answer to that.
I will have to write on the second point to try to set that out in further detail. On the question of algorithms, the brief answer is no, algorithms would not be covered in the way a person forwarding on a communication is covered unless the algorithm has been developed with the intention of causing serious self-harm; it is the intention that is part of the test. If somebody creates an algorithm intending people to self-harm, that could be captured, but if it is an algorithm generally passing it on without that specific intention, it may not be. I am happy to write to the noble Baroness further on this, because it is a good question but quite a technical one.
It needs to be addressed, because these very small websites already alluded to are providing some extremely nasty stuff. They are not providing support to people and helping decrease the amount of harm to those self-harming but seem to be enjoying the spectacle of it. We need to differentiate and make sure that we do not inadvertently let one group get away with disseminating very harmful material simply because it has a small website somewhere else. I hope that will be included in the Minister’s letter; I do not expect him to reply now.
Some of us are slightly disappointed that my noble friend did not respond to my point on the interaction of Clause 160 with the illegal content duty. Essentially, what appears to be creating a criminal offence could simply be a channel for hyperactive censorship on the part of the platforms to prevent the criminal offence taking place. He has not explained that interaction. He may say that there is no interaction and that we would not expect the platforms to take any action against offences under Clause 160, or that we expect a large amount of action, but nothing was said.
Online Safety Bill Debate
Baroness Finlay of Llandaff (Crossbench - Life peer)
(1 year, 3 months ago)
Lords Chamber
My Lords, I am most grateful to the noble Lord, Lord Clement-Jones, for tabling the amendment. If I had been quicker, I would have added my name to it, because he may—I use the word “may” advisedly, because I am not sure—have identified quite a serious gap in terms of future-proofing. As far as I understand it, in a somewhat naive way, the amendment probes whether there is a gap between provider-generated content and user-generated content and whether provider-generated content could lead to a whole lot of ghastly stuff on the metaverse without any way of tackling it because it is deemed to have fallen outside the scope of the Bill.
I am grateful to Carnegie UK for having tried to talk me through this—it is pretty complicated. As a specific example, I understand that a “Decentraland” avatar pops up on gaming sites, and it is useful because it warns you about the dangers of gambling and what it can lead to. But then there is the problem about the backdrop to this avatar: at the moment, it seems to be against gambling, but you can see how those who have an interest in gambling would be quite happy to have the avatar look pretty hideous but have a backdrop of a really enticing casino with lots of lights and people streaming in, or whatever. I am not sure where that would fit, because it seems that this type of content would be provider-generated. When it comes to the metaverse and these new ways of interacting with 3D immersion, I am not clear that we have adequately caught within the Bill some of these potentially dangerous applications. So I hope that the Minister will be able to clarify it for us today and, if not, possibly to write between now and the next time that we debate this, because I have an amendment on future-proofing, but it is in a subsequent group.
My Lords, I am interested to hear what the Minister says, but could he also explain to the House the difference in status of this sort of material in Part 5 versus Part 3? I believe that the Government brought in a lot of amendments that sorted it out and that many of us hoped were for the entire Bill, although we discovered, somewhat to our surprise, that they were only in Part 5. I would be interested if the Minister could expand on that.
Online Safety Bill Debate
Baroness Finlay of Llandaff (Crossbench - Life peer)
(1 year, 3 months ago)
Lords Chamber
My Lords, I shall speak to my Amendment 275A in this group. It would place a duty on Ofcom to report annually on areas where our legal codes need clarification and revision to remain up to date as new technologies emerge—and that is to cover technologies, some of which we have not even thought of yet.
Government Amendments 206 and 209 revealed the need for an amendment to the Bill and how it would operate, as they clarify that reference to pornographic content in the Bill includes content created by a bot. However, emerging technologies will need constant scrutiny.
As the noble Lord, Lord Clement-Jones, asked, what about provider content, which forms the background to the user interaction and may include many harms? For example, would a game backdrop that includes anti-Semitic slurs, a concentration camp, a sex shop or a Ku Klux Klan rally be caught by the Bill?
The Minister confirmed that “content” refers to anything communicated by means of an internet service and that “encounter” includes any content that individuals read, view, hear or otherwise experience, making providers liable for the content that they publish. Is this liability under civil, regulatory or criminal law?
As Schedule 1 goes to some lengths to exempt some service-to-provider content, can the Minister provide, for the record, chapter and verse, as requested by the noble Lord, Lord Clement-Jones, on provider liability and, in particular, confirm whether such content would be dealt with by the Part 3 duties under the online safety regime, or whether users would have to rely on similar law, pursuing claims at their own expense through the courts, or the police would carry the burden of further enforcement?
Last week, the Minister confirmed that “functionality” captures any feature enabling interactions of any description between service users, but are avatars or objects created by the provider of a service, not by an individual user, in scope and therefore subject to risk assessments and their mitigation requirements? If so, will these functionalities also be added to user empowerment tools, enabling users to opt out of exposure to them, or will they be caught only by child safety duties? Are environments provided by a service provider, such as a backdrop to immersive environments, in scope through the definition of “functionality”, “content” or both? When this is provider content and not user-generated content, will this still hold true?
All this points to a deeper issue. Internet services have become more complex and vivid, with extremely realistic avatars and objects indistinguishable from people and objects in the real world. This amendment avoids focusing on negatives associated with AI and new technologies but tries to ensure that the online world is as safe as the offline world should be. It is worth noting that Interpol is already investigating how to deal with criminals in the metaverse and anticipating crimes against children, data theft, money laundering, fraud and counterfeit, ransomware, phishing, sexual assault and harassment, among other things. Many of these behaviours operate in grey areas of the law where it is not clear whether legal definitions extend to the metaverse.
Ofcom has an enormous task ahead, but it is best placed to consider the law’s relationship to new technological developments and to inform Parliament. Updating our laws through the mechanisms proposed in Amendment 275A will provide clarity to the courts, judges, police and prosecution service. I urge the Minister to provide as full an answer as possible to the many questions I have posed. I am grateful to him for all the work he has been doing. If he cannot accept my amendment as worded, will he provide an assurance that he will return to this with a government amendment at Third Reading?
My Lords, I will speak to Amendment 191A in my name. I also support Amendment 186A in the name of the noble Lord, Lord Moylan, Amendment 253 in the name of the noble Lord, Lord Clement-Jones, and Amendment 275A in the name of my noble friend Lady Finlay. I hope that my words will provide a certain level of reassurance to the noble Lord, Lord Moylan.
In Committee and on Report, the question was raised as to how to support the coronial system with information, education and professional development to keep pace with the impact of the fast-changing digital world. I very much welcome the Chief Coroner’s commitment to professional development for coroners but, as the Minister said, this is subject to funding. While it is right that the duty falls to the Chief Coroner to honour the independence and expert knowledge associated with his roles, this amendment seeks to support his duties with written guidance from Ofcom, which has no such funding issue since its work will be supported by a levy on regulated companies—a levy that I argue could usefully and desirably contribute to the new duties that benefit coroners and bereaved parents.
The role of a coroner is fundamental. They must know what preliminary questions to ask and how to triage the possibility that a child’s digital life is relevant. They must know that Ofcom is there as a resource and ally, and how to activate its powers and support. They must know what to ask Ofcom for, how to analyse the information they receive and what follow-up questions might be needed. Importantly, they must feel confident in making a determination and in describing the way in which the use of a regulated service has contributed to a child’s death, where that is indeed their finding. They must be able to identify lessons that might prevent similar tragedies happening in the future. Moreover, much of the research and information that Ofcom will gather in the course of its other duties could be usefully directed at coroners. All Amendment 191A would do is add to the list of reports that Ofcom has to produce with these issues in mind. In doing so, it would do the Chief Coroner the service of contributing to his own needs and plans for professional development.
I turn to Amendment 186A in the name of the noble Lord, Lord Moylan, who makes a very significant point in bringing it forward. Enormous effort goes into creating an aura of exceptionality for the tech sector, allowing it to avoid laws and regulations that routinely apply to other sectors. These are businesses that benefit from our laws, such as intellectual property law or international tax law. However, they have negotiated a privileged position in which they have privatised the benefits of our attention and data while outsourcing most of the costs of their service to the public purse or, indeed, their users.
Terms and conditions are a way in which a company enters into a clear agreement with its users, who then “pay” for access with their attention and their data: two of the most valuable commodities in today’s digital society. I am very sympathetic to the noble Lord’s wish to move people, both adults and children, out from behind the series of euphemisms that the sector employs—“users”, “community members”, “creators” or “participants”—and to acknowledge their status as consumers, who have rights and, in particular, the right to expect the product they use to be safe and to see providers held accountable if it is not. I join the noble Lord in noting that there are now six weeks before Third Reading; this is a very valuable suggestion that is worthy of government attention.
Amendment 253 in the name of the noble Lord, Lord Clement-Jones, puts forward a very strong recommendation of the pre-legislative committee. We were a bit bewildered and surprised that it was not taken up at the time, so I will be interested to hear what argument the Minister makes to exclude it, if indeed he does so. I say to him that I have already experienced the frustration of being bumped from one regulator to another. Although my time as an individual, or the organisational time of a charity, is a minor part of the picture we are discussing, being passed about in this way is costly in time and resources; my larger concern is the time, resources and potential effectiveness of the regulatory regime itself. However well oiled and well funded the regulatory regime of the Online Safety Bill is, I do not think it will be as well oiled and well funded as those it seeks to regulate.
I make it clear that I accept the arguments against creating a super-regulator or slowing down or confusing existing regulators, which each have their own responsibilities, but I feel that the noble Lord, Lord Clement-Jones, has taken more of a belt-and-braces approach than a wholesale realignment of regulators. He simply seeks to make it explicit that regulators can, should and do have a legal basis on which to work, singly or together, when it suits them. As I indicated earlier, I cannot quite understand why that would not be desirable.
Finally, in what is truly a miscellaneous group, I will refer to the amendment in the name of my noble friend Lady Finlay. I support the intent of this amendment and sincerely hope that the Minister will be able to reassure us that this is already in the Bill and will be done by Ofcom under one duty or another. I hope that he will be able to point to something that includes this. I thank my noble friend for raising it, as it harks back to an amendment in Committee in my name that sought to establish that content deemed harmful in one format would be deemed harmful in all formats, including synthetic formats such as AI, the metaverse or augmented reality. As my noble friend alluded to, it also speaks to the debate we had last week in relation to the amendment from the noble Lord, Lord Clement-Jones, about provider content in the metaverse.
I am most grateful to the Minister; perhaps I could just check something he said. There was a great deal of detail and I was trying to capture it. On the question of harms to children, we all understand that harms to children are viewed more extensively than harms to others, but I wondered: what counts as an unregulated service? The Minister was talking about regulated services. What happens if there is machine-generated content that is generated not by any user but by code which, once written, randomly incites problematic behaviours?
I am happy to provide further detail in writing and to reiterate the points I have made, as it is rather technical. Content that providers of user-to-user services themselves publish is not regulated by the Bill, because providers are liable for the content they publish on their own services. Of course, that does not apply to pornography, which we know poses a particular risk to children online and is regulated through Part 5 of the Bill. I will set out in writing, I hope more clearly, what is in scope, to reassure the noble Baroness about the way the Bill addresses the harms that she has rightly raised.
Online Safety Bill Debate
Baroness Finlay of Llandaff (Crossbench - Life peer)
(1 year, 3 months ago)
Lords Chamber
My Lords, I shall speak to Amendment 245. I would like to thank my noble friend the Minister, and also the Minister on leave, for the conversations that I have had with them about this amendment and related issues. As we have already heard, the platform categorisation is extremely important. So far, much of it is unknown, including which sites are actually going to be in which categories. For example, we have not yet seen any proposed secondary regulations. As my noble friend has just outlined, special duties apply, especially for those sites within category 1—user empowerment in particular, but also other duties relating to content and fraudulent advertisements.
Clause 85 and Schedule 11 set out the thresholds for determining which sites will be in category 1, category 2A or category 2B. I am very mindful of the exhortation of the noble Lord, Lord Stevenson, about being brief, but it is amazing how much you have to say about one word to explain this amendment. This amendment proposes to change an “and” to an “or” in relation to determining which sites would fall within category 1. It would move from a test of size “and” functionality to a test of size “or” functionality. This would give Ofcom more flexibility to decide which platforms really need category 1 designation. Category 1 should not be decided just on size; it should also be possible to determine it on the basis of functionality.
Functionality is defined in the Bill in Clause 208. We will get to those amendments shortly, but there is no doubt from what the Government have already conceded, or agreed with those of us who have been campaigning passionately on the Bill for a number of years, that functionality can make a platform harmful. It is perfectly possible to have small platforms that both carry highly harmful content and themselves become harmful in the way that they are designed. We have heard many examples and I will not detain the House with them, but I draw attention to two particular sites which capture how broad this is. The perpetrators of offline hate crimes are often linked to these small platforms. For example, the perpetrator of the 2018 Tree of Life synagogue mass shooting had an online presence on the right-wing extremist social network Gab. In the UK, Jake Davison, the self-proclaimed incel who killed five people in Plymouth in 2021, frequented smaller incel forums after he was banned from Reddit in the days leading up to the mass shooting.
I also want to share with noble Lords an email that I received just this week from a family who had been to see their Member of Parliament, Matt Rodda MP, and also the noble Baroness, Lady Kidron, who I know is very regretful that she cannot be here today. I thank Victoria and Jean Eustace for sharing the story of their sister and daughter. Victoria wrote: “I am writing to you regarding the Online Safety Bill, as my family and I are concerned it will not sufficiently protect vulnerable adults from harm. My sister, Zoe Lyalle, killed herself on 26 May 2020, having been pointed towards a method using an online forum called Sanctioned Suicide. Zoe was 18 years old at the time of her death and as such technically an adult, but she was autistic, so she was emotionally less mature than many 18 year-olds. She found it difficult to critically analyse written content”. She says that “The forum in question is not large and states on its face that it does not encourage suicide, although its content does just that”. I was even more shocked by the next part: “Since Zoe’s death, we have accessed her email account. The forum continues to email Zoe, providing her with updates on content she may have missed while away from the site, as well as requesting donations. One recent email included a link to a thread on the forum containing tips on how best to use the precise method that Zoe had employed”.
In her note to me, the Minister on leave said that she wanted to catch some of the platforms we are talking about with outsized influence. In my reply, I said that those sites on which people are encouraged to take their own lives or become radicalised, and so take the harms they are seeing online into the real world, undoubtedly exercise influence and should be tackled.
It is also perfectly possible for us to have large but safe platforms. I know that my noble friend Lord Moylan may want to discuss this in relation to sites that he has talked about already on this Bill. The risk of the current drafting is a flight of users from these large platforms, newly categorised as category 1, to the small, non-category 1 platforms. What if a platform becomes extremely harmful very quickly? How will it be recategorised speedily but fairly, and with parliamentary oversight?
The Government have run a variety of arguments as to why the “and” in the Bill should not become an “or”. They say that it creates legal uncertainty. Every Bill creates legal uncertainty; that is why we have an army of extremely highly paid lawyers, not just in this country but around the world. They say that what we are talking about is broader than illegal content or content related to children’s safety, but they have already accepted an earlier amendment on safety by design and, in subsections (10) to (12) of Clause 12, that specific extra protections should be available for content related to
“suicide or an act of deliberate self-injury, or … an eating disorder or behaviours associated with an eating disorder”
or abusive content relating to race, religion, sex, sexual orientation, disability or gender reassignment and that:
“Content is within this subsection if it incites hatred against people”.
The Government have therefore already breached their own limit of addressing only content that is illegal or related to child safety duties. In fact, they have agreed that that content should have enhanced triple-shield protection.
The Government have also said that they want to avoid burdens on small but low-harm platforms. I agree with that, but with an “or” it would be perfectly possible for Ofcom to decide by looking at size or functionality and to exclude those smaller platforms that do not present the harm we all care about. The Minister may also offer me a review of categorisation; however, it is a review of the tiers of categorisation and not the sites within the categories, which I think many of us will have views on over the years.
I come to what we should do on this final day of Report. I am very thankful to those with whom I have had many conversations on this, but there is a fundamental difference of opinion in this House on these matters. We will talk about functionality shortly and I am mindful of the pre-legislative scrutiny committee’s recommendation that this legislation should adopt
“a more nuanced approach, based not just on size and high-level functionality, but factors such as risk, reach, user base, safety performance, and business model”.
There should be other factors. Ofcom should have the ability to decide whether it takes one factor or another, rather than requiring every threshold in a series to be passed, so as to give it the maximum flexibility. I will listen very carefully to what my noble friend the Minister and other noble Lords say, but at this moment I intend to test the opinion of the House on this amendment.
My Lords, I strongly support Amendment 245. The noble Baroness, Lady Morgan of Cotes, has explained the nub of the problem we are facing—that size and functionality are quite separate. You can have large sites that perform a major social function and are extremely useful across society. Conversely, you can have a small site focused on being very harmful to a small group of people. The problem is that, without the flexibility for Ofcom to determine how the risk assessment should be conducted, the Bill would lock it into leaving these small, very harmful platforms able to pursue their ever more harmful activities almost out of sight. It does nothing to make sure that their risk assessments are appropriate.
Online Safety Bill Debate
Baroness Finlay of Llandaff (Crossbench - Life peer)
(1 year, 1 month ago)
Lords Chamber
My Lords, I shall contribute briefly from these Benches because it is important for us all to be aware of just how much people outside have been watching the progress of the Bill. Indeed, today in the Public Gallery we have some bereaved parents who have suffered as a result of content on the internet. We have been very privileged, all the way through the Bill, to be able to hear from people who have been victims and who have genuinely wanted to improve things for others and avoid further problems. The collaborative spirit with which everyone has approached the Bill has, we hope, achieved those goals for everyone.
We all need to wish the noble Lord, Lord Grade, good luck and all the best as he takes on an incredibly important scrutiny role. I am sure that in years to come we will be looking at post-legislative scrutiny. In the meantime, I shall not name everybody, apart from putting the Minister in prime position; I thank him and everyone for having worked so hard, because I hear from outside that that work is greatly appreciated.
My Lords, I too thank the Minister for his swift and concise introduction, which very carefully covered the ground without raising any issues that we have to respond to directly. I am grateful for that as well.
The noble Lord, Lord Clement-Jones, was his usual self. The only thing that I missed, of course, was the quotation that I was sure he was going to give from the pre-legislative scrutiny report on the Bill, which has been his constant prompt. I also think that the noble Baroness, Lady Finlay, was very right to remind us of those outside the House who we must remember as we reach the end of this stage.
Strangely, although we are at the momentous point of allowing this Bill to go forward for Royal Assent, I find that there is actually very little that needs to be said. In fact, everything has been said by many people over the period; trying to make any additional points would be meretricious persiflage. So I will make two brief points to wind up this debate.
First, is it not odd to reflect on the fact that this historic Parliament, with all our archaic rules and traditions, has the capacity to deal with a Bill that is regulating a technology which most of us have difficulty in comprehending, let alone keeping up with? However, we have done a very good job and, as a result, I echo the words that have already been said; I think the internet will now be a much safer place for children to enjoy and explore, and the public interest will be well served by this Bill, even though we accept that it is likely to be only the first of a number of Bills that will be needed in the years to come.
Secondly, I have been reflecting on the offer I made to the Government at Second Reading, challenging them to work together with the whole House to get the best Bill that we could out of what the Commons had presented to us. That of course could have turned out to be a slightly pointless gesture if nobody had responded positively—but they did. I particularly thank the Minister and the Bill team for rising to the challenge. There were problems initially, but we got there in the end.
More widely, there was, I know, a worry that committing to working together would stifle debate and somehow limit our crucial role of scrutiny. But actually I think it had the opposite effect. Some of the debates we had in Committee, from across the House, were of the highest standard, and opened up issues which needed to be resolved. People listened to each other and responded as the debate progressed. The discussion extended to the other place. It is very good to see Sir Jeremy Wright here; he has played a considerable role in resolving the final points.
It will not work for all Bills, but if the politics can be ignored, or at least put aside, it seems to make it easier to get at the issues that need to be debated in the round. In suggesting this approach, I think we may have found a way of getting the best out of our House—something that does not always occur. I hope that lesson can be listened to by all groups and parties.
For myself, participating in this Bill and the pre-legislative scrutiny committee which preceded it has been a terrific experience. Sadly, a lot of people who contributed to our discussions over that period cannot be here today, but I hope they read this speech in Hansard, because I want to end by thanking them, and those here today, for being part of this whole process. We support the amendments before the House today and wish good luck to the noble Lord, Lord Grade.