Online Safety Act 2023: Repeal Debate
Westminster Hall
Tom Collins (Worcester) (Lab)
It is a pleasure to serve under your chairship, Mr Pritchard.
At its birth, the internet was envisaged as a great advancement in a free society: decentralised, crowdsourced and open, it would share knowledge across humanity. As it grew, every one of us would own a platform and our voice. Of course, since then bandwidth has increased massively, which means that we now experience a rich variety of media. Storage and compute have increased by many orders of magnitude, which has created the power of big data, and generative capabilities have emerged quite recently, creating a whole new virtual world. Services no longer simply route us to what we were searching for but offer us personalised menus of rich media, some from human sources and some generated to entertain or meet demands.
We are now just starting to recognise the alarming trends that we are discussing today. Such rich media and content has become increasingly harmful. That compute, storage and big data power is being used to collect, predict and influence our most private values, preferences and behaviours. Generative AI is immersing us in a world of reconstituted news, custom facts and bots posing as people. It increasingly feels like a platform now owns every one of us and our voice.
Harms are dangerously impacting our young people. Research from the Centre for Countering Digital Hate illustrates some of the problems. On YouTube, the “Next Video” algorithm was found to be recommending eating disorder content to the account of a UK-based 13-year-old female. In just a few minutes, the account was exposed to material promoting anorexia and weight loss, and more than half the other recommended videos were for content on eating disorders or weight loss.
On TikTok, new teen accounts were found to have been recommended self-harm and eating disorder content within minutes of scrolling the “For You” feed. Suicide content appeared within two and a half minutes, and eating disorder content within eight. Accounts created with phrases such as “lose weight” received three times as many of these videos as standard teen accounts, and 12 times as many self-harm videos. Those are not isolated incidents, and they show the scale and speed at which harmful material can spiral into exponential immersion in worlds of danger for young people.
On X, formerly known as Twitter—and I give a trigger warning for anybody who has been affected by the absolutely appalling Bondi Beach Hanukkah attack—following the Manchester synagogue attack, violent antisemitic messages celebrating and calling for further violence were posted and left live for at least a week. ChatGPT has been shown to produce dangerous advice within minutes of account creation, including guidance on self-harm, restrictive diets and substance misuse.
I am grateful to hon. Friends for raising the topic of pornography. I had the immense privilege of being at an event with a room full of men who spoke openly and vulnerably about their experiences with pornography: how it affected their sex lives, their intimacy with their partners or wives, their dynamics of power and respect, and how it infused all their relationships in daily life. They said things such as, “We want to see it, but we don’t want to want to see it.” If adult men—it seems from this experience, at least, perhaps the majority of adult men—are finding it that hard to deal with, how can we begin to comprehend the impact it is having on our children who come across it accidentally?
This can all feel too big to deal with—too big to tackle. It feels immense and almost impossible to comprehend and address. Yet, to some, the Online Safety Act feels like a sledgehammer cracking a nut. I would say it is a sledgehammer cracking a deeply poisonous pill in a veritable chemistry lab of other psychoactive substances that the sledgehammer completely misses and will always be too slow and inaccurate to hit. We must keep it, but we must do better.
As an engineer, I am very aware that since the industrial revolution, when physical machines suddenly became immensely more powerful and complex, a whole world of not just regulations but technical standards has been built. It infuses our daily lives, and we can barely touch an object in this room that has not been built and verified to some sort of standard—a British, European or global ISO standard—for safety. We should be ready to reflect that model in the digital world. A product can be safe or unsafe. We can validate it to be safe, design it to be safe, and set criteria that let us prove it—we have shown that in our physical world since the industrial revolution. So how do we now begin to put away the big, blunt instrument of regulation when the problem seems so big and insurmountable?
John Slinger (Rugby) (Lab)
Ofcom officials came before the Speaker’s Conference, of which I am a member, so I declare that interest. They spoke about section 100 of the Act, which gives Ofcom the power to request certain types of information on how, for example, the recommender systems work on the companies’ algorithms. Unfortunately, they said that could be “complicated and challenging to do”, but one thing they spoke about very convincingly was that they want to require—in fact, they can require—those companies to put information, particularly about the algorithms, in the public domain to help researchers. That could really help with the point my hon. Friend is making about creating regulations that improve safety for our population.
Tom Collins
I thank my hon. Friend for his remark. He is entirely right. In my own experience of engineering safety-critical products, it was incumbent upon us to be fully open about everything we had done with those regulating and certifying our products for approval. We had numerous patents on our technology, which was new and emerging and had immense potential and value, yet we were utterly open with those notified bodies to ensure that our products were safe.
Similarly, I was fortunate enough to be able to convene industry to share the key safety insights that we were discovering early on to make sure that no mistake was ever repeated, and that the whole industry was able to innovate and develop in a safe way. I thank my hon. Friend the Member for Rugby (John Slinger) for his comments, and I strongly agree that there is no excuse for a lack of openness when it comes to safety.
How do we move forward? The first step is to start breaking down the problem. I have found it helpful to describe it in four broad categories. The first is hazards that apply to the individual simply through exposure: content such as pornography, violence and images of or about abuse. Then there are hazards that apply to an individual by virtue of interaction, such as addictive user interfaces or personified GPTs. We cannot begin to comprehend the potential psychological harms that could come to human beings when we start to promote attachment with machines. There is no way we can have evidence to inform how safe or harmful that would be, but I suggest that all the knowledge that exists in the psychology and psychiatric communities would probably point to it being extremely risky and dangerous.
We have discussed recommendation algorithms at length. There are also societal harms that affect us collectively by exposure. These harms could be misinformation or echo chambers, for example. The echo chambers of opinion have now expanded to become echo chambers of reality in which people’s worldviews are increasingly being informed by what they see in those spaces, which are highly customised to their existing biases.
Tom Hayes (Bournemouth East) (Lab)
I have met constituents to understand their concerns and ambitions in relation to online safety legislation. There is a clear need to balance the protection of vulnerable users against serious online harms with the need to protect lawful speech as we pragmatically review and implement the Act.
My hon. Friend talks about equipping our younger people, in particular, with the skills to scrutinise what is real or fake. Does he agree that, although we have online safety within the national curriculum, we need to support our teachers to provide consistent teaching in schools across our country so that our children have the skills to think critically about online safety, in the same way as they do about road safety, relationships or consent? [Interruption.]
Mr Pritchard (in the Chair)
Before we continue, could I ask that everybody has their phone on silent, please?
Tom Collins
Thank you, Mr Pritchard. I agree with my hon. Friend the Member for Bournemouth East (Tom Hayes). I was fortunate enough to meet the Worcestershire youth cabinet, which is based in my constituency. I was struck that one of its members’ main concerns was their online safety. I was ready for them to ask for more support in navigating the online world, but that is not what they asked for. They said, “Please do not try to support us any more; support our adults to support us. We have trusted adults, parents and teachers, and we want to work with them to navigate this journey. Please help them so that they can help us.” I thank my hon. Friend for his excellent point.
My hon. Friend is making an excellent speech that gets to the heart of some of the tensions. However, he seems to be leaning quite strongly into how the algorithms are self-learning and catch on to what people share organically, which they double down on to commercialise the content. Does he accept that some widely used platforms are not just using an algorithm but are deliberately suppressing mainstream opinion and fact in order to amplify false information and disinformation, and that the people benefiting are those who have malign interests in our country?
Tom Collins
Absolutely. My hon. Friend is right. All those algorithms now have hidden interests, which are sometimes just to increase use, but I think we all strongly suspect that they may stray into political agendas. It is remarkable how powerful that part of the online world is. My personal view is that it is not dissimilar to the R number during covid. If a person sees diverse enough content, their worldview will have enough overlap with other people that it will tend to converge. In the old days, “The Six O’Clock News”, or the news on the radio, provided us with shared content that we all heard, whether we agreed with it or not. That anchored us to a shared narrative.
We are now increasingly in echo chambers of reality where we are getting information that purports to be news and reactions that purport to be from human beings in our communities, both of which reinforce certain views. It is increasingly possible that the R number will become greater than one, and our worldviews will slowly diverge further and further. Such an experiment has never been carried out on a society, but it strikes me that it could be extremely harmful.
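To make that intuition concrete, a toy sketch along the following lines (an illustration only; the update rule, rate and gain are assumed, not a validated model) shows the contrast: when most of what people see is a shared narrative, views pull together, and when most of it reinforces each person's existing leaning, the spread of views grows.

```python
# Toy sketch of the "R number" analogy for worldviews. Each person's view is a
# number; a shared narrative pulls views towards the average, while a
# personalised feed amplifies each person's existing leaning. All parameters
# are illustrative assumptions.
import random

def view_spread(shared_weight, people=500, steps=200, rate=0.05, gain=1.1):
    random.seed(0)
    views = [random.uniform(-1, 1) for _ in range(people)]
    for _ in range(steps):
        anchor = sum(views) / len(views)                          # the shared narrative
        views = [v + rate * (shared_weight * (anchor - v)         # pull towards the shared story
                             + (1 - shared_weight) * gain * v)    # feed reinforces the existing leaning
                 for v in views]
    return max(views) - min(views)

print("mostly shared news :", round(view_spread(shared_weight=0.9), 2))
print("mostly personalised:", round(view_spread(shared_weight=0.1), 2))
```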
While we are exploring this theme, I would like to point to the opposite possibility. In Taiwan, trust in the Government was at 9% when the digital Minister took office. They created a digital platform that reversed the algorithm so that, instead of prioritising content based on engagement—a good proxy for how polarising or divisive something is—it prioritised how strongly content resonated with both sides of the political divide. The stronger a sentiment was in bridging between those two extremes, the more it was prioritised.
Instead of people competing to become more and more extreme, to play to their own audiences, they competed to express sentiments and make statements that bridged the divide more and more. In the end, as the system matured, the Government were able to start to say things like, “Once a sentiment receives 85% agreement and approval, the Government will take it on as a goal. We will work out how to get there, but we will take it as a goal that the public say we should be shooting for.” By the end of the project, public trust in the Government was at 70%. Algorithms are powerful—they can be powerful for good or for ill. What we need to make sure is that they are safe for us as a society. That should be the minimum standard.
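For illustration, a minimal sketch of a bridging-style ranker of that kind might look like the following; the statements, vote counts and group split are hypothetical, and this is not the Taiwanese system itself, only the general idea of scoring content by its least supportive side rather than by raw engagement.

```python
# Minimal sketch of a "bridging" ranker: a statement scores highly only when it
# is endorsed by both sides of a divide, unlike an engagement score, which
# rewards anything that provokes a reaction. The data below is made up.
from statistics import mean

def bridging_score(votes_a, votes_b):
    """Votes are 1 (agree) or 0 (disagree); the score is capped by the less
    supportive group, so only cross-divide consensus ranks highly."""
    return min(mean(votes_a), mean(votes_b))

def engagement_score(votes_a, votes_b):
    """The usual proxy: total reactions, regardless of who is reacting."""
    return len(votes_a) + len(votes_b)

statements = {
    "divisive": ([1] * 90 + [0] * 10, [1] * 5 + [0] * 95),   # loved by A, rejected by B
    "bridging": ([1] * 70 + [0] * 30, [1] * 65 + [0] * 35),  # broadly endorsed by both
}

for name, (a, b) in statements.items():
    print(name, "bridging:", round(bridging_score(a, b), 2),
          "engagement:", engagement_score(a, b))
# Both statements have identical engagement, but only the bridging statement
# scores highly on the bridging ranker.
```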
Finally, we can imagine harms that apply at a societal level but come through interaction. That comes, I would say, when we start to treat machines as if they are members of our society—as people. When I first started exploring this issue, I thought that we had not seen that yet. Then I realised that we have: bots on social media and fake accounts that we do not know are not human beings. They are not verified as human beings, yet we cannot help but start to believe and trust what we see. I would say that it is only a matter of time before these bots become more and more sophisticated and with more and more of an agenda—more able to build relationships with us and to influence us even more deeply. That is a dangerous threshold, which points to the need for us to deal with the issue in a sophisticated way.
What next? It is critical that we first start to develop tools—technically speaking, these are models—that classify and quantify these hazards to individual people and to us as a society, so that we can understand what is hazardous and what is not. Then, based on that, we can start to build tools and models that allow us to either validate products as safe—they should, I agree, be safe by design—or provide protective features.
Already, some companies are developing protection algorithms that can detect content that is illegal or hazardous in different ways and provide a trigger to an operating system to, for example, mask that by making it blurred or opaque, either at the screen or the camera level. Such tools are rapidly becoming more and more capable, but they are not being deployed. At the moment, there is very little incentive for them to be deployed.
If, for example, we were to standardise interfaces or sockets of some kind in the software environment, so that these protective tools could be plugged into operating systems or back ends, we could create a market for developing ever more accurate and capable software.
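As a sketch of what such a standardised socket might look like, the following outline uses entirely hypothetical names and a deliberately trivial keyword rule; a real filter would rely on proper detection models and a certified interface, but the shape of the plug-in contract is the point.

```python
# Hypothetical sketch of a standard "socket" for protective tools: the platform
# or operating system calls a fixed interface, and any certified filter that
# implements it can be plugged in. Names and the keyword rule are illustrative.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Assessment:
    category: str      # e.g. "none", "self-harm", "eating-disorder"
    confidence: float  # 0.0 to 1.0
    action: str        # "allow", "blur" or "block"

class ProtectiveFilter(Protocol):
    """The interface a certified protection module would have to implement."""
    def assess_text(self, text: str) -> Assessment: ...

class KeywordFilter:
    """A deliberately trivial reference implementation for this sketch."""
    def assess_text(self, text: str) -> Assessment:
        risky = any(term in text.lower() for term in ("thinspo", "lose weight fast"))
        return Assessment("eating-disorder" if risky else "none",
                          0.9 if risky else 0.0,
                          "blur" if risky else "allow")

def render(post: str, filt: ProtectiveFilter) -> str:
    """The operating system or app decides how to display content, for example by blurring it."""
    return "[blurred]" if filt.assess_text(post).action == "blur" else post

print(render("Check out this recipe", KeywordFilter()))
print(render("thinspo tips inside", KeywordFilter()))
```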
In the world of physical safety, we use a principle called “state of the art”. In contrast to how we all might understand that term, it does not mean the cutting edge of technology; rather, it means safety features that are common enough that they should be adopted as standard and we should expect to have them. The automotive industry is a great example. Perhaps the easiest feature for me to point to is anti-lock brakes, which started out as a luxury feature in high-end vehicles, but rolled out into more and more cars as they became more affordable and accessible. Now they come as standard on all cars. A car without anti-lock brakes could not be sold because it would not meet the state of the art.
If we apply a similar principle to online protection software, tech companies with capable protections would have a guaranteed market. The digital product manufacturers or service providers would have to keep up; that would drive both innovation and uptake. Such approaches are already practised in industry. They cost the public purse nothing and generate growth, high-value jobs and national capabilities. Making the internet safe in the right way does not close it down; it creates freedoms and opens it up—freedom to trust what we are seeing; freedom to use it without being hurt; and freedom to rely on it without endangering our national security.
There is another parallel. We would not dream of building a balcony without a railing, but if we had built one we would not decide that the only way to make it safe was to declare that the balcony was for use only by adults. It still would not be safe. Adults and children alike would inevitably come to harm and many of our regulations would not allow it: in fact, there must be a railing that reaches a certain height and is able to withstand certain forces, and it must be designed with safety in mind and be maintained. We would have an inspection to make sure it was safe. Someone designing or opening a building with an unprotected, unbarriered balcony could easily expect to go to prison. We have come to expect our built environment to be safe in that way; having been made robustly safe for adults, it is also largely safe for children. If we build good standards and regulation, we can all navigate the digital world safely and freely.
Likewise, we need to build the institutions to ensure fast and dynamic enforcement. For services, there are precedents for good enforcement. We have seen great examples of that when sites have not complied, such as TCP ports for payment systems being turned off instantly. That is a really strong motivation for a website to comply. It is fast, dynamic and robust, and is very quickly reversible, as the TCP port can be turned back on and the website can once again accept payments. We need that kind of fast, dynamic enforcement if we are to keep up with the fast and adaptive world working around us.
On the topic of institutions, I would like to point out—I would not be surprised if my hon. Friend the Member for Rugby (John Slinger) expands on this—that when television and radio came into existence, we built the BBC so that we would have a trusted source among those services. It kept us safe, and it also ended up projecting our influence around the world. We need once again to build or expand the institutions and the infrastructure to provide digital services in our collective interest.
My hon. Friend is making a very good speech; maybe he should consider a career in TED Talks after this. A number of competitor platforms have been tried, such as Bluesky as an alternative to X, but the take-up is not sustained. I wonder whether the monopoly that some of these online platforms have is now so well embedded that people have become attached to them out of habit. As Members, we must all feel the tension at times about whether we should or should not be on some of these platforms.
There is a need for mainstream voices to occupy these spaces to ensure that we do not concede to extremes of any political spectrum, but we are always going to be disadvantaged if the algorithm drives towards those extremes and not to the mainstream. I just test the principle of an online BBC versus whether or not there should be a more level playing field for mainstream content on existing platforms.
Tom Collins
My hon. Friend is, of course, right. If we regulate for safety, we do not need to rely on good actors to displace the bad ones in the ecosystem. At the same time, however, those good actors would have a competitive and valuable role to play, and I do not want to undervalue the currency of trust. Institutions such as the BBC are so robustly trustworthy that they have a unique value to offer, even if we do manage to create a safe ecosystem or market of online services.
I am convening a group of academics to start trying to build the models I discussed as the foundation for technical standards for safe digital products. I invite the Minister to engage the Department in this work. That is vital for the safety of each of us and our children as individuals, and for the security and resilience of our society. I also invite anybody in the technical space of academia or industry exploring some of these models and tools to get in touch with me if they see this debate and are interested.
Only by taking assertive action across all levels of technical, regulatory and legal governance can we ensure the safety of citizens. Only by expanding our institutions can we provide meaningful enforcement and design and build online products, tools and infrastructure. If we do those things, the internet will be more open, secure, private, valuable and accessible to all of us. Good regulation is the key to a safe and open internet.
The Minister
My hon. Friend makes an important point, because freedom of expression is guaranteed in the Act. Although we are regulating to make sure that children and young people are protected online, he is right to suggest that that does not mean we are censoring content for adults. The internet is a place where people can access content if they are age-verified to do so, but it cannot be illegal content. The list of issues in schedule 7 to the Act that I read out at the start of my speech is pretty clear on what someone is not allowed to do online, so any illegal content online still remains illegal. We need to work clearly with the online platforms to make sure that that is not being purveyed through them.
We have seen strong examples of this issue in recent months. If we reflect back to Southport, the public turned to local newspapers—we have discussed this many times before—because they wanted fast and regular but trustworthy news. They turned away from social media channels to get the proper story, and they knew they could trust the local newspaper that they were able to pick up and read. I think the public have a very strong understanding of where we are, but I take the point about people who are not as tech-savvy or are impaired in some way, and so may need further protections. My hon. Friend makes the argument very strongly.
I want to turn to AI chatbots, because they were mentioned in terms of mental health. We are clear that AI must not replace trained professionals. The Government’s 10-year health plan lays foundations for a digital front door for mental health care. Last month, the Secretary of State for Science, Innovation and Technology urged Ofcom to use existing powers to protect children from the potential harms of AI chatbots. She is clear that she is considering what more needs to be done. The Department of Health and Social Care is looking at mental health through the 10-year plan, but the Secretary of State for Science, Innovation and Technology has also been clear that she will not allow AI chatbots to affect young people’s mental health, and will address their development, as mentioned by the Liberal Democrat spokesperson, the hon. Member for Harpenden and Berkhamsted (Victoria Collins).
Let me touch on freedom of expression, because it is important to balance that out. It is on the other side of the shadow Minister’s ledger, and rightly so, because safeguards to protect freedom of expression and privacy are built in throughout the Online Safety Act. Services must consider how to protect users’ rights when applying safety measures, including users’ rights to express themselves freely. Providers do not need to take action on content that is beneficial to children—only against content that poses a risk of harm to children on their services. The Act does not prevent adults from seeking out legal content, and does not restrict people posting legal content that others of opposing views may find offensive. There is no removing of freedom of speech. It is a cornerstone of this Government, and under the Act, platforms have duties to protect freedom of speech. It is written into legislation.
Let me reiterate: the Online Safety Act does not limit freedom of speech. In fact, it protects it. My hon. Friend the Member for Worcester (Tom Collins) was clear when he said in his wonderful speech that making the internet a safe space promotes freedom of speech. Indeed it does, because it allows us to have the confidence that we can use online social media platforms, trust what we are reading and seeing, and know that our children are exposed to age-appropriate content.
I will address age assurance, which was mentioned by the hon. Member for Dewsbury and Batley (Iqbal Mohamed). Ofcom is required to produce a report on the use of age assurance technologies, including the effectiveness of age assurance, due in July 2026—so in seven months' time. That allows sufficient time for these measures to bed in before we consider further action, but the Government continue to monitor the impact of circumvention techniques such as VPNs and the effectiveness of the Act in protecting children. We will not hesitate to go further if necessary, but we are due that report in July 2026, which will be 12 months from the implementation of the measures.
The Liberal Democrat spokesperson asked about reviewing the Act. My previous comments covered some of that, but it is critical that we understand how effective the online safety regime is, and monitoring and evaluating that is key. My Department, Ofcom and the Home Office have developed a framework to monitor the implementation of the Act and evaluate the core outcomes from it.
Tom Collins
The Minister describes the review of the Act and how we have a rapidly growing list of potential harms. It strikes me that we are up against a very agile and rapidly developing world. I recently visited the BBC Blue Room and saw the leading edge of consumer-available technology, and it was quite disturbing to see the capabilities that are coming online soon. In the review of the Act, is there scope to move from a register of harms to domains of safety, such as trauma, addiction or attachment, where the obligation would be on service providers or manufacturers to ensure that their products were safe across those domains? Once again, there could be security for smaller businesses from the world of technical standards, where a business offering a simple service that meets an industry-developed standard has a presumption of compliance. The British Standards Institution has demonstrated very rapid development of such standards through the publicly available specification system, and that is available to help us navigate this rapidly. Could that be in scope?
Mr Pritchard (in the Chair)
Interventions should be brief, but I am very kind.