Westminster Hall
Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.
Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.
Tom Collins
Thank you, Mr Pritchard. I agree with my hon. Friend the Member for Bournemouth East (Tom Hayes). I was fortunate enough to meet the Worcestershire youth cabinet, which is based in my constituency. I was struck that one of its members’ main concerns was their online safety. I was ready for them to ask for more support in navigating the online world, but that is not what they asked for. They said, “Please do not try to support us any more; support our adults to support us. We have trusted adults, parents and teachers, and we want to work with them to navigate this journey. Please help them so that they can help us.” I thank my hon. Friend for his excellent point.
My hon. Friend is making an excellent speech that gets to the heart of some of the tensions. However, he seems to be leaning quite strongly into how the algorithms are self-learning and catch on to what people share organically, which they double down on to commercialise the content. Does he accept that some widely used platforms are not just using an algorithm but are deliberately suppressing mainstream opinion and fact in order to amplify false information and disinformation, and that the people benefiting are those who have malign interests in our country?
Tom Collins
Absolutely. My hon. Friend is right. All those algorithms now have hidden interests, which are sometimes just to increase use, but I think we all strongly suspect that they may stray into political agendas. It is remarkable how powerful that part of the online world is. My personal view is that it is not dissimilar to the R number during covid. If a person sees diverse enough content, their worldview will have enough overlap with other people’s that it will tend to converge. In the old days, “The Six O’Clock News”, or the news on the radio, provided us with shared content that we all heard, whether we agreed with it or not. That anchored us to a shared narrative.
We are now increasingly in echo chambers of reality where we are getting information that purports to be news and reactions that purport to be from human beings in our communities, both of which reinforce certain views. It is increasingly possible that the R number will become greater than one, and our worldviews will slowly diverge further and further. Such an experiment has never been carried out on a society, but it strikes me that it could be extremely harmful.
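As a purely illustrative aside, the convergence-or-divergence dynamic described here can be sketched as a toy simulation. Nothing below comes from the debate: the update rule, the parameters and the numbers are assumptions chosen only to show how greater shared exposure pulls opinions together while echo-chamber exposure pulls them apart.

```python
# Toy model, illustrative only: how the balance between shared content and
# echo-chamber content affects whether a population's views converge or diverge.
# Every parameter and rule here is an assumption chosen for illustration.
import random
import statistics

def simulate(shared_exposure: float, rounds: int = 50, n_agents: int = 200) -> float:
    """Return the spread (population std dev) of opinions after `rounds` of updates.

    Each agent holds an opinion in [-1, 1]. Every round it moves partly towards a
    single shared broadcast signal and partly towards the average of the agents
    already most similar to it (its echo chamber), with a small reinforcement drift.
    """
    random.seed(0)
    opinions = [random.uniform(-1.0, 1.0) for _ in range(n_agents)]
    broadcast = 0.0  # the shared narrative everyone also sees

    for _ in range(rounds):
        new_opinions = []
        for o in opinions:
            # The echo chamber: the 10% of agents whose opinions are closest to this one.
            chamber = sorted(opinions, key=lambda x: abs(x - o))[: n_agents // 10]
            chamber_mean = statistics.fmean(chamber)
            # Blend the shared signal with the echo-chamber signal...
            target = shared_exposure * broadcast + (1 - shared_exposure) * chamber_mean
            # ...and add a small push further in the direction the chamber already leans.
            drift = 0.02 * (1 - shared_exposure) * (1.0 if chamber_mean > 0 else -1.0)
            new_opinions.append(0.8 * o + 0.2 * target + drift)
        opinions = new_opinions
    return statistics.pstdev(opinions)

if __name__ == "__main__":
    for exposure in (0.8, 0.4, 0.05):
        print(f"shared exposure {exposure:.2f} -> opinion spread {simulate(exposure):.3f}")
```

In this toy model, high shared exposure collapses the spread towards a common narrative, while low shared exposure lets the clusters reinforce themselves and drift apart.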
While we are exploring this theme, I would like to point to the opposite possibility. In Taiwan, trust in the Government was at 9% when the digital Minister took office. They created a digital platform that reversed the algorithm so that, instead of prioritising content based on engagement—a good proxy for how polarising or divisive something is—it prioritised how strongly content resonated with both sides of the political divide. The stronger a sentiment was in bridging between those two extremes, the more it was prioritised.
Instead of people competing to become more and more extreme, to play to their own audiences, they competed to express sentiments and make statements that bridged the divide more and more. In the end, as the system matured, the Government were able to start to say things like, “Once a sentiment receives 85% agreement and approval, the Government will take it on as a goal. We will work out how to get there, but we will take it as a goal that the public say we should be shooting for.” By the end of the project, public trust in the Government was at 70%. Algorithms are powerful—they can be powerful for good or for ill. What we need to make sure is that they are safe for us as a society. That should be the minimum standard.
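The approach described is in the spirit of the Polis-style deliberation tools used in Taiwan, but the sketch below is only a minimal illustration of the ranking idea: the two-group simplification, the scoring rule and the example statements are assumptions made for illustration, not the actual system; only the 85% consensus threshold is taken from the speech.

```python
# Illustrative sketch, not the actual Taiwanese system: ranking statements by how
# well they bridge two opposing groups rather than by raw engagement.
from dataclasses import dataclass

@dataclass
class Statement:
    text: str
    engagement: int          # total reactions/shares (what engagement ranking rewards)
    approval_group_a: float  # fraction of group A who agree, 0.0 to 1.0
    approval_group_b: float  # fraction of group B who agree, 0.0 to 1.0

def engagement_score(s: Statement) -> float:
    # Conventional feed ranking: the most-reacted-to content wins, which tends
    # to favour the most divisive content.
    return float(s.engagement)

def bridging_score(s: Statement) -> float:
    # Bridging ranking: a statement only scores highly if *both* groups support
    # it, so content that plays to one side is de-prioritised.
    return min(s.approval_group_a, s.approval_group_b)

statements = [
    Statement("The other side is ruining the country", engagement=9_000,
              approval_group_a=0.90, approval_group_b=0.05),
    Statement("New services should meet the same safety rules as existing ones",
              engagement=1_200, approval_group_a=0.88, approval_group_b=0.86),
]

print("Top by engagement:", max(statements, key=engagement_score).text)
print("Top by bridging:  ", max(statements, key=bridging_score).text)

# The consensus threshold described in the speech: once a statement clears 85%
# support across the divide, it is adopted as a shared goal.
CONSENSUS = 0.85
print("Adopted as goals:", [s.text for s in statements if bridging_score(s) >= CONSENSUS])
```

The design point is simply that, under bridging ranking, a statement cannot score highly by playing to one side only.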
Finally, we can imagine harms that apply at a societal level but come through interaction. That comes, I would say, when we start to treat machines as if they are members of our society—as people. When I first started exploring this issue, I thought that we had not seen that yet. Then I realised that we have: bots on social media and fake accounts that we do not know are not human beings. They are not verified as human beings, yet we cannot help but start to believe and trust what we see. I would say that it is only a matter of time before these bots become more and more sophisticated and with more and more of an agenda—more able to build relationships with us and to influence us even more deeply. That is a dangerous threshold, which points to the need for us to deal with the issue in a sophisticated way.
What next? It is critical that we first start to develop tools—technically speaking, these are models—that classify and quantify these hazards to individual people and to us as a society, so that we can understand what is hazardous and what is not. Then, based on that, we can start to build tools and models that allow us to either validate products as safe—they should, I agree, be safe by design—or provide protective features.
Already, some companies are developing protection algorithms that can detect content that is illegal or hazardous in different ways and provide a trigger to an operating system to, for example, mask that by making it blurred or opaque, either at the screen or the camera level. Such tools are rapidly becoming more and more capable, but they are not being deployed. At the moment, there is very little incentive for them to be deployed.
If, for example, we were to standardise interfaces or sockets of some kind in the software environment, so that these protective tools could be plugged into operating systems or back ends, we could create a market for developing more and more accurate and capable software.
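As a sketch of what such a standardised “socket” might look like, the interface below is entirely hypothetical: every name, the verdict categories and the trivial keyword-matching plugin are invented for illustration, and real protection tools would use far more capable classifiers behind the same kind of interface.

```python
# Hypothetical sketch of a standardised "socket" for protection tools. Every name
# here is invented for illustration; real plugins would use trained classifiers
# rather than keyword matching.
from dataclasses import dataclass
from enum import Enum
from typing import Protocol

class Action(Enum):
    ALLOW = "allow"
    BLUR = "blur"    # mask at the screen or camera level
    BLOCK = "block"  # do not render at all

@dataclass
class Verdict:
    action: Action
    reason: str
    confidence: float  # 0.0 to 1.0

class ProtectionPlugin(Protocol):
    """The common interface any vendor's protection tool would implement."""
    def classify(self, content: bytes, content_type: str) -> Verdict: ...

class KeywordFilter:
    """A deliberately trivial plugin, standing in for a far more capable classifier."""
    BLOCKED_TERMS = (b"example-harmful-marker",)

    def classify(self, content: bytes, content_type: str) -> Verdict:
        if any(term in content for term in self.BLOCKED_TERMS):
            return Verdict(Action.BLUR, "matched a blocked term", 0.99)
        return Verdict(Action.ALLOW, "no match", 0.60)

def render(content: bytes, content_type: str, plugins: list[ProtectionPlugin]) -> str:
    """The host (operating system or back end) consults every installed plugin before display."""
    for plugin in plugins:
        verdict = plugin.classify(content, content_type)
        if verdict.action is not Action.ALLOW:
            return f"[content withheld ({verdict.action.value}): {verdict.reason}]"
    return content.decode(errors="replace")

print(render(b"an ordinary message", "text/plain", [KeywordFilter()]))
print(render(b"a message containing example-harmful-marker", "text/plain", [KeywordFilter()]))
```

The point of the design is that the host only has to honour one interface, while vendors compete on how accurate and capable the plugin behind it is; that competition is the market the standardisation would create.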
In the world of physical safety, we use a principle called “state of the art”. In contrast to how we all might understand that term, it does not mean the cutting edge of technology; rather, it means safety features that are common enough that they should be adopted as standard and we should expect to have them. The automotive industry is a great example. Perhaps the easiest feature for me to point to is anti-lock brakes, which started out as a luxury feature in high-end vehicles, but rolled out into more and more cars as they became more affordable and accessible. Now they come as standard on all cars. A car without anti-lock brakes could not be sold because it would not meet the state of the art.
If we apply a similar principle to online protection software, tech companies with capable protections would have a guaranteed market. The digital product manufacturers or service providers would have to keep up; that would drive both innovation and uptake. Such state-of-the-art requirements are already practised in industry. They cost the public purse nothing and generate growth, high-value jobs and national capabilities. Making the internet safe in the right way does not close it down; it creates freedoms and opens it up—freedom to trust what we are seeing; freedom to use it without being hurt; and freedom to rely on it without endangering our national security.
There is another parallel. We would not dream of building a balcony without a railing, but if we had built one we would not decide that the only way to make it safe was to declare that the balcony was for use only by adults. It still would not be safe. Adults and children alike would inevitably come to harm and many of our regulations would not allow it: in fact, there must be a railing that reaches a certain height and is able to withstand certain forces, and it must be designed with safety in mind and be maintained. We would have an inspection to make sure it was safe. Someone designing or opening a building with an unprotected, unbarriered balcony could easily expect to go to prison. We have come to expect our built environment to be safe in that way; having been made robustly safe for adults, it is also largely safe for children. If we build good standards and regulation, we can all navigate the digital world safely and freely.
Likewise, we need to build the institutions to ensure fast and dynamic enforcement. For services, there are precedents for good enforcement. We have seen great examples of that when sites have not complied, such as TCP ports for payment systems being turned off instantly. That is a really strong motivation for a website to comply. It is fast, dynamic and robust, and is very quickly reversible, as the TCP port can be turned back on and the website can once again accept payments. We need that kind of fast, dynamic enforcement if we are to keep up with the fast and adaptive world working around us.
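A reversible port-level block is one way to picture that kind of enforcement. The sketch below assumes a Linux host with iptables and root privileges; the host name, port and surrounding workflow are illustrative assumptions, and in practice the block would be applied by regulators, network operators or payment providers rather than a local script.

```python
# Illustrative only: a fast, reversible enforcement action of the kind described,
# blocking and then restoring a non-compliant service's payment traffic.
# Assumes a Linux host with iptables and root privileges.
import subprocess

def _rule(host: str, port: int) -> list[str]:
    # The iptables rule specification: reject outbound TCP to the payment endpoint.
    return ["-p", "tcp", "-d", host, "--dport", str(port), "-j", "REJECT"]

def block_payments(host: str, port: int = 443) -> None:
    """Apply the enforcement order: insert a rule rejecting payment traffic."""
    subprocess.run(["iptables", "-I", "OUTPUT", *_rule(host, port)], check=True)

def restore_payments(host: str, port: int = 443) -> None:
    """Lift the order the moment the service complies: delete the same rule."""
    subprocess.run(["iptables", "-D", "OUTPUT", *_rule(host, port)], check=True)

if __name__ == "__main__":
    block_payments("payments.example.com")    # enforcement applied in seconds
    restore_payments("payments.example.com")  # and reversed just as quickly
```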
On the topic of institutions, I would like to point out—I would not be surprised if my hon. Friend the Member for Rugby (John Slinger) expands on this—that when television and radio came into existence, we built the BBC so that we would have a trusted source among those services. It kept us safe, and it also ended up projecting our influence around the world. We need once again to build or expand the institutions and infrastructure to provide digital services in our collective interest.
My hon. Friend is making a very good speech; maybe he should consider a career in TED Talks after this. A number of competitor platforms have been tried, such as Bluesky as an alternative to X, but the take-up is not sustained. I wonder whether the monopoly that some of these online platforms have is now so well embedded that people have become attached to them out of habit. As Members, we must all feel the tension at times about whether we should or should not be on some of these platforms.
There is a need for mainstream voices to occupy these spaces to ensure that we do not concede to extremes of any political spectrum, but we are always going to be disadvantaged if the algorithm drives towards those extremes and not to the mainstream. I am simply testing the principle of an online BBC against the alternative of a more level playing field for mainstream content on existing platforms.
Tom Collins
My hon. Friend is, of course, right. If we regulate for safety, we do not need to worry about the ecosystem needing good actors to displace it. At the same time, however, those good actors would have a competitive and valuable role to play, and I do not want to undervalue the currency of trust. Institutions such as the BBC are so robustly trustworthy that they have a unique value to offer, even if we do manage to create a safe ecosystem or market of online services.
I am convening a group of academics to start trying to build the models I discussed as the foundation for technical standards for safe digital products. I invite the Minister to engage the Department in this work. That is vital for the safety of each of us and our children as individuals, and for the security and resilience of our society. I also invite anybody in the technical space of academia or industry exploring some of these models and tools to get in touch with me if they see this debate and are interested.
Only by taking assertive action across all levels of technical, regulatory and legal governance can we ensure the safety of citizens. Only by expanding our institutions can we provide meaningful enforcement and design and build online products, tools and infrastructure. If we do those things, the internet will be more open, secure, private, valuable and accessible to all of us. Good regulation is the key to a safe and open internet.