Chi Onwurah (Labour - Newcastle upon Tyne Central and West)
I am grateful to the Backbench Business Committee for allocating time for this statement. Today I speak on behalf of the Science, Innovation and Technology Committee, but also on behalf of the hundreds of thousands of people whose lives were profoundly affected by last year’s riots, as well as everyone impacted by the long shadow of social media misinformation.
I want to put on the record my thanks to the Committee Clerks and specialists who have supported this inquiry, and the many witnesses who gave evidence. They have helped to shape our report, and we are very grateful, particularly to those who shared their real-life experience of the riots and the online hate that accompanied them. The Committee would also like to extend our deepest sympathies to the families of the three little girls murdered in Southport, and everyone affected.
Like many nations, the UK is grappling with the immense challenge of regulating global tech giants—companies whose platforms shape our societies, economies and democracies, often with resources that dwarf those of Governments. For example, the UK’s entire public sector budget is roughly equal to Meta’s market capitalisation. As the representative of the British people, Parliament must be able to understand the impact of these companies, to scrutinise their actions and to regulate them in the public interest where necessary. However, the Committee experienced significant challenges in seeking to do that during the course of the inquiry. We were reassured by statements from Google, Meta, TikTok and X in our evidence session that they accepted their responsibility to be accountable to the British people through Parliament, and we hope to see that borne out in practice as our work in this area continues.
The horrific attacks in Southport on 29 July 2024 and the violent unrest that followed are a stark reminder of the real-world consequences of the viral spread of online misinformation. Hateful and misleading content spread rapidly, amplified by opaque recommendation algorithms. Protests turned violent, often targeting Muslim and migrant communities, driven in part by the spread of these messages. These events provided a snapshot of how online activity can contribute to real-world violence and hate.
Many parts of the long-awaited Online Safety Act 2023 were not fully in force at the time of the riots, but the Committee found little evidence that they would have made a difference if they had been. Moreover, the Act is out of date. It regulates at a technology and content level, rather than on principles or outcomes, and therefore fails to adequately address generative artificial intelligence—ChatGPT, deepfakes and synthetic misinformation—even as it becomes cheaper and easier to access. Generative AI will make the next misinformation crisis much more dangerous.
Having spent six years working for Ofcom before entering Parliament, I believe strongly that regulating at the level of specific technologies does not work. Our online safety regime should instead be based on principles that remain sound in the face of technological development. Social media has made many important and positive contributions, helping to democratise access to a public voice and to connect people far and wide. It also carries significant risks.
The advertisement-based business models of most social media companies mean that they promote engaging content, often regardless of its authenticity. That spills out across the entire internet via the opaque, under-regulated digital advertising market, incentivising the creation of content that will perform well on social media, as we saw during the 2024 unrest.
This is not just a social media problem; it is a systemic issue that promotes harmful content and undermines public trust. Our concerns were exacerbated when we questioned representatives of regulators and the Government. We were met with confusion and contradiction at high levels, and it became evident that the UK’s online safety regime has some major holes.
After four public sessions, more than 80 written submissions and extensive deliberations, our findings are clear: the British people are not adequately protected from online harms. We have identified five key principles that we believe are crucial for the regulation of social media and related technologies, and that drive our recommendations to Government.
Our first principle is public safety. Algorithmically enhanced misinformation is a danger that companies, Government, law enforcement and security services need to work together to address. That is basically saying that misinformation is harmful, which may sound obvious, but it has not been recognised as such. As a consequence, platforms should be compelled to demote fact-checked misinformation and establish processes to take more stringent measures during crises. We propose that the Government carry out research into how platforms should tackle misinformation and how far recommendation algorithms spread harm. Furthermore, all AI content should be visibly labelled.
Our second principle is free and safe expression. Any steps to tackle amplified misinformation must be consistent with the fundamental right to free expression.
Our third principle is responsibility. Users should be held liable for what they post online, but the platforms on which they post are also responsible and should be held accountable for the impact of the amplification of harmful content. That may sound obvious—indeed, it is not the first time that a Select Committee has said this—yet widespread uncertainty remains as to whether platforms have a responsibility for the legal content that they host and distribute. The report recommends that platforms be obliged to undertake risk assessments and report on content that is legal but harmful. New regulatory oversight, clear and enforceable standards and proportionate penalties are needed to cover the process of digital advertising.
Our fourth principle is control. Critically, users should have control over both their personal data and what they see online. We recommend that users have a right to reset the data used by platform recommendation algorithms.
Our fifth and final principle is transparency. The technology used by social media companies, including recommendation algorithms and generative AI, should be transparent, accessible and explainable to public authorities. Transparency is needed for participants in the digital advertising market. Basically, if we cannot explain it, we cannot understand the harm it may do.
I am a tech evangelist: I believe that technology, like politics, is an engine of progress, but that does not mean we have to accept it as it is. Our report sets out detailed recommendations to ensure that we do not have a repeat of the violent and harmful riots last year. We urge the Government to acknowledge that the Online Safety Act is not fit for purpose; to adopt these five principles to build a stronger, future-proof online safety regime, with clear, enforceable standards for social media platforms; and to implement our recommendations. Without action, it is only a matter of time before we face another crisis like the Southport riots, or even worse.
I thank the hon. Lady and the Select Committee that she chairs for delivering this important review. I also thank her for her statement to the House, which has highlighted the scale of the challenge we face in relation to the proliferation of misleading and harmful content online. I join her in offering my prayers and sympathies to all those affected by the horrors in Southport last year.
Given the report’s findings that young people are particularly susceptible to misleading and harmful content and online radicalisation, due to the stage of their cognitive development, does the hon. Lady consider that the Government should commit to conducting a review of the evidential case for raising the digital age of consent for social media platforms from 13 to 16?
I thank the hon. Member for his comments, and for highlighting the particular issue of young people, their cognitive development and the lack of protection from misinformation that they enjoy as a consequence. The Committee did not recommend that the Government commit to such a review, but we are considering a further inquiry into the impact of social media on the cognitive development of young people, and I am sure that it will produce recommendations in this area.
I thank my hon. Friend and her Committee for highlighting the challenges we face in scrutinising powerful technology companies. As she knows, I am particularly concerned about suicide and self-harm-related content. In 2022, more than three quarters of the individuals surveyed by Samaritans said that they first saw self-harm content online at the age of 14 or younger, often without searching for it. Worryingly, 76% said that they self-harmed more severely after viewing such content online. We have taken important steps forward by implementing the Online Safety Act, but we still face the challenge of regulating emerging technologies as well as small and risky platforms. Does she share my concern about this issue? What is her Committee’s view on how we can tackle and monitor it?
I thank my hon. Friend and constituency neighbour for highlighting this incredibly important issue and for the work that she does in this area. She is absolutely right. The Committee heard very moving and distressing evidence that suicide and self-harm content can be and has been amplified by social media algorithms, and that this can play a role in suicide and self-harm, including by young people. Promoting suicide is illegal, and the Online Safety Act introduced an offence of promoting self-harm, but the Act does not do enough to tackle legal content that promotes suicide or self-harm, as with the rest of legal but harmful content, such as misinformation. The Committee’s recommendation that platforms be held accountable for the algorithmic amplification of misinformation would address part of what my hon. Friend is concerned about. We hope that, in implementing our recommendations, the Government will set out how they will fully address her concerns.
I am a member of the Select Committee over which the hon. Lady presides as Chair, and I thank her and all the staff who helped with this report and inquiry. I know that many of us in this place wear not one, but multiple hats; I also sit on the Joint Committee on Human Rights, and we are doing an inquiry into AI and human rights. Some of the work from this report will be helpful and inform us in looking at some of the key issues, so, with another hat on, I thank her for that.
One of the evidence sessions that has stuck with me from working on the report is when we had social media company bosses in front of us. They talked about how they removed most of the content within 10 minutes in 90% of cases, but they did not accept responsibility for the proliferation of that data, information and content outside their own spheres. What worries me is that, at a time when technology is advancing at a breakneck pace and regulation needs to keep up, companies—particularly social media companies—are unravelling and unpicking their content moderation, their monitoring, and how they look for and remove information that might be harmful. Does the Chair of the Science, Innovation and Technology Committee agree that we need to do more to regulate current approaches and to ensure that the companies do not backslide on their obligations?
I thank the hon. Gentleman, my fellow Committee member, for that question, as well as for his contribution to this report and the work of the Committee, which has been exceedingly valuable. He has raised a really important point; we heard evidence from Meta about Facebook content checking, and how outside the UK and the US it was moving from fact-checking to community notes, which X has also done. The Committee has recommended adopting the principle that platforms are held accountable, which must go hand in hand with those platforms setting out how they can demonstrate that accountability. The report also recommends that the Government undertake research into what effective fact-checking looks like and how misinformation is spread, because one of the things that the Committee—which is a scientific Committee—observed was the lack of real evidence in that area. That is partly because the algorithms and platforms are so opaque and secretive about how they operate.
I thank my hon. Friend the Chair of the Select Committee, of which I am also a member. This report makes vital recommendations to Government and social media companies about how we can make the online world safer. I am particularly concerned about the impact on young people. Does my hon. Friend agree that harm is happening today? Young people will be going home from school with harmful content being pushed at them. Does she agree that social media companies should not wait for Government to implement these recommendations, but should get on with implementing them today to stop that harm happening to children across this country?
I thank my hon. Friend, both for his outstanding contribution to the report and the Committee’s work and for his question. He is absolutely right—we should not have to force the online companies to take action. They can see the evidence, and they can read this report. They know what is happening to the brains of young people. They should be able to implement the recommendations of the report without waiting for Government action, although I very much believe that Government action will be necessary to ensure those recommendations are implemented across all platforms.
Our Committee’s evidence supports the conclusion that social media business models, particularly the use of recommendation algorithms that push users to see more extreme content and misinformation, incentivise the spread of dangerous content and, consequently, behaviours. As both a member of the Committee and co-chair of the all-party parliamentary group on artificial intelligence— I am another Member who wears many hats—I welcome our Committee’s conclusion that users and social media platforms should be held accountable for the impact that their amplification causes. Does my hon. Friend agree that the report provides clear recommendations to the Government and regulators as to how those platforms should be held accountable, including labelling of all AI-generated content and new regulatory oversight with clear, enforceable standards and penalties? Accountability must have teeth.
I thank my hon. Friend for her contribution to the Committee’s report—her understanding and knowledge of AI have been invaluable. She is absolutely right that AI content must be labelled, and that the platforms and content generators must take action to address this issue now, because the risks are only going to increase. I look forward to the Government ensuring that enforcement has teeth for those platforms that do not take action or are too reckless in the action they take.
I thank my hon. Friend for her Committee’s report, and the team who drafted it. The report contains concerns and recommendations about digital advertising. WhatsApp is a direct communication tool, for good and for bad, which very few people engage with as though it were a classic social media platform such as Facebook, Instagram or X; it is a place for private conversation, and Meta prides itself on its data being encrypted and secure.
Having said that it would not do so, Meta will now start serving up ads under the guise of advertisers wanting to go where their audiences are, which translates as organisations wanting to make more money through targeted marketing, including Meta itself, whose WhatsApp has 3 billion users worldwide. If we have our WhatsApp connected to Facebook and Instagram, we may get more personalised ads. Never have I wanted to disconnect my WhatsApp from my Instagram and Facebook more. Already, when we click on something on Instagram, it follows us to our Facebook feeds. There is no escaping it, so our views are being shaped and influenced relentlessly by the organisations that drive the most revenue to the platform owners.
With reference to the recommendations in this report about controlling digital advertising, does my hon. Friend agree that we should be able to communicate with our loved ones without being constantly sold to and influenced, and that we should always have options to opt out for the sake of our own mental health?
I thank my hon. Friend for championing safe and effective technology, and for the points she has made. She is absolutely right. First, it is essential that the technology be properly regulated, and that regulation be based on principles, so WhatsApp and user-to-user communication should be subject to the same principles-based regulatory environment as content communication. Secondly, we must be able to opt out, and to reset the algorithms that drive the advertising we receive. Thirdly and finally, digital advertising is unfortunately a free-for-all, with very little regulation or control. If consumers are to be adequately protected, that needs to change.