Online Harms

Damian Collins Excerpts
Thursday 19th November 2020

Commons Chamber
Damian Collins (Folkestone and Hythe) (Con)

I congratulate my right hon. and learned Friend the Member for Kenilworth and Southam (Jeremy Wright) on his excellent speech introducing this debate. We need to be clear that the Government's response to the Online Harms White Paper is urgently needed, as is the draft Bill. We have been discussing this for several years now. When I was Chair of the Digital, Culture, Media and Sport Committee, we published a report in the summer of 2018 calling for intervention on online harms and for a regulatory system based on a duty of care placed on the social media companies to act against harmful content.

There are difficult decisions to be made in assessing what harmful content is and what needs to be done about it, but I do not believe those decisions should be made solely by the chief executives of the social media companies. There should be a legal framework that they have to work within, just as people in so many other industries do. It is not enough to have an online harms regulatory system based just on the terms and conditions of the companies themselves, in which all that Parliament and the regulator can do is observe whether those companies are administering their own policies.

We must have a regulatory body with an auditing function that can look at what goes on inside these companies and at the decisions they make to remove hate speech, medical conspiracy theories and other more extreme forms of harmful or violent content. Companies such as Facebook say that they remove 95% of harmful content. How do we know? Because Facebook tells us. Has anyone checked? No. Can anyone check? No; we are not allowed to check. Those companies have consistently refused to allow independent academic bodies to go in and scrutinise what goes on within them. That is simply not good enough.

We should be clear that we are not talking about regulating speech. We are talking about regulating a business model. It is a business model that prioritises the amplification of content that engages people, and it does not care whether that content is harmful. All it cares about is the engagement. So people who engage with medical conspiracy theories will see more medical conspiracy theories. A young person who engages with images of self-harm will see more images of self-harm. No one is stepping in to prevent that. How do we know that Facebook did all it could to stop the live broadcast of the terrorist attack in Christchurch, New Zealand? No one knows. We have only Facebook’s word for it, and the scale of that problem could have been a lot worse.

The tools and systems of these companies are actively directing people to harmful content. People often talk about how easy it is to search for this material. Companies such as Facebook will say, “We downgrade this material on our site to make it hard to find,” but they direct people to it. People are not searching for it; it is being pushed at them. Some 70% of what people watch on YouTube is selected for them by YouTube, not searched for by them. An internal study done by Facebook in Germany in 2016, which the company suppressed and which was leaked to the media this year, showed that 60% of people who joined Facebook groups that shared extremist material did so at the recommendation of Facebook, because they had engaged with material like that before. That is what we are trying to regulate: a business model that is broken. We desperately need to move on with online harms.