Online Safety Bill (Fourth sitting) Debate
Caroline Ansell (Conservative - Eastbourne)
Public Bill Committees

Q
Poppy Wood: I think it goes without saying that the algorithmic promotion of harmful content is one of the biggest issues with the model we have in big tech today. It is not the individual pieces of content in themselves that are harmful. It is the scale over which they spread out—the amplification of them; the targeting; the bombardment.
If I see one piece of flat-earth content, that does not necessarily harm me; I probably have other counter-narratives that I can explore. What we see online, though, is that if you engage with that one piece of flat-earth content, you are quickly recommended something else—“You like this, so you’ll probably like that”—and then, before you know it, you are in a QAnon conspiracy theory group. I would absolutely say that the algorithmic promotion of harmful content is a real problem. Does that mean we ban algorithms? No. That would be like turning off the internet. You have to go back and ask how it is that that kind of harm is promoted, and how it is that human behaviour is being exploited. It is human nature to be drawn to things that we cannot resist. That is something that the Bill really needs to look at.
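A minimal Python sketch of the engagement-driven loop described above: recommend whatever is most similar to what the user last engaged with, so each click pulls the user further along a chain of related content. The toy catalogue, the tags and the similarity rule are invented for illustration; they are not any platform’s actual recommender.

```python
# Toy catalogue: each item carries topic tags. Invented for illustration only.
CATALOGUE = {
    "flat-earth-video": {"flat earth", "conspiracy"},
    "moon-landing-hoax": {"conspiracy", "space"},
    "qanon-group": {"conspiracy", "qanon"},
    "gardening-tips": {"gardening"},
}

def recommend_next(last_engaged: str, seen: set) -> str:
    """Pick the unseen item whose tags overlap most with the last thing engaged with."""
    last_tags = CATALOGUE[last_engaged]
    candidates = [item for item in CATALOGUE if item not in seen]
    return max(candidates, key=lambda item: len(CATALOGUE[item] & last_tags))

# One engagement is enough to start the chain: each step optimises for similarity
# (a proxy for engagement), not for whether the content is harmful.
seen = {"flat-earth-video"}
item = "flat-earth-video"
for _ in range(2):
    item = recommend_next(item, seen)
    seen.add(item)
    print(item)  # prints moon-landing-hoax, then qanon-group: two clicks from flat earth to QAnon
```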
In the risk assessments, particularly for illegal content and content that is harmful to children, the Bill explicitly references algorithmic promotion and the business model. Those are two really big things that you touched on in the question. The business model is to make money from our time spent online, and the algorithms serve us up the content that keeps us online. That is accounted for very well in the risk assessments. The safety duties, though, do not necessarily account for that just because you are risk assessing for it: say you identify that your business model does promote harmful content; under the Bill, you do not have to mitigate that all the time. So I think there are questions around whether the Bill could go further on algorithmic promotion.
If you do not mind, I will quickly come back to the question you asked Eva about reporting. We just do not know whether reporting is really working because we cannot see—we cannot shine a light into these platforms. We just have to rely on them to tell us, “Hey, reporting is working. This many pieces of content were reported and this many pieces of content were taken down.” We just do not know if that is true. A big part of this regime has to be about transparency. It already is, but I think it could go much further in enabling Ofcom, Government, civil society and researchers to say, “Hey, you said that many pieces of content were reported and that many pieces of content were taken down, but actually, it turns out that none of that is true. We are still seeing that stuff online.” Transparency is a big part of the solution around understanding whether reporting is really working and whether the platforms are true to their word.
Q
Poppy Wood: Absolutely. I know that children’s groups are asking for minimum standards for children’s risk assessments, but I agree that they should be across the board. We should be looking for the best standards that we can get. I really do not trust the platforms to do these things properly, so I think we have to be really tough with them about what we expect from them. We should absolutely see minimum standards.
Q
Poppy Wood: Obviously Ofcom is growing. The team at Ofcom are fantastic, and they are hiring really top talent. They have their work cut out in dealing with some of the biggest and wealthiest companies in the world. They need to be able to rely on civil society and researchers to help them to do their job, but I do not think we should rule out Ofcom being able to do these things. We should give it the powers to do them, because that makes this regime have proper teeth. If we find down the line that, actually, it is too much, that is for the Government to sort out with resourcing, or for civil society and researchers to support, but I would not want to rule things out of the Bill just because we think Ofcom cannot do them.
Q
Poppy Wood: Of course, the Bill has quite a unique provision for looking at anonymity online. We have done a big comparison of online safety regulations across the world, and nobody is looking at anonymity in the same way as the UK. It is novel, and with that comes risk. Let us remember that anonymity is a harm reduction mechanism. For lots of people in authoritarian regimes, and even for those in the UK who are survivors of domestic abuse or who want to explore their sexuality, anonymity is a really powerful tool for reducing harm, so we need to remember that when we are talking about anonymity online.
One of my worries about the anonymity agenda in the Bill is that it sounds really good and will resonate really well with the public, but it is very easy to get around, and it would be easy to oversell it as a silver bullet for online harm. VPNs exist so that you can be anonymous. They will continue to exist, and people will get around the rules, so we need to be really careful with the messaging on what the clauses on anonymity really do. I would say that the whole regime should be a privacy-first regime. There is much more that the regime can do on privacy. With age verification, it should be privacy first, and anonymity should be privacy first.
I also have some concerns about the watering down of privacy protections from the draft version of the Bill. I think the language was “duty to account for the right to privacy”, or something, and that right-to-privacy language has been taken out. The Bill could do more on privacy, remembering that anonymity is a harm-reducing tool.
Q
Eva Hartshorn-Sanders: I heard the advice that the representative of the Information Commissioner’s Office gave earlier—he feels that the balance is right at the moment. It is important to incorporate freedom of speech and privacy within this framework in a democratic country. I do not think we need to add anything more than that.
Q
Poppy Wood: I know you have spoken a lot about this over the past few days, but the content of democratic importance clause is a layer of the Bill that makes the Bill very complicated and hard to implement. My concern about these layers of free speech—whether it is the journalistic exemption, the news media exemption or the content of democratic importance clause—is that, as you heard from the tech companies, they just do not really know what to do with it. What we need is a Bill that can be implemented, so I would definitely err on the side of paring back the Bill so that it is easy to understand and clear. We should revisit anything that causes confusion or is obscure.
The clause on content of democratic importance is highly problematic—not just because it makes the Bill hard to implement and we are asking the platforms to decide what democratic speech is, but because I think it will become a gateway for the sorts of co-ordinated disinformation that we spoke about earlier. Covid disinformation for the past two years would easily have been a matter of public policy, and I think the platforms, because of this clause, would have said, “Well, if someone’s telling you to drink hydroxychloroquine as a cure for covid, we can’t touch that now, because it’s content of democratic importance.”
I have another example. In 2018—four years later—Facebook said that it had identified and taken down a Facebook page called “Free Scotland 2014”. It was a Russian/Iranian-backed page that was promoting falsehoods in support of Scottish independence using fake news websites, with articles about the Queen and Prince Philip wanting to give themselves a pay rise by stealing from the poor. It was total nonsense, but that is easily content of democratic importance. Even though it was backed by fake actors—as we have said, I do not think there is anything in the Bill to preclude that at the moment, or at least to get the companies to focus on it—in 2014, that content would have been content of democratic importance, and the platforms took four years to take it down.
I think this clause would mean that that stuff became legitimate. It would be a major loophole for hate and disinformation. The best thing to do is to take that clause out completely. Clause 15(3) talks about content of democratic importance applying to speech across a diverse range of political opinion. Take that line in that subsection and put it in the freedom of expression clause—clause 19. What you then have is a really beefed-up freedom of expression clause that talks about political diversity, but you do not have layers on top of it that mean bad actors can promote hate and disinformation. I would say that is a solution, and that will make the Bill much easier to implement.
Q
Martin Lewis: As you will know, I had to sue Facebook for defamation, which is a ridiculous thing to do in order to stop scam adverts. I was unable to report the scam adverts to the police, because I had not been scammed—even though it was my face that was in them—and many victims were not willing to come forward. That is a rather bizarre situation, and we got Facebook to put forward £3 million to set up Citizens Advice Scam Action—that is what I settled for, as well as a scam ad reporting tool.
There are two levels here. The problem is who is at fault. Of course, those mainly at fault for scams are the scammers. They are criminals and should be prosecuted, but not enough of them are. There are times when it is the bank’s fault. If a company has not put proper precautions in place, and people have been scammed because it has put up adverts or posts that it should have prevented, it absolutely needs to have some responsibility for that. I think you will struggle to have a direct redress system put in place. I would like to see it, but it would be difficult.
I am worried that the £3 million for Citizens Advice Scam Action, which was at least meant to provide help and support for victims of scams, is going to run out. I have not seen any more money coming from Facebook, Google or any of the other big players out there. If we are not going to fund direct redress, we could at least make sure that they fund a collective form of redress and help for the victims of scams, as a bare minimum. It is very strange how quiet these firms go on this; what they say is, “We are doing everything we can.”
From my meetings with these firms—these are meetings with lawyers in the room, so I have to be slightly careful—one of the things that I would warn the Committee about is that they tend to get you in and give you a presentation on all the technological reasons why they cannot stop scam adverts. My answer to them after about 30 seconds, having stopped what was meant to be an hour-long presentation, is, “I have not said that you need a technological solution; I have said you need a solution. If the answer to stopping scam adverts, and to stopping scams, is that you have to pre-vet every single advert, as old-fashioned media did, and that every advert that you put up has to have been vetted by a human being, so be it. You’re making it a function of technology, but let’s be honest: this is a function of profitability.” We have to look at the profitability of these companies when it comes to redress. Your job—if you will forgive me for saying this—is to make sure that it costs them more money to let people be scammed than it does to stop people being scammed. If we solve that, we will have a lot fewer scams on social media and in search advertising.
Rocio Concha: I completely agree with everything that Martin says. At the moment, the provisions in the Bill for “priority illegal content” require the platforms to publish reports that say, “This is how much illegal content we are seeing on the platform, and these are the measures that we are going to take.” They are also required to have a way for users to report it and to complain when they think that the platforms are not doing the right thing. At the moment, that does not apply to fraudulent advertising, so you have an opportunity to fix that in the Bill very easily, to at least get the transparency out there. The platform has to say, “We are finding this”—that puts pressure on the platform, because it is there and is also with the regulator—“and these are the measures that we are taking.” That gives us transparency to say, “Are these measures enough?” There should also be an easy way for the user to complain when they think that platforms are not doing the right thing. It is a complex question, but there are many things in the Bill that you can improve in order to improve the situation.
Tim Fassam: I wonder if it would be useful to give the Committee a case study. Members may be familiar with London Capital & Finance. Now, London Capital & Finance is one of the most significant recent scams. It sold mini-bonds fraudulently, at a very high advertised return, which then collapsed, with individuals losing all their money.
Those individuals were compensated through two vehicles. One was a Government Bill, so they were compensated by the taxpayer. The others, because they were found to have been given financial advice despite LCF not having advice permissions or operating through a regulated product, went on to the Financial Services Compensation Scheme, which our members, among others, pay for—legitimate financial services companies pay for it. The most recent estimate is over £650 million, and the expectation is that the cost to the economy will reach £1 billion at some point over the next few years.
LCF was heavily driven by online advertising, and we would argue that the online platforms were in fact probably the only people who could have stopped it happening. They have profited from those adverts and they have not contributed anything to either of those two schemes. We would argue—possibly not for this Bill—that serious consideration should be given to the tech platforms being part of the financial services compensation scheme architecture and contributing to the costs of scams that individuals have fallen foul of, as an additional incentive for them to get on top of this problem.
Martin Lewis: That is a very important point, but I will just pick up on what Rocio was saying. One of the things that I would like to see is much more rigid requirements for how the reporting of scams is put in place, because I cannot see proper pre-vetting happening with these technology companies, but we can at least rely on social policing and the reporting of scams. There are many people who recognise a scam, just as there are many people who do not recognise a scam.
However, I also think this is a wonderful opportunity to make sure that the method, the language and the symbols used for reporting scams are universal in the UK, so that whatever site you are on, if you see an advert you click the same symbol and go through a unified process that works in the same way on every site. Being able to report a scam the same way everywhere makes it simpler, means we can train people in how to do it, and makes the processes work.
Then, of course, we have to make sure that they act on the back of reports. At the moment, the various ways reporting works, the complexity and the number of clicks you need to make mean that it is generally a lot easier to click on an advert than it is to report that an advert is a scam. With so many scams out there, I think there should be parity of ease between those two actions.
Q
Rocio Concha: There were complaints from the users. At the moment, the Bill does not provide for this in relation to fraudulent advertising, so we need to make sure that it is a requirement for the platforms to have an easy tool for people to complain and to report when they see something that is fraudulent. It is an easy fix; you can do it. The user would then have that tool, and it would also give transparency to the regulator and to organisations such as ours, so that we can see what is happening and what measures the platforms are taking.
Tim Fassam: I would agree with that. I would also highlight a particular problem that our members have flagged, and which we have flagged directly with Meta and Instagram. Within the Bill’s definition of individuals who can raise concerns about social media platforms, our members find that they fall between two stools. Quite often, people claim an association with a legitimate firm: they will have a firm’s logo or web address in their social media profile, and then, without directly claiming to be a financial adviser, they will imply an association with a legitimate financial advice firm. This happens surprisingly frequently.
Our members find it incredibly difficult to get those accounts taken down, because it is not a fraudulent account in the usual sense: the individual is not pretending to be someone else. They are not directly claiming to be an employee—they could just say they are a fan of the company—and the firm is not a direct victim of this individual. What happens is that when they report it, it goes into a volume algorithm, and only if a very large number of complaints are made does that particular site get taken down. I think that could be expanded to include complaints from individuals affected by the account, rather than only from those the account is directly pretending to be.
Mr Lewis, you were nodding.
Martin Lewis: I was nodding—I was smiling and thinking, “If it makes you feel any better, Tim, I have pictures of me that tell people to invest money that are clearly fake, because I don’t do any adverts, and it still is an absolute pain in the backside for me to get them taken down, having sued Facebook.” So, if your members want to feel any sense of comradeship, they are not alone in this; it is very difficult.
I think the interesting thing is that volumetric algorithm. Of course, we go back to the fact that these big companies like to err on the side of making money and err away from the side of protecting consumers, because those two, when it comes to scams, are diametrically opposed. The sooner we tidy it up, the better. You could have a process where once there has been a certain number of reports—I absolutely get Tim’s point that in certain cases there is not a big enough volume—the advert is taken down and then the company has to proactively decide to put it back up and effectively say, “We believe this is a valid advert.” Then the system would certainly work better, especially if you bring down the required number of reports. At the moment, I think, there tends to be an erring on the side of, “Keep it up as long as it’s making us money, unless it absolutely goes over the top.”
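A minimal Python sketch of the process described above—take an advert down automatically once a report threshold is reached, and require the company to proactively put it back up. The threshold value, the `Advert` fields and the function names are assumptions invented for illustration, not any platform’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical report threshold -- real platforms do not publish theirs.
REPORT_THRESHOLD = 5

@dataclass
class Advert:
    ad_id: str
    live: bool = True
    reports: int = 0
    reinstated_by: Optional[str] = None  # reviewer who vouched for the advert

def report_advert(ad: Advert) -> None:
    """Count a user report and take the advert down once the threshold is hit."""
    ad.reports += 1
    if ad.live and ad.reports >= REPORT_THRESHOLD:
        ad.live = False  # default to taking it down rather than keeping it earning

def reinstate_advert(ad: Advert, reviewer: str) -> None:
    """Putting the advert back up requires a person to say 'we believe this is valid'."""
    ad.reinstated_by = reviewer
    ad.live = True

# Usage sketch
ad = Advert(ad_id="ad-123")
for _ in range(REPORT_THRESHOLD):
    report_advert(ad)
assert not ad.live  # down after five reports, pending an explicit human decision
```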
Many tech experts have shown me adverts with my face in them on various social media platforms. They say it would take them less than five minutes to write a program to screen them out, but those adverts continue to appear. We just have to be conscious here that there is often a move towards self-regulation. Let me be plain, as I am giving evidence. I do not trust any of these companies to have the user and the consumer interest at heart when it comes to their advertising; what they have at heart is their own profits, so if we want to stop them, we have to make this Bill robust enough to stop them, because that is the only way it will stop. Do not rely on them trying to do good, because they are trying to make profit and they will err on the side of that over the side of protecting individuals from scam adverts.
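A minimal Python sketch of the kind of screening program referred to above, under the assumption that adverts carry advertiser and text fields: block any advert that trades on a protected public figure’s name unless the advertiser is on an explicit allow list. The names, the lists and the `Ad` fields are invented for the example; real screening would also need image matching and human review.

```python
from dataclasses import dataclass

# Hypothetical lists for illustration only.
PROTECTED_NAMES = {"martin lewis", "moneysavingexpert"}
APPROVED_ADVERTISERS = {"moneysavingexpert.com"}  # advertisers allowed to use those names

@dataclass
class Ad:
    advertiser: str
    text: str

def should_block(ad: Ad) -> bool:
    """Block any advert that uses a protected name from an unapproved advertiser."""
    text = ad.text.lower()
    mentions_protected = any(name in text for name in PROTECTED_NAMES)
    return mentions_protected and ad.advertiser.lower() not in APPROVED_ADVERTISERS

# Usage sketch
ads = [
    Ad("crypto-scheme.example", "Martin Lewis says this investment will make you rich"),
    Ad("moneysavingexpert.com", "Martin Lewis's weekly money tips"),
]
print([should_block(ad) for ad in ads])  # [True, False]
```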
Q
Frances Haugen: I want to reiterate that AI struggles to do even really basic tasks. For example, Facebook’s own document said that it only took down 0.8% of violence-inciting content. Let us look at a much broader category, such as content of democratic importance—if you include that in the Bill, I guarantee you that the platforms will come back to you and say that they have no idea how to implement the Bill. There is no chance that AI will do a good job of identifying content of democratic importance at any point in the next 30 years.
The second question is about carve-outs for media. At a minimum, we need to greatly tighten the standards for what counts as a publication. Right now, I could get together with a friend and start a blog and, as citizen journalists, get the exact same protections as an established, thoughtful, well-staffed publication with an editorial board and other forms of accountability. Time and again, we have seen countries such as Russia use small media outlets as part of their misinformation and disinformation strategies. At a minimum, we need to really tighten that standard.
We have even seen situations where they will use very established publications, such as CNN. They will take an article that says, “Ukrainians destroyed a bunch of Russian tanks,” and intentionally have their bot networks spread that out. They will just paste the link and say, “Russia destroyed a bunch of tanks.” People briefly glance at the snippet, they see the picture of the tank, they see “CNN”, and they think, “Ah, Russia is winning.” We need to remember that even real media outlets can be abused by our enemies to manipulate the public.
Q
Frances Haugen: It is important for people to understand what anonymity really is and what it would really mean to have confirmed identities. Platforms already have a huge amount of data on their users. We bleed information about ourselves on to these platforms. It is not about whether the platforms could identify people to the authorities; it is that they choose not to do that.
Secondly, if we did, say, mandate IDs, platforms would have two choices. The first would be to require IDs, so that every single user on their platform would have to have an ID that is verifiable via a computer database—you would have to show your ID and the platform would confirm it off the computer. Platforms would suddenly lose users in many countries around the world that do not have well-integrated computerised databases. The platforms will come back to you and say that they cannot lose a third or half of their users. As long as they are allowed to have users from countries that do not have those levels of sophisticated systems, users in the UK will just use VPNs—a kind of software that allows you to kind of teleport to a different place in the world—and pretend to be users from those other places. Things such as ID verification are not very effective.
Lastly, we need to remember that there is a lot of nuance in things like encryption and anonymity. As a whistleblower, I believe there is a vital need for having access to private communications, but I believe we need to view these things in context. There is a huge difference between, say, Signal, which is open source and anyone in the world can read the code for it—the US Department of Defence only endorses Signal for its employees, because it knows exactly what is being used—and something like Messenger. Messenger is very different, because we have no idea how it actually works. Facebook says, “We use this protocol,” but we cannot see the code; we have no idea. It is the same for Telegram; it is a private company with dubious connections.
If people think that they are safe and anonymous, but they are not actually anonymous, they can put themselves at a lot of risk. The secondary thing is that when we have anonymity in context with more sensitive data—for example, Instagram and Facebook act like directories for finding children—that is a very different context for having anonymity and privacy from something like Signal, where you have to know someone’s phone number in order to contact them.
These things are not cut-and-dried, black-or-white issues. I think it is difficult to have mandatory identity. I think it is really important to have privacy. We have to view them in context.
Q
Frances Haugen: I think that shows a commendable level of chutzpah. Researchers have been trying to get really basic datasets out of Facebook for years. When I talk about a basic dataset, it is things as simple as, “Just show us the top 10,000 links that are distributed in any given week.” When you ask for information like that in a country like the United States, no one’s privacy is violated: every one of those links will have been viewed by hundreds of thousands, if not millions of people. Facebook will not give out even basic data like that, even though hundreds if not thousands of academics have begged for this data.
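To make concrete what a “basic dataset” like that could look like: below is a minimal Python sketch, under an assumed and much-simplified share-log format, of aggregating the most-shared links in a given week. The field names and log structure are invented for illustration and are not Facebook’s schema.

```python
from collections import Counter
from datetime import date

# Assumed, simplified share log: (day, url) pairs. Real platform data would be far richer.
share_log = [
    (date(2022, 5, 23), "https://example.com/article-a"),
    (date(2022, 5, 23), "https://example.com/article-b"),
    (date(2022, 5, 24), "https://example.com/article-a"),
    # ... millions more rows in practice
]

def top_links_for_week(log, week_start: date, week_end: date, n: int = 10_000):
    """Count how often each URL was shared in the window and return the n most shared."""
    counts = Counter(url for day, url in log if week_start <= day <= week_end)
    return counts.most_common(n)

print(top_links_for_week(share_log, date(2022, 5, 23), date(2022, 5, 29), n=10))
```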
The idea that they have worked in close co-operation with researchers is a farce. The only way that they are going to give us even the most basic data that we need to keep ourselves safe is if it is mandated in the Bill. We need to not wait two years after the Bill passes—and remember, it does not even say that it will happen; Ofcom might say, “Oh, maybe not.” We need to take a page from the Digital Services Act and say, “On the day that the Bill passes, we get access to data,” or, at worst, “Within three months, we are going to figure out how to do it.” It needs to be not, “Should we do it?” but “How will we do it?”