Online Safety Bill (Fourth sitting) Debate
Good afternoon, ladies and gentlemen. We are now sitting in public and the proceedings are being broadcast. Thank you all for joining us.
We will now hear oral evidence from Stephen Almond, the director of technology and innovation in the Information Commissioner’s Office. Mr Almond, thank you for coming. As I have introduced you, I am not going to ask you to introduce yourself, so we can go straight into the questions. I call the shadow Front-Bench spokesman.
Q
Stephen Almond: Let me start by saying that the ICO warmly welcomes the Bill and its mission to make the UK the safest place in the world to be online. End-to-end encryption supports the security and privacy of online communication and keeps people safe online, but the same characteristics that create a private space for the public to communicate can also provide a safe harbour for more malicious actors, and there are valid concerns that encrypted channels may be creating spaces where children are at risk.
Our view is that the Bill has the balance right. All services in scope, whether encrypted or not, must assess the level of risk that they present and take proportionate action to address it. Moreover, where Ofcom considers it necessary and proportionate, it will have the power to issue technology notices to regulated services to require them to deal with child sexual abuse and exploitation material. We think this presents a proportionate way of addressing the risk that is present on encrypted channels.
It is worth saying that I would not favour provisions that sought to introduce some form of outright ban on encryption in a generalised way. It is vital that the online safety regime does not seek to trade off one sort of online safety risk for another. Instead, I urge those advancing more fundamentalist positions around privacy or safety to move towards the question of how we can incentivise companies to develop technological innovation that will enable the detection of harmful content without compromising privacy. It is one reason why the ICO has been very pleased to support the Government’s safety tech challenge, which has really sought to incentivise the development of technological innovation in this area. Really what we would like to see is progress in that space.
Q
Stephen Almond: First and foremost, it is incredibly important that the Bill has the appropriate flexibility to enable Ofcom as the online safety regulator to be agile in responding to technological advances and novel threats in this area. I think the question of VPNs is ultimately going to be one that Ofcom and the regulated services themselves are going to have to work through. VPNs play an important role in supporting a variety of different functions, such as the security of communications, but ultimately it is going to be critical to make sure that services are able to carry out their duties. That is going to require some questions to be asked in this area.
Q
Stephen Almond: Transparency is a key foundation of data protection law in and of itself. As the regulator in this space, I would say that there is a significant emphasis within the data protection regime on ensuring that companies are transparent about the processing of personal data that they undertake. We think that that provides proportionate safeguards in this space. I would not recommend an amendment to the Bill on this point, because I would be keen to avoid duplication or an overlap between the regimes, but it is critical; we want companies to be very clear about how people’s personal data is being processed. It is an area that we are going to continue to scrutinise.
May I ask a supplementary to that before I come on to my main question?
Moving, I hope, seamlessly on, we are now going to hear oral evidence from Sanjay Bhandari, who is the chairman of Kick It Out, and—as the Committee agreed this morning—after Tuesday’s technical problems, if we do not have further technical problems, we are going to hear from Lynn Perry from Barnardo’s, again by Zoom. Is Lynn Perry on the line? [Interruption.] Lynn Perry is not on the line. We’ve got pictures; now all we need is Lynn Perry in the pictures.
I am afraid we must start, but if Lynn Perry is able to join, we will be delighted to hear from her. We have Mr Bhandari, so we will press on, because we are very short of time as it is. We hope that Lynn Perry will join us.
Q
Sanjay Bhandari: I think you would have to ask them why it has not been tackled. My perception of their reaction is that it has been a bit like the curate’s egg: it has been good in parts and bad in parts, and maybe like the original meaning of that allegory, it is a polite way of saying something is really terrible.
Before the abuse from the Euros, actually, we convened a football online hate working group with the social media companies. They have made some helpful interventions: when I gave evidence to the Joint Committee, I talked about wanting to have greater friction in the system, and they are certainly starting to do that with things like asking people, “Do you really want to send this?” before they post something. We understand that that is having some impact, but of course it is against the backdrop of a growing number of trolls online. Also, we have had some experiences where we make a suggestion—around verification, for instance—and introduce third-party companies to social media companies, and very often the response we get is different between London and California. London will say “maybe”, and California then says “no”. I have no reason to distrust the people we meet locally here, but I do not think they always have the power to actually help and respond. The short answer is that there are certainly good intentions from the people we meet locally and there is some action. However, the reality is that we still see quite a lot of content coming through.
Q
Sanjay Bhandari: I think we need to work that through. I am sorry that my colleagues from the Premier League and the Football Association could not be here today; I did speak to them earlier this week, but unfortunately they have some clashes. One thing we are talking about is how we map this new framework against existing content. We have a few hundred complaints that the Premier League investigates, and a few thousand items that are proactively identified by Signify, working with us and the Professional Footballers’ Association. Our intention is to take that data, map it to the new framework and ask, “Is this caught? What is caught by the new definition of harm? What is caught by priority illegal content? What is caught by the new communication offences, and what residue in that content might be harmful to adults?” We can then peg that dialogue to real-world content rather than theoretical debate. We know that a lot of the complaints we receive relate to direct messaging, so we are going to do that exercise. It may take us a little bit of time, but we are going to do it.
Lynn Perry is on the line, but we have lost her for the moment. I am afraid we are going to have to press on.
We will hear oral evidence first from Eva Hartshorn-Sanders, who is the head of policy at the Centre for Countering Digital Hate. We shall be joined in due course by Poppy Wood. Without further ado, I call the shadow Minister.
Q
Eva Hartshorn-Sanders: That is obviously an important area. The main mechanisms to look at are the complaints pathways: ensuring that when reports are made, action is taken, and that this is included in risk assessments as well. In our “Hidden Hate” report, we found that 90% of misogynist abuse—which included quite serious sexual harassment and abuse, videos and death threats—was not acted on by Instagram, even when we used the current reporting pathways on behalf of the complainant. This is an important area.
Q
Eva Hartshorn-Sanders: There has to be human intervention as part of that process as well. Whatever system is in place—the relationship between Ofcom and the provider is going to vary by platform, and possibly by search provider too—if you are making those sorts of decisions, you want the process to be adequately resourced. That is what we are saying is not happening at the moment, partly because there is not yet the motivation or the incentive for them to do anything differently. They are doing the minimum; what they say they are going to do often comes out through press releases or policies, and then it is not followed through.
Q
Eva Hartshorn-Sanders: I think there is a role for independent civil society, working with the regulator, to hold those companies to account and to be accessing that data in a way that can be used to show how they are performing against their responsibilities under the Bill. I know Poppy from Reset.tech will talk to this area a bit more. We have just had a global summit on online harms and misinformation. Part of the outcome of that was looking at a framework for how we evaluate global efforts at legislation and the transparency of algorithms and rules enforcement, and the economics that are driving online harms and misinformation. That is an essential part of ensuring that we are dealing with the problems.
May I say, for the sake of the record, that we have now been joined by Poppy Wood, the UK director of Reset.tech? Ms Wood, you are not late; we were early. We are trying to make as much use as we can of the limited time. I started with the Opposition Front Bencher. If you have any questions for Poppy Wood, go ahead.
Q
Poppy Wood: I did not hear your first set of questions—I apologise.
That is fine. I will just ask you what you think the impact is of the decision to remove misinformation and disinformation from the scope of the Bill, particularly in relation to state actors?
Poppy Wood: Thank you very much, and thank you for having me here today. There is a big question about how this Bill tackles co-ordinated state actors—co-ordinated campaigns of disinformation and misinformation. It is a real gap in the Bill. I know you have heard from Full Fact and other groups about how the Bill can be beefed up for mis- and disinformation. There is the advisory committee, but I think that is pretty weak, really. The Bill is sort of saying that disinformation is a question that we need to explore down the line, but we all know that it is a really live issue that needs to be tackled now.
First of all, I would make sure that civil society organisations are on that committee and that its report is brought forward in months, not years, but then I would say there is just a real gap about co-ordinated inauthentic behaviour, which is not referenced. We are seeing a lot of it live with everything that is going on with Russia and Ukraine, but it has been going on for years. I would certainly encourage the Government to think about how we account for some of the risks that the platforms promote around co-ordinated inauthentic behaviour, particularly with regard to disinformation and misinformation.
Q
Poppy Wood: Absolutely, and I agree with what was said earlier, particularly by groups such as HOPE not hate and Antisemitism Policy Trust. There are a few ways to do this, I suppose. As we are saying, at the moment the small but high-risk platforms just are not really caught in the current categorisation of platforms. Of course, the categories are not even defined in the Bill; we know there are going to be categories, but we do not know what they will be.
I suppose there are different ways to do this. One is to go back to where this Bill started, which was not to have categories of companies at all but to have a proportionality regime, where depending on your size and your functionality you had to account for your risk profile, and it was not set by Ofcom or the Government. The problem of having very prescriptive categories—category 1, category 2A, category 2B—is, of course, that it becomes a race to the bottom in getting out of these regulations without having to comply with the most onerous ones, which of course are category 1.
There is also a real question about search. I do not know how they have wriggled out of this, but it was one of the biggest surprises in the latest version of the Bill that search had been given its own category without many obligations around adult harm. I think that really should be revisited. All the examples that were given earlier today are absolutely the sort of thing we should be worrying about. If someone can google a tractor in their workplace and end up looking at a dark part of the web, there is a problem with search, and I think we should be thinking about those sorts of things. Apologies for the example, but it is a really, really live one and it is a really good thing to think about how search promotes these kinds of content.
Q
Eva Hartshorn-Sanders: Are you specifically asking about the takedown notices and the takedown powers?
Q
Owen Meredith: You may be aware that we submitted evidence to the Joint Committee that did prelegislative scrutiny of the draft Bill, because although the Government’s stated intention is to keep content from recognised news media publishers, whom I represent, outside the scope of the Bill, we do not believe that the drafting, as it was and still is, achieves that. Ministers and the Secretary of State have confirmed, both in public appearances and on Second Reading, that they wish to table further amendments to achieve the aim the Government have set out, which is to ensure that content from recognised news publishers is fully out of scope of the Bill. The drafting needs to go further, but I understand that there will be amendments coming before you at some point to achieve that.
Q
Owen Meredith: I would like to see a full exemption for recognised news publisher content.
Q
Matt Rogerson: Yes. I would step back a bit and point to the evidence that a few of your witnesses gave today and Tuesday. I think Fair Vote gave evidence on this point. At the moment, our concern is that we do not know what the legal but harmful category of content that will be included in the Bill will look like. That is clearly going to be done after the event, through codes of practice. There is definitely a danger that news publisher content gets caught by the platforms imposing that. The reason for having a news publisher exemption is to enable users of platforms such as Facebook, Twitter and others to access the same news as they would via search. I agree with Owen’s point. I think the Government are going in the right direction with the exemption for broadcasters such as the BBC, The Times and The Guardian, but we would like to see it strengthened a bit to ensure a cast-iron protection.
Q
Matt Rogerson: I think it is quite difficult for platforms to interpret that. It is a relatively narrow version of what journalism is—it is narrower than the article 10 description of what journalism is. The legal definitions of journalism in the Official Secrets Act and the Information Commissioner’s Office journalism code are slightly more expansive and cover not just media organisations but acts of journalism. Gavin Millar has put together a paper for Index on Censorship, in which he talks about that potentially being a way to expand the definition slightly.
The challenge for the platforms is, first, that they have to take account of journalistic content, and there is not a firm view of what they should do with it. Secondly, defining what a piece of journalism or an act of journalism is takes a judge, generally with a lot of experience. Legal cases involving the media are heard by a specific bench of judges—the media and communications division—who opine on what is and is not an act of journalism. There is a real challenge, which is that you are asking the platforms to—one assumes—use machine learning tools to start with to identify what is a potential act of journalism. An individual, whether they are based in California or, more likely, outsourced via an Accenture call centre, then determines whether it is an act of journalism and what to do with it. That places quite a lot of responsibility on the platforms. Again, I would come back to the fact that if the Bill were stripped back to focus on illegal content, rather than legal but harmful content, you would have fewer of these situations where there is concern that that sort of content is going to be caught.
Q
Matt Rogerson: Yes, a few. The first thing that is missing from the Bill is a focus on advertising. The reason we should focus on advertising is that that is why a lot of people get involved in misinformation. Ad networks at the moment are able to channel money to “unknown” sites in ways that mean that disinformation or misinformation is highly profitable. For example, a million dollars was spent via Google’s ad exchanges in the US; the second biggest recipient of that million dollars was “Unknown sites”—sites that do not categorise themselves as doing anything of any purpose. You can see how the online advertising market is channelling cash to the sort of sites that you are talking about.
In terms of state actors, and how they relate to the definition, the definition is set out quite broadly in the Bill, and it is more lengthy than the definition in the Crime and Courts Act 2013. On top of that definition, Ofcom would produce guidance, which is subject to a full and open public consultation, which would then work out how you are going to apply the definition in practice. Even once you have that guidance in place, there will be a period of case law developing where people will appeal to be inside of that exemption and people will be thrown out of that exemption. Between the platforms and Ofcom, you will get that iteration of case law developing. So I suppose I am slightly more confident that the exemption would work in practice and that Ofcom could find a workable way of making sure that bad actors do not make use of it.
Mr Meredith, do you wish to add to that?
Owen Meredith: No, I would echo almost entirely what Matt has said on that. I know you are conscious of time.
Q
Matt Rogerson: On the Russia Today problem, I think Russia Today had a licence from Ofcom, so the platforms probably took their cue from the fact that Russia Today was beamed into British homes via Freeview. Once that changed, the position of having their content available on social media changed as well. Ultimately, if it was allowed to go via broadcast, if it had a broadcast licence, I would imagine that social media companies took that as meaning that it was a—
Q
Matt Rogerson: I think that would be subject to the guidance that Ofcom creates and the consultation on that guidance. I do not believe that Russia Today would be allowed under the definitions. If it is helpful, I could write to you to set out why.
We will now hear from Tim Fassam, the director of government relations and policy at PIMFA, the Personal Investment Management & Financial Advice Association, and from Rocio Concha, director of policy and advocacy at Which? We will be joined by Martin Lewis, of MoneySavingExpert, in due course. Thank you to the witnesses for joining us. I call the Opposition Front Bench.
Q
Rocio Concha: This Bill is very important in tackling fraud. It is very important for Which? We were very pleased when fraud was included to tackle the issue that you mentioned and also when paid-for advertising was included. It was a very important step, and it is a very good Bill, so we commend DCMS for producing it.
However, we have found some weakness in the Bill, and those can be solved with very simple amendments, which will have a big impact on the Bill in terms of achieving its objective. For example, at the moment in the Bill, search engines such as Google and Yahoo! are not subject to the same duties in terms of protecting consumers from fraudulent advertising as social media platforms are. There is no reason for Google and Yahoo! to have weaker duties in the Bill, so we need to solve that.
The second area is boosted content. Boosted content is user-generated content, but it is also advertising. In the current definition of fraudulent advertising in the Bill, boosted content is not covered. For example, if a criminal sets up a Facebook page and starts publishing things about fake investments, and then pays Facebook to boost that content in order to reach more people, the Bill, at the moment, does not cover that fraudulent advertising.
The last part is that, at the moment, the duties that apply to priority illegal content—the risk assessments that platforms need to carry out, the transparency reporting in which they basically say, “We are finding this illegal content and this is what we are doing about it,” and the requirement to give users a way to report illegal content or complain about something the platform is not doing to tackle it—do not apply to fraudulent advertising. We think they need to.
Paid-for advertising is the most expensive way that criminals have to reach out to a lot of people. The good news, as I said before, is that this can be solved with very simple amendments to the Bill. We will send you suggestions for those amendments and, if we fix the problem, we think the Bill will really achieve its objective.
One moment—I think we have been joined by Martin Lewis on audio. I hope you can hear us, Mr Lewis. You are not late; we started early. I will bring you in as soon as we have you on video, preferably, but otherwise on audio.
Tim Fassam: I would echo everything my colleague from Which? has said. The industry, consumer groups and the financial services regulators are largely in agreement. We were delighted to see fraudulent advertising and wider issues of economic crime included in the Bill when they were not in the initial draft. We would also support all the amendments that Which? are putting forward, especially the equality between search and social media.
Our members compiled a dossier of examples of fraudulent activity, and the overwhelming majority of the fraudulent adverts were on search rather than social media. We would also argue that search is potentially higher risk, because the act of searching is an indication that you may be ready to take action. If you are searching “invest my pension”, hopefully you will come across Martin’s site or one of our members’ sites, but if you come across a fraudulent advert in that moment, you are more likely to fall foul of it.
We would also highlight two other areas where we think the Bill needs further work. These are predominantly linked to the interaction between Ofcom, the police and the Financial Conduct Authority, because the definitions of fraudulent adverts and fraudulent behaviour are technical and complex. It is not reasonable to expect Ofcom to be able to ascertain whether an advert or piece of content is in breach of the Financial Services and Markets Act 2000; that is the FCA’s day job. Is it fraud? That is Action Fraud’s and the police’s day job. We would therefore suggest that the Bill go as far as allowing the police and the FCA to direct Ofcom to have content removed, and creating an MOU that enables Ofcom to refer things to the FCA and the police for their expert analysis of whether it breaches those definitions of fraudulent adverts or fraudulent activity.
Q
Rocio Concha: Yes. Open-display advertising is not part of the Bill. That also needs to be tackled. I think the online advertising programme should be considered, to tackle this issue. I agree with you: this is a very important step in the right direction, and it will make a huge difference if we fix this small weakness in terms of the current scope. However, there are still areas out there that need to be tackled.
Mr Lewis, I am living in hope that we may be able to see you soon—although that may be a forlorn hope. However, I am hoping that you can hear us. Do you want to come in and comment at all at this point? [Interruption.] Oh, we have got you on the screen. Thank you very much for joining us.
Martin Lewis: Hurrah. I am so sorry, everybody—for obvious reasons, it has been quite a busy day on other issues for me, so you’ll forgive me.
I can’t think why it has been.
Martin Lewis: I certainly agree with the other two witnesses. Those three issues are all very important ones to bring in. From a wider perspective, I was vociferously campaigning to have scam adverts brought within the scope of the Online Safety Bill. I am delighted that that has happened, but let us be honest among ourselves: it is far from a panacea.
Adverts and scams come in so many places—on social media, in search engines and in display advertising, which is very common and is not covered. While I accept that the online advertising programme will address that, if I had my way I would be bringing it all into the Online Safety Bill. However, the realpolitik is that that is not going to happen, so we have to have the support in the OAP coming later.
It is also worth mentioning just for context that, although I think there is little that we can do about this—or it would take brighter people than me—one of the biggest routes for scams is email. Everybody is being emailed—often with my face, which is deeply frustrating. We have flaccid policing of what is going on on social media, and I hope the Bill will improve it, but at least there is some policing, even though it is flaccid, and it is the same on search engines. There is nothing on email, so whatever we do in this Bill, it will not stop scams reaching people. There are many things that would improve that, certainly including far better resourcing for policing so that people who scam individuals get at least arrested and possibly even punished and sentenced. Of course, that does not happen at the moment, because scamming is a crime that you can undertake with near impunity.
There is a lot that needs to be done to make the situation work, but in general the moves in the Online Safety Bill to include scam advertising are positive. I would like to see search engines and display advertising brought into that. I absolutely support the call for the FCA to be involved, because what is and is not a scam can certainly be complicated. There are more obvious ones and less obvious ones. We saw that with the sale of bonds at 5% or 6%, which pretend to be deposit bonds but are nothing of the sort. That might get a bit more difficult for Ofcom, and it would be great to see the regulator involved. I support all the calls of the other witnesses, but we need to be honest with ourselves: even if we do all that, we are still a long way from seeing the back of all scam adverts and all scams.
Q
Rocio Concha: To be honest, in this area we do not have any specific proposals. I completely agree with you that this is an area that needs to be tackled, but I do not have a specific proposal for this Bill.
Tim Fassam: This is an area that we have raised with the Financial Conduct Authority—particularly the trend for financial advice on TikTok and adverts for non-traditional investments, such as whisky barrels or wine, which do not meet the standards required by the FCA for other investment products. That is also true of a number of cryptocurrency adverts and formats. We have been working with the FCA to try to identify ways to introduce more consistency in the application of the rules. There has been a welcome expansion by the Treasury on the promotion of high-risk investments, which is now a regulated activity in and of itself.
I go back to my initial point. We do not believe that there is any circumstance in which the FCA would want content in any place taken down where that content should not be removed, because they are the experts in identifying consumer harm in this space.
Q
Martin Lewis: I still believe that most of this comes down to an issue of policing. The rules are there and are not being enforced strongly enough. The people who have to enforce the rules are not resourced well enough to do that. Therefore, you get people who are able to work around the rules with impunity.
Advertising in the UK, especially online, has been the wild west for a very long time, and it will continue to be so for quite a while. The Advertising Standards Authority is actually better at dealing with the influencer issue, because of course it is primarily strong at dealing with people who listen to the Advertising Standards Authority. It is not very good at dealing with criminal scammers based outside the European Union, who frankly cannot be bothered and will not reply—they are not going to stop—but it is better at dealing with influencers who have a reputation.
We all know it is still extremely fast and loose out there. We need to adequately resource it; putting rules and laws in place is only one step. Resourcing the policing and the execution of those rules and laws is a secondary step, and I have doubts that we will ever quite get there, because resources are always squeezed and put on the back burner.
Thank you. Do I have any questions from Government Back Benchers? No. Does anyone have any further questions?
Yes, I do. If nobody else has questions, I will have another bite of the cherry.
Q
Martin Lewis: As you will know, I had to sue Facebook for defamation, which is a ridiculous thing to do in order to stop scam adverts. I was unable to report the scam adverts to the police, because I had not been scammed—even though it was my face that was in them—and many victims were not willing to come forward. That is a rather bizarre situation, and we got Facebook to put forward £3 million to set up Citizens Advice Scam Action—that is what I settled for, as well as a scam ad reporting tool.
There are two levels here. The problem is who is at fault. Of course, those mainly at fault for scams are the scammers. They are criminals and should be prosecuted, but not enough of them are. There are times when it is the bank’s fault. If a company has not put proper precautions in place, and people have been scammed because it has put up adverts or posts that it should have prevented, it absolutely needs to take some responsibility for that. I think you will struggle to have a direct redress system put in place. I would like to see it, but it would be difficult.
I am worried that the £3 million for Citizens Advice Scam Action, which was at least meant to provide help and support for victims of scams, is going to run out. I have not seen any more money coming from Facebook, Google or any of the other big players out there. If we are not going to fund direct redress, we could at least make sure that they fund a collective form of redress and help for the victims of scams, as a bare minimum. It is very strange that these firms go so quiet on this; what they say is, “We are doing everything we can.”
From my meetings with these firms—these are meetings with lawyers in the room, so I have to be slightly careful—one of the things that I would warn the Committee about is that they tend to get you in and give you a presentation on all the technological reasons why they cannot stop scam adverts. My answer to them after about 30 seconds, having stopped what was meant to be an hour-long presentation, is, “I have not said that you need a technological solution. I have said you need a solution. If the answer to stopping scam adverts, and to stopping scams, is that you have to pre-vet every single advert, as old-fashioned media did, and every advert that you put up has to have been vetted by a human being, so be it. You are making it a function of technology, but let’s be honest: this is a function of profitability.” We have to look at the profitability of these companies when it comes to redress. Your job—if you forgive me saying this—is to make sure that it costs them more money to let people be scammed than it does to stop people being scammed. If we solve that, we will have a lot fewer scams on social media and in search advertising.
Rocio Concha: I completely agree with everything that Martin says. At the moment, the provisions in the Bill for “priority illegal content” require the platforms to publish reports that say, “This is how much illegal content we are seeing on the platform, and these are the measures that we are going to take.” They are also required to have a way for users to report it and to complain when they think that the platforms are not doing the right thing. At the moment, that does not apply to fraudulent advertising, so you have an opportunity to fix that in the Bill very easily, to at least get the transparency out there. The platform has to say, “We are finding this”—that puts pressure on the platform, because it is there and is also with the regulator—“and these are the measures that we are taking.” That gives us transparency to say, “Are these measures enough?” There should also be an easy way for the user to complain when they think that platforms are not doing the right thing. It is a complex question, but there are many things in the Bill that you can improve in order to improve the situation.
Tim Fassam: I wonder if it would be useful to give the Committee a case study. Members may be familiar with London Capital & Finance. Now, London Capital & Finance is one of the most significant recent scams. It sold mini-bonds fraudulently, at a very high advertised return, which then collapsed, with individuals losing all their money.
Those individuals were compensated through two vehicles. One was a Government Bill; so, they were compensated by the taxpayer. The others, because they were found to have been given financial advice despite LCF not having advice permissions or operating through a regulated product, went on to the Financial Services Compensation Scheme, which, among others, our members pay for; legitimate financial services companies pay for it. The most recent estimate is over £650 million. The expectation is that that will reach £1 billion at some point over the next few years, in terms of cost to the economy.
LCF was heavily driven by online advertising, and we would argue that the online platforms were in fact probably the only people who could have stopped it happening. They have profited from those adverts and they have not contributed anything to either of those two schemes. We would argue—possibly not for this Bill—that serious consideration should be given to the tech platforms being part of the financial services compensation scheme architecture and contributing to the costs of scams that individuals have fallen foul of, as an additional incentive for them to get on top of this problem.
Martin Lewis: That is a very important point, but I will just pick up on what Rocio was saying. One of the things that I would like to see is much more rigid requirements for how the reporting of scams is put in place, because I cannot see proper pre-vetting happening with these technology companies, but we can at least rely on social policing and the reporting of scams. There are many people who recognise a scam, just as there are many people who do not recognise a scam.
However, I also think this is a wonderful opportunity to make sure that the method, the language and the symbols used for reporting scams are universal in the UK, so that whatever site you are on, if you see an advert you click the same symbol and go through a unified process that works the same way on every site. That makes it simpler, means we can train people in how to do it, and makes the processes work.
Then, of course, we have to make sure that they act on the back of reports. But the various ways a scam can be reported, the complexity, and the number of clicks you need to make mean that it is generally a lot easier to click on an advert than it is to report that advert as a scam. With so many scams out there, I think there should be parity of ease between those two actions.
Q
Rocio Concha: There were complaints from the users. At the moment, the Bill does not provide for this in relation to fraudulent advertising, so we need to make sure that it is a requirement for the platforms to have an easy tool for people to report and complain when they see something that is fraudulent. At the moment, the Bill does not do that. It is an easy fix; you can do it. Then the user will have that tool. It would also give transparency to the regulator and to organisations such as ours, so that we can see what is happening and what measures the platforms are taking.
Tim Fassam: I would agree with that. I would also highlight a particular problem that our members have flagged, and that we have flagged directly with Meta and Instagram. Under the Bill’s definition of individuals who can raise concerns about social media platforms, our members find they fall between two stools, because quite often what is happening is that people are claiming an association with a legitimate firm. They will have a firm’s logo or web address in their social media profile, and then they will not directly claim to be a financial adviser but will imply an association with a legitimate financial advice firm. This happens surprisingly frequently.
Our members find it incredibly difficult to get those accounts taken down, because it is not a fraudulent account in the usual sense: the individual is not pretending to be someone else, and the firm is not the person being impersonated. The individual is not directly claiming to be an employee; they could just say they are a fan of the company. And the firm is not a direct victim of this individual. What happens is that when our members report the account, it goes into a volume algorithm, and only if a very large number of complaints are made does that particular account get taken down. I think the definition could be expanded to include complaints from those affected by the account, rather than only those the account is directly pretending to be.
We now have Frances Haugen, a former Facebook employee. Thank you for joining us.
Q
Frances Haugen: Thank you so much for inviting me.
No problem. Could you give us a brief overview of how, in your opinion, platforms such as Meta will be able to respond to the Bill if it is enacted in its current form?
Frances Haugen: There are going to be some pretty strong challenges in implementing the Bill as it is currently written. I want to be really honest with you about the limitations of artificial intelligence. We call it artificial intelligence, but people who actually build these systems call it machine learning, because it is not actually intelligent. One of the major limitations in the Bill is that there are carve-outs, such as “content of democratic importance”, that computers will not be able to distinguish. That might have very serious implications. If the computers cannot differentiate between what is and is not hate speech, imagine a concept even more ambiguous that requires even more context, such as defining what is of democratic importance. If we have carve-outs like that, it may actually prevent the platforms from doing any content moderation, because they will never know whether a piece of content is safe or not safe.
Q
Frances Haugen: I think it is unacceptable that large corporations such as this do not answer very basic questions. I guarantee you that they know exactly how many moderators they have hired—they have dashboards to track these numbers. The fact that they do not disclose those numbers shows why we need to pass laws to have mandatory accountability. The role of moderators is vital, especially for things like people questioning judgment decisions. Remember, no AI system is going to be perfect, and one of the major ways people can have accountability is to be able to complain and say, “This was inaccurately judged by a computer.” We need to ensure that there is always enough staffing and that moderators can play an active role in this process.
Q
Frances Haugen: All industries that live in democratic societies must live within democratic processes, so I do believe that it is absolutely essential that we the public, through our democratic representatives like yourself, have mandatory transparency. The only two other paths I currently see towards getting any transparency out of Meta—because Meta has demonstrated that it does not want to give even the slightest slivers of data, for example how many moderators there are—are via ESG, where we can threaten them with divestment by saying, “Prosocial companies are transparent with their data,” and via litigation. In the United States, sometimes we can get data out of these companies through the discovery process. If we want consistent and guaranteed access to data, we must put it in the Bill, because those two routes are probabilistic—we cannot ensure that we will get a steady, consistent flow of data, which is what we need to have these systems live within a democratic process.
Q
Frances Haugen: I am not well versed on the exact provisions in the Bill regarding child safety. What I can say is that one of the most important things that we need to have in there is transparency around how the platforms in general keep children under the age of 13 off their systems—transparency on those processes—because we know that Facebook is doing an inadequate job. That is the single biggest lever in terms of child safety.
I have talked to researchers at places like Oxford and they talk about how, with social media, one of the critical windows is when children transition through puberty, because they are more sensitive on issues, they do not have great judgment yet and their lives are changing in really profound ways. Having mandatory transparency on what platforms are doing to keep kids off their platforms, and the ability to push for stronger interventions, is vital, because keeping kids off them until they are at least 13, if not 16, is probably the biggest single thing we can do to move the ball down the field for child safety.