Online Safety Bill (Fourth sitting) Debate
Public Bill Committees

Q
Stephen Almond: Thank you very much. I will start by explaining the Digital Regulation Cooperation Forum. It is a voluntary, not statutory, forum that brings together ourselves, Ofcom, the Competition and Markets Authority and the Financial Conduct Authority—some of the regulators with the greatest interest in digital regulation—to make sure that we have a coherent approach to the regulation of digital services in the interests of the public and indeed the economy.
We are brought together through our common interest. We do not require a series of duties or statutory frameworks to make us co-operate, because the case for co-operation is very, very clear. We will deliver better outcomes by working together and by joining up where our powers align. I think that is what you are seeing in practice in some of the work we have done jointly—for example, around the implementation of the children’s code alongside Ofcom’s implementation of the video-sharing platform regime. A joined-up approach to questions about, for example, how you assure the age of children online is really important. That gives me real confidence in reassuring the Committee that the ICO, Ofcom and other digital regulators will be able to take a very joined-up approach to regulating in the context of the new online safety regime.
Q
Stephen Almond: In our view, the Bill strikes an appropriate balance between privacy and online safety. The duties in the Bill should leave service providers in no doubt that they must comply with data protection law, and that they should guard against unwarranted intrusion of privacy. In my discourse with firms, I am very clear that this is not a trade-off between online safety and privacy: it is both. We are firmly expecting that companies take that forward and work out how they are going to adopt both a “privacy by design” and a “safety by design” approach to the delivery of their services. They must deliver both.
Q
Stephen Almond: In brief, yes. We feel that the Bill has been designed to work alongside data protection law, for which we remain the statutory regulator, but with appropriate mechanisms for co-operation with the ICO—so, with this series of consultation duties where codes of practice or guidance that could be issued by Ofcom may have an impact on privacy. We think that is the best way of assuring regulatory coherence in this area.
Mr Almond, we are trying to get a pint into a half-pint pot doing this, so we are rushing a bit. If, when you leave the room, you have an “I wish I’d said that” moment, please feel free to put it in writing to us. We are indebted to you. Thank you very much indeed.
Examination of Witnesses
Sanjay Bhandari and Lynn Perry gave evidence.
Q
Lynn Perry: As a recommendation, we think that could only strengthen the protections of children.
Q
Lynn Perry: We would welcome provision to be able to bring particularly significant evidence of concern. That is certainly something that organisations, large charities in the sector and those responsible for representing the rights of children and young people would welcome. On some of these issues, we work in coalition to make representations on behalf of children and young people, as well as of parents and carers, who also raise some concerns. The ability to do that and to strengthen the response is something that would be welcomed.
Q
Sanjay Bhandari: Our beneficiaries are under-represented or minority communities in sports. I agree, I think that the Bill goes a substantial way to protecting them and to dealing with some of the issues that we saw most acutely after the Euro 2020 finals.
We have to look at the Bill in context. This is revolutionary legislation, which we are not seeing anywhere else in the world. We are going first. The basic sanctions framework and the 10% fines I have seen working in other areas—anti-trust in particular. In Europe, that has a long history. The definition of harm being in the manner of dissemination will pick up pile-ons and some forms of trolling that we see a lot of. Hate crime being designated as priority illegal content is a big one for us, because it puts the proactive duty on the platforms. That too will take away quite a lot of content, we think. The new threatening communications offence we have talked about will deal with rape and death threats. Often the focus is, quite rightly, on the experience of black professional footballers, but there are also other people who play, watch and work in the game, including our female pundits and our LGBT fan groups, who also get loads of this abuse online. The harm-based offence—communications sent to cause harm without reasonable excuse—will likely cover things such as malicious tagging and other forms of trolling. I have already talked about the identification, verification and anonymity provisions.
I think that the Bill will go a substantial way. I am still interested in what fits into that residual category of content harmful to adults, but rather than enter into an arid philosophical and theoretical debate, I will take the spirit of the Bill and try to tag it to real content.
Q
Sanjay Bhandari: I do not think it was adequate because we still see stuff coming through. They have the greatest power to stop it. One thing we are interested in is improving transparency reporting. I have asked them a number of times, “Someone does not become a troll overnight, in the same way that someone does not become a heroin addict overnight, or commit an extremist act of terrorism overnight. There is a pathway where people start off, and you have that data. Can I have it?” I have lost count of the number of times that I have asked for that data. Now I want Ofcom to ask them for it.
Q
Lynn Perry: We do. Barnardo’s really welcomes the Bill. We think it is a unique and once-in-a-generation opportunity to achieve some really long-term changes to protect children from a range of online harms. There are some areas in which the Bill could go further, which we have talked about today. The opportunity that we see here is to make the UK the safest place in the world for children to be online. There are some very important provisions that we welcome, not least on age verification, the ability to raise issues through super-complaints, which you have asked me about, and the accountability in various places throughout the Bill.
Q
Sanjay Bhandari: As I said earlier, there are no absolute rights. There is no absolute right to freedom of speech—I cannot shout “Fire!” here—and there is no absolute right to privacy; I cannot use my anonymity as a cloak for criminality. It is a question of drawing an appropriate balance. In my opinion, the Bill draws an appropriate balance between the right to freedom of speech and the right to privacy. I believe in both, but in the same way that I believe in motherhood and apple pie: of course I believe in them. It is really about the balancing exercise, and I think this is a sensible, pragmatic balancing exercise.
Ms Perry, I am very pleased that we were finally able to hear from you. Thank you very much indeed—you have been very patient. Thank you very much, Mr Bhandari. If either of you, as a result of what you have heard and been asked today, have any further thoughts that you wish to submit, please do so.
Examination of Witnesses
Eva Hartshorn-Sanders and Poppy Wood gave evidence.
Q
Eva Hartshorn-Sanders: Our “Hidden Hate” report was on DMs—direct messages—that were shared by the participants in the study. One in 15 of those broke the terms and conditions that Instagram had set out related to misogynist abuse—sexual abuse. That was in the wake of the World Cup, so after Instagram had done a big promotion about how great it was going to be in having policies on these issues going forward. We found that 90% of that content was not acted on when we reported it. This was not even them going out proactively to find the content and not doing anything with it; it was raised for their attention, using their systems.
Q
Eva Hartshorn-Sanders: That will depend on transparency, as Poppy mentioned. How much of that information can be shared? We are doing research at the moment on data that is shared personally, or is publicly available through the different tools that we have. So it is strengthening access to that data.
There is this information asymmetry that happens at the moment, where big tech is able to see patterns of abuse. In some cases, as in the misogyny report, you have situations where a woman might be subject to abuse from one person over and over again. The way that is treated in the EU is that Instagram will go back and look at the last 30 historically to see the pattern of abuse that exists. They are not applying that same type of rigorousness to other jurisdictions. So it is having access to it in the audits that are able to happen. Everyone should be safe online, so this should be a safety-by-design feature that the companies have.
Q
Eva Hartshorn-Sanders: I think it depends on who the researchers are. I personally do not have experience of it, so I cannot speak to that. On transparency, at the moment, the platforms generally choose what they share. They do not necessarily give you the data that you need. You can hear from my accent that I am originally from New Zealand. I know that in the wake of the Christchurch mosque terrorist attack, they were not prepared to provide the independent regulator with data on how many New Zealanders had seen the footage of the livestream, which had gone viral globally. That is inexcusable, really.
Q
Poppy Wood: On the point about access to data, I do not believe that the platforms go as far as they could, or even as far as they say they do. Meta have a tool called CrowdTangle, which they use to provide access to data for certain researchers who are privileged enough to have access. That does not even include comments on posts; it is only the posts themselves. The platforms pull the rug out all the time from under researchers who are investigating things that the platforms do not like. We saw that with Laura Edelson at New York University, who they just cut off—that is one of the most famous cases. I think it is quite egregious of Meta to say that they give lots of access to data.
We know from the revelations of whistleblowers that Meta do their own internal research, and when they do not like the results, they just bury it. They might give certain researchers access to data under certain provisions, but independent researchers who want to investigate a certain emergent harm or a certain problem are not being given the sort of access that they really need to get insights that move the needle. I am afraid that I just do not believe that at all.
The Bill could go much further. A provision on access to data in clause 136 states that Ofcom has two years to issue a report on whether researchers should get access to data. I think we know that researchers should have access to data, so I would, as a bare minimum, shorten the time that Ofcom has to do that report from two years to six months. You could turn that into a question of how to give researchers access to data rather than of whether they should get it. The Digital Services Act—the EU equivalent of the Bill—goes a bit further on access to data than our Bill. One result of that might be that researchers go to the EU to get their data because they can get it sooner.
Improving the Bill’s access to data provisions is a no-brainer. It is a good thing for the Government because we will see more stuff coming out of academia, and it is a good thing for the safety tech sector, because the more research is out there, the more tools can be built to tackle online harms. I certainly call on the Government to think about whether clause 136 could go further.
Q
Poppy Wood: It is not an easy problem to solve, for sure. What everybody is saying is that you do it in a content-neutral way, so that you are not talking about listing specific types of misinformation but about the risks that are built into your system and that need to be mitigated. This is a safety by design question. We have heard a lot about introducing more friction into the system, checking the virality threshold, and being more transparent. If you can get better on transparency, I think you will get better on misinformation.
If there is more of an obligation on the platforms to, first, do a broader risk assessment outside of the content that will be listed as priority content and, secondly, introduce some “harm reduction by design” mechanisms, through friction and stemming virality, that are not specific to certain types of misinformation, but are much more about safety by design features—if we can do that, we are part of the way there. You are not going to solve this problem straightaway, but you should have more friction in the system, be it through a code of practice or a duty somewhere to account for risk and build safer systems. It cannot be a content play; it has to be a systems play.
Thank you. I am sorry, but that brings us to the end of the time allotted to this session. Ladies, if either of you wishes to make a submission in writing in the light of what you have not answered or not been able to answer, please do. Ms Wood, Ms Hartshorn-Sanders, thank you very much indeed for joining us.
Examination of Witnesses
Owen Meredith and Matt Rogerson gave evidence.
Q
Owen Meredith: I do not think that would be allowable under the Bill, because of the distinction between a recognised news publisher publishing what we would all recognise as journalistic content, versus the journalistic content exemption. I think that is why they are treated differently.
Q
Owen Meredith: Yes. I think the issue is how that exemption will work in practice. I think that what the Government have said they are looking at and will bring forward does address how that will operate in practice.
Q
Owen Meredith: As I alluded to earlier, it is a real challenge to set out this legal definition in a country that believes, rightly, in the freedom of the press as a fourth pillar of democracy. It is a huge challenge to start with, and therefore we have to set out criteria that cover the vast majority of news publishers but do not end up with a backdoor licensing system for the press, which I think we are all keen to avoid. I think it meets that criterion.
On the so-called bad actors seeking to abuse that, I have listened to and read some of the evidence that you have had from others—not extensively, I must say, due to other commitments this week—and I think that it would be very hard for someone to meet all those criteria as set out in order to take advantage of this. I think that, as Matt has said, there will clearly be tests and challenges to that over time. It will rightly be challenged in court or go through the usual judicial process.
Matt Rogerson: It seems to me that the whole Bill will be an iterative process. The internet will not suddenly become safe when the Bill receives Royal Assent, so there will be this process whereby guidance and case law are developed, in terms of what a newspaper is, against the criteria. There are exemptions for news publishers in a whole range of other laws that are perfectly workable. I think that Ofcom is perfectly well equipped to create guidance that enables it to be perfectly workable.
Q
Matt Rogerson: Subject to the guidance developed by Ofcom, which we will be engaged in developing, I do think so. The other thing to bear in mind is that the platforms already have lists of trusted publishers. For example, Google has a list in relation to Google News—I think it has about 65,000 publishers—which it automates to push through Google News as trusted news publishers. Similarly, Facebook has a list of trusted news publishers that it uses as a signal for the Facebook newsfeed. So I do not buy the idea that you can’t automate the use of trusted news sources within those products.
Q
Owen Meredith: If I can speak to the point that directly relates to my members and those I represent, which is “Does it protect press freedom?”, which is perhaps an extension of your question, I would say that it is seeking to. Given the assurances you have given about the detailed amendments that you intend to bring forward—if those are correct, and I am very happy to write to the Committee and comment once we have seen the detail, if it would be helpful to do so—and everything I have heard about what you are intending to do, I believe it will. But I do not believe that the current draft properly and adequately protects press freedom, which is why, I think, you will be bringing forward amendments.
Q
Owen Meredith: Subject to seeing the drafting, but I believe the intention—yes.
Thank you. That is very helpful. Mr Rogerson?
Matt Rogerson: As we know, this is a world first: regulation of the internet, regulation of speech acts on the internet. From a news publisher perspective, I think all the principles are right in terms of what the Government are trying to do. In terms of free speech more broadly, a lot of it will come down to how the platforms implement the Bill in practice. Only time will tell in terms of the guidance that Ofcom develops and how the platforms implement that at vast scale. That is when we will see what impact the Bill actually has in practice.
Q
Matt Rogerson: Yes. With the development of the online platforms to the dominant position they are in today, that will be a big step forward. The only thing I would add is that, as well as this Bill, the other Bill that will make a massive difference when it comes through is the digital markets unit Bill. We need competition to Facebook so that consumers have a choice and so that they can decide which social network they want to be on, not just the one dominant social network that is available to them in this country.
I commend your ingenuity in levering an appeal for more digital competition into this discussion. Thank you.
Examination of Witnesses
Tim Fassam, Rocio Concha and Martin Lewis gave evidence.
Q
Tim Fassam: I believe that would be helpful. I think Ofcom is the right organisation to manage the relationship with the platforms, because it is going to be much broader than the topics we are talking about in our session, but we do think the FCA, Action Fraud and potentially the CMA should be able to direct, and be very clear with Ofcom, that action needs to be taken. Ofcom should have the ability to ask for things to be reviewed to see whether they break the rules.
The other area where we think action probably needs to be taken is where firms are under investigation, because the Bill assumes it is clear cut whether something is fraud, a scam, a breach of the regulations or not. In some circumstances, that can take six months or a year to establish through investigation. We believe that if, for example, the FCA feels that something is high risk, it should be able to ask Ofcom to suspend an advert, or a firm from advertising, pending an investigation to assess whether it is a breach of the regulation.
Rocio Concha: I agree that Ofcom is the right regulator, the main regulator, but it needs to work with the other regulators—with the FCA, ASA and CMA—to enforce the Bill effectively. There is another area. Basically, we need to make sure that Ofcom and all the regulators involved have the right resources. When the initial version of the Bill was published, Ofcom got additional resources to enable it to enforce the Bill. But the Bill has increased in scope, because now it includes fraud and fraudulent advertising. We need to make sure that Ofcom has the right resources to enforce the full Bill effectively. That is something that the Government really need to consider.
Martin Lewis: I was going to make exactly that point, but it has just been made brilliantly so I will not waste your time.
Q
I will start by agreeing with the point that Martin Lewis made a minute or two ago—that we cannot trust these companies to work on their own. Mr Lewis, I am not sure whether you have had a chance to go through clause 34, which we inserted into the Bill following your evidence to the Joint Committee last year. It imposes a duty on these companies to take steps and implement systems to
“prevent individuals from encountering content consisting of fraudulent advertisements”.
There is a clear duty to stop them from doing this, rather as you were asking a minute ago when you described the presentation. Does that strong requirement in clause 34, to stop individuals from encountering fraudulent advertisement content, meet the objective that you were asking for last year?
Martin Lewis: Let me start by saying that I am very grateful that you have put it in there and thankful that the Government have listened to our campaign. What I am about to say is not intended as criticism.
It is very difficult to know how this will work in practice. The issue is all about thresholds. How many scam adverts can we stomach? I still have, daily—even from the platform that I sued, never mind the others—tens of reports directly to me of scam adverts with my face on. Even though there is a promise that we will try to mitigate that, the companies are not doing it. We have to have a legitimate understanding that we are not going to have zero scam adverts on these platforms; unless they were to pre-vet, which I do not think they will, the way they operate means that will not happen.
I am not a lawyer but my concern is that the Bill should make it clear, and that any interpretation of the Bill from Ofcom should be clear, about exactly what threshold of scam adverts is acceptable—we know that they are going to happen—and what threshold is not acceptable. I do not have the expertise to answer your question; I have to rely on your expertise to do that. But I ask the Committee to think properly about what the threshold level should be.
What is and is not acceptable? What counts as “doing everything they can”? They are going to get big lawyers involved if you say there must be zero scam adverts—that is not going to happen. How many scam adverts are acceptable and how many are not? I am so sorry to throw that back as a question when I am a witness, but I do not have the expertise to answer. But that is my concern: I am not 100% convinced of the threshold level that you are setting.
Q
Tim Fassam: I think we are positive about the actions that have been taken regarding social media; our concern is that the clause is not applied to search and that it excludes paid-for ads that are also user-generated content—promoted tweets or promoted posts, for example. We would ensure that that applied to all paid-for adverts and that it was consistent between social media and search.
Q
Tim Fassam: You absolutely do, but to a weaker standard than in clause 34.
Q
Tim Fassam: Thank you.
Q
Mr Lewis, as you were named, I think you had better start.
Martin Lewis: Ten per cent. of the global revenue of a major social media or search player is a lot of money—it certainly would hit them in the pocket. I reiterate my previous point: it is all about the threshold at which that comes in and how rigidly Ofcom is enforcing it. There are very few organisations that have the resources, legally, to take on big institutions of state, regulators and Governments. If any does, it is the gigantic tech firms. Absolutely, 10% of global revenue sounds like a suitable wall to prevent them jumping over. That is the aim, because we want those companies to work for people; we don’t want them to do scam ads. We want them to work well and we want them never to be fined, because there is no reason to fine them.
The proof of the pudding will be in how robust Ofcom feels it can be, off the back of the Bill, taking those companies on. I go back to needing to understand how many scam ads you permit under the duty to prevent scam ads. It clearly is not zero—you are not going to tell me it is zero. So how many are allowed, what are the protocols that come into place and how quickly do they have to take the ads down? Ultimately, I think that is going to be a decision for Ofcom, but it is the level of stringency that you put on Ofcom in order for it to interpret how it takes that decision that is going to decide whether this works or not.
Rocio Concha: I completely agree with Martin. Ofcom needs to have the right resources in order to monitor how the platforms are doing that, and it needs to have the right powers. At the moment, Ofcom can ask for information in a number of areas, including fraud, but not advertising. We need to make sure that Ofcom can ask for that information so that it can monitor what the platforms are doing. We need to make sure that it has the right powers and the right resources to enforce the Bill effectively.
Tim Fassam: You would hope that 10% would certainly be a significant disincentive. Our focus would be on whether companies are contributing to compensating the victims of fraud and scams, and whether they have been brought into the architecture that is utilised to compensate victims of fraud and scams. That would be the right aim in terms of financial consequences for the firms.
Q
Secondly, clauses 140 and 141 contain a procedure for so-called super-complaints, where a body that represents users—it could be Which? or an organisation like it—is able to bring something almost like a class action or group complaint to Ofcom if it thinks a particular social media firm has systemic problems. Will those two clauses address the issue of complaints not being properly handled or, in some cases, not being dealt with at all?
Martin Lewis: Everything helps. I think the super-complaint point is really important. We must remember that many victims of scams are not so good at complaining and, by the nature of the crossover of individuals, there is a huge mental health issue at stake with scams. There is both the impact on people with mental health issues and the impact on people’s mental health of being scammed, which means that they may not be as robust and up for the fight or for complaining. As long as it works and applies to all the different categories that are repeated here, the super-complaint status is a good measure.
We absolutely need proper reporting lines. I urge you, Minister—I am not sure that this is in the Bill—to standardise this so that we can talk about what someone should do when they report: the same imagery, the same button. With that, people will know what to do. The more we can do that, the easier and better the system will be.
Q
“easy to access, easy to use (including by children) and transparent.”
The previous paragraph (b) states that the system
“provides for appropriate action to be taken by the provider of the service in response to complaints of a relevant kind”.
The Bill is saying that a complaints process must do those two things, because if it does not, Ofcom will be on the company’s back.
Martin Lewis: I absolutely support all of that. I am just pushing for that tiny bit more leadership, whether it is from you or Ofcom, that comes up with a standardised system with standardised imagery and placing, so that everybody knows that on the top left of the advert you have the button that you click to fill in a form to report it. The more we have that cross-platform and cross-search and cross-social media, the easier it will be for people. I am not sure it is a position for the Bill in itself, but Government leadership would work really well on that.
Tim Fassam: They are both welcome—the super-complaint and the new complaints process. We want to ensure that we have a system that looks not just at weight of number of complaints, but at the content. In particular, you may find on the super-complaint point that, for example, the firm that a fraudster is pretending to be is the organisation that has the best grasp of the issue, so do not forget about commercial organisations as well as consumer organisations when thinking about who is appropriate to make super-complaints.
Q
Tim Fassam: Absolutely. We suggested to Meta when we met them about 18 months ago that we could be a clearing house to identify for them whether they need to take something seriously, because our members have analysed it and consider it to represent a real risk.
Last word to Rocio Concha.
Rocio Concha: I completely agree about the super-complaint. We as a consumer organisation have super-complaint powers. As with other regulators, we would like to have it in this context as well. We have done many super-complaints representing consumers in particular areas with the regulators, so I think we need it in this Bill as well.
On reporting, I want to clarify something. At the moment, the Bill does not have a requirement for users to complain and report to platforms in relation to fraudulent advertising. It happens for priority illegal content, but our assessment of the Bill is that it is unclear whether it applies to fraudulent advertising. We probably do not have time to look at this now, but we sent you amendments to where we thought the Bill had weaknesses. We agree with you that users should have an easy and transparent way to report illegal or fraudulent advertising, and they should have an easy way to complain about it. At the moment, it is not clear that the Bill will require that for fraudulent advertising.
Q
Rocio Concha: My comment was in relation not to the super-complaints but to the requirements. We already sent you our comments with suggestions on how you can fix this in the Bill.
Ms Concha and Mr Fassam, thank you very much. Do please write in if you have further comments. Mr Lewis, we are deeply grateful to you. You can now go back to your day job and tell us whether we are going to be worse or better off as a result of the statement today—please don’t answer that now.
Martin Lewis: I am interviewing the Chancellor in 15 minutes.
Examination of Witness
Frances Haugen gave evidence.
Q
Frances Haugen: I think that shows a commendable level of chutzpah. Researchers have been trying to get really basic datasets out of Facebook for years. When I talk about a basic dataset, it is things as simple as, “Just show us the top 10,000 links that are distributed in any given week.” When you ask for information like that in a country like the United States, no one’s privacy is violated: every one of those links will have been viewed by hundreds of thousands, if not millions of people. Facebook will not give out even basic data like that, even though hundreds if not thousands of academics have begged for this data.
The idea that they have worked in close co-operation with researchers is a farce. The only way that they are going to give us even the most basic data that we need to keep ourselves safe is if it is mandated in the Bill. We need to not wait two years after the Bill passes—and remember, it does not even say that it will happen; Ofcom might say, “Oh, maybe not.” We need to take a page from the Digital Services Act and say, “On the day that the Bill passes, we get access to data,” or, at worst, “Within three months, we are going to figure out how to do it.” It needs to be not, “Should we do it?” but “How will we do it?”
Q
Frances Haugen: First, I left the company a year ago. Because we have no transparency with these companies, they do not have to publish their algorithms or the consequences of their algorithms, so who knows? Maybe they use astrology now to rank the content. We have no idea. All I know is that Meta definitely still uses signals—did users click on it, did they dwell on it, did they re-share it, or did they put a comment on it? There is no way it is not using those. It is very unlikely that they do not still use engagement in their ranking.
The secondary question is, do they optimise for engagement? Are they trying to maximise it? It is possible that they might interpret that and say, “No, we have multiple things we optimise for,” because that is true. They look at multiple metrics every single time they try to decide whether or not to shift things. But I think it is very likely that they are still trying to optimise for engagement, either as their top metric or as one of their top metrics.
Remember, Meta is not trying to optimise for engagement to keep you there as long as possible; it is optimising for engagement to get you and your friends to produce as much content as possible, because without content production, there can be no content consumption. So that is another thing. They might say, “No, we are optimising for content production, not engagement,” but that is one step off.
Q
Frances Haugen: I have a feeling that there is going to be a period of growing pains after the first time these risk assessments happen. I can almost entirely guarantee you that Facebook will try to give you very little. It will likely be a process of back and forth with the regulator, where you are going to have to have very specific standards for the level of transparency, because Facebook is always going to try to give you the least possible.
One of the things that I am actually quite scared about is that, in things like the Digital Services Act, penalties go up to 10% of global profits. Facebook as a company has something like 35% profit margins. One of the things I fear is that these reports may be so damning—that we have such strong opinions after we see the real, hard consequences of what they are doing—that Facebook might say, “This isn’t worth the risk. We’re just going to give you 10% of our profits.” That is one of the things I worry about: that they may just say, “Okay, now we’re 25% profitable instead of 35% profitable. We’re that ashamed.”
Q
Frances Haugen: Oh, good. That’s wonderful.
We had a case last year where Facebook—it was actually Facebook—failed to provide some information to the CMA in a takeover case, and it paid a £50 million fine rather than provide the information, hence the provision for personal criminal liability for failing to provide information that is now in this Bill.
My final question is a simple one. From your perspective, at the moment, when online tech companies are making product design decisions, what priority do they give to safety versus profit?
Frances Haugen: What I saw when I was at Facebook was that there was a culture that encouraged people to always have the most positive interpretation of things. If things are still the same as when I left—like I said, I do not know; I left last May—what I saw was that people routinely had to weigh little changes in growth versus changes in safety metrics, and unless they were major changes in safety metrics, they would continue to pursue growth. The only problem with a strategy like that is that those little deficits add up to very large harms over time, so we must have mandated transparency. The public have to have access to data, because unless Facebook has to add the public cost of the harm of its products, it is not going to prioritise enough those little incremental harms as they add up.
Ms Haugen, thank you very much indeed for joining us today, and thank you also for the candour with which you have answered our questions. We are very grateful to you indeed.
The Committee will meet again on Tuesday 7 June at 9.25 am for the start of its line-by-line consideration of the Bill. That session will be in Committee Room 14.
Ordered, That further consideration be now adjourned. —(Steve Double.)