Public Bill Committees

We are now sitting in public and the proceedings are being broadcast. Before we begin, I have a few announcements. Hansard colleagues would be grateful if Members could email their speaking notes to hansardnotes@parliament.uk. Please switch electronic devices to silent. Tea, coffee, and other drinks, apart from the water provided, are not allowed during sittings.
Today, we will first consider the programme motion on the amendment paper. We will then consider a motion to enable the reporting of written evidence for publication, and a motion to allow us to deliberate in private about our questions before the oral evidence session. In view of the timetable available, I hope that we can take these matters formally without debate. I first call the Minister to move the programme motion standing in his name, which was discussed on Thursday by the Programming Sub-Committee for this Bill.
Ordered,
That—
(1) the Committee shall (in addition to its first meeting at 9.25 am on Tuesday 24 May) meet—
(a) at 2.00 pm on Tuesday 24 May;
(b) at 11.30 am and 2.00 pm on Thursday 26 May;
(c) at 9.25 am and 2.00 pm on Tuesday 7 June;
(d) at 11.30 am and 2.00 pm on Thursday 9 June;
(e) at 9.25 am and 2.00 pm on Tuesday 14 June;
(f) at 11.30 am and 2.00 pm on Thursday 16 June;
(g) at 9.25 am and 2.00 pm on Tuesday 21 June;
(h) at 11.30 am and 2.00 pm on Thursday 23 June;
(i) at 9.25 am and 2.00 pm on Tuesday 28 June;
(j) at 11.30 am and 2.00 pm on Thursday 30 June;
(2) the Committee shall hear oral evidence in accordance with the following Table:

Date | Time | Witness
Tuesday 24 May | Until no later than 10.05 am | Ofcom
Tuesday 24 May | Until no later than 10.50 am | Dame Rachel de Souza, Children’s Commissioner for England; Barnardo’s; National Society for the Prevention of Cruelty to Children (NSPCC)
Tuesday 24 May | Until no later than 11.25 am | TikTok; Twitter
Tuesday 24 May | Until no later than 2.45 pm | Meta; Microsoft; Google
Tuesday 24 May | Until no later than 3.30 pm | Professor Clare McGlynn, Professor of Law, Durham University; Refuge; End Violence Against Women
Tuesday 24 May | Until no later than 4.15 pm | techUK; Online Safety Tech Industry Association (OSTIA); Crisp
Tuesday 24 May | Until no later than 5.00 pm | Match Group; Bumble; TrustElevate
Tuesday 24 May | Until no later than 5.30 pm | Marie Collins Foundation; Internet Watch Foundation (IWF)
Tuesday 24 May | Until no later than 6.00 pm | Demos; FairVote
Thursday 26 May | Until no later than 12.15 pm | Catch22; Full Fact; Carnegie UK Trust
Thursday 26 May | Until no later than 1.00 pm | Antisemitism Policy Trust; Clean up the Internet; HOPE not hate
Thursday 26 May | Until no later than 2.25 pm | Information Commissioner’s Office
Thursday 26 May | Until no later than 2.55 pm | Kick It Out; The Football Association
Thursday 26 May | Until no later than 3.25 pm | Center for Countering Digital Hate; Reset
Thursday 26 May | Until no later than 3.55 pm | News Media Association; Guardian Media Group
Thursday 26 May | Until no later than 4.40 pm | Personal Investment Management & Financial Advice Association (PIMFA); Which?; Money Saving Expert
Thursday 26 May | Until no later than 5.05 pm | Frances Haugen
(3) proceedings on consideration of the Bill in Committee shall be taken in the following order: Clauses 1 to 3; Schedules 1 and 2; Clauses 4 to 32; Schedule 3; Clauses 33 to 38; Schedule 4; Clauses 39 to 52; Schedules 5 to 7; Clauses 53 to 64; Schedule 8; Clauses 65 to 67; Schedule 9; Clauses 68 to 80; Schedule 10; Clauses 81 to 91; Schedule 11; Clauses 92 to 122; Schedule 12; Clauses 123 to 158; Schedule 13; Clauses 159 to 161; Schedule 14; Clauses 162 to 194; new Clauses; new Schedules; remaining proceedings on the Bill;
(4) the proceedings shall (so far as not previously concluded) be brought to a conclusion at 5.00 pm on Thursday 30 June.—(Chris Philp.)
The Committee will therefore proceed to line-by-line consideration of the Bill on Tuesday 7 June at 9.25 am. I call the Minister to move the motion about written evidence.
Resolved,
That, subject to the discretion of the Chair, any written evidence received by the Committee shall be reported to the House for publication.—(Chris Philp.)
Copies of written evidence to the Committee will be made available in the Committee room each day and will be circulated to Members by email. I call the Minister to move the motion about deliberating in private.
Resolved,
That, at this and any subsequent meeting at which oral evidence is to be heard, the Committee shall sit in private until the witnesses are admitted.—(Chris Philp.)
We are now sitting in public again, and the proceedings are being broadcast. Before we start hearing from the witnesses, do any Members wish to make declarations of interest in connection with the Bill?
Danny Stone from the Antisemitism Policy Trust, who is giving evidence at Thursday’s sitting, acts as informal secretariat in a personal capacity to the all-party parliamentary group on wrestling, which I co-chair.
I refer Members to my entry in the Register of Members’ Financial Interests regarding work I did six months ago for a business called DMA.
We will now hear oral evidence from Kevin Bakhurst, group director of broadcasting and online content at Ofcom, and Richard Wronka, director of Ofcom’s online harms policy. Before calling the first Member to ask a question, I remind all Members that questions should be limited to matters within the scope of the Bill, and we must stick to the timings in the programme motion that the Committee has agreed. For this witness panel, we have until 10.05 am. Could the witnesses please introduce themselves for the record?
Kevin Bakhurst: Good morning. I am Kevin Bakhurst, group director at Ofcom for broadcasting and online content.
Richard Wronka: I am Richard Wronka, a director in Ofcom’s online safety policy team.
Q
Kevin Bakhurst: We should say that we feel the Bill has given us a very good framework to regulate online safety. We have been working closely with the Department for Digital, Culture, Media and Sport to make sure that the Bill gives us a practical, deliverable framework. There is no doubt that it is a challenge. As you rightly say, there will be potentially 25,000 platforms in scope, but we feel that the Bill sets out a series of priorities really clearly in terms of categories.
It is also for us to set out—we will be saying more about this in the next couple of months—how we will approach this, and how we will prioritise certain platforms and types of risk. It is important to say that the only way of achieving online safety is through what the Bill sets out, which is to look at the systems in place at the platforms, and not the individual pieces of content on them, which would be unmanageable.
Q
Richard Wronka: We completely recognise the concerns that have been raised by stakeholders, and we have been speaking to many of them ourselves, so we have first-hand experience. I think my starting point is that the Bill captures those high-risk services, which is a really important feature of it. In particular, the responsibilities around illegal content apply across all services in scope. That means that, in practice, when we are regulating, we will take a risk-based approach to whom we choose to engage with, and to where we focus our effort and attention.
We recognise that some of the debate has been about the categorisation process, which is intended to pick up high-risk and high-reach services. We understand the logic behind that. Indeed, I think we would have some concerns about the workability of an approach that was purely risk-based in its categorisation. We need an approach that we can put into operation. Currently, the Bill focuses on the reach of services and their functionality. We would have some concerns about a purely risk-based approach in terms of whether it was something that we could put into practice, given the number of services in scope.
Q
Richard Wronka: At the moment, the category 2B service would have transparency reporting requirements. That would be helpful, because it would be one way that the nature of harmful content on that platform could be brought to our attention, and to the public’s attention. We would also be looking at approaches that we could use to monitor the whole scope of the services, to ensure that we had a good grip of who was growing quickest and where the areas of risk were. Some of that is through engaging with the platforms themselves and a whole range of stakeholders, and some of it is through more advanced data and analytical techniques—“supervision technology”, as it is known in the regulatory jargon.
On the specifics of your question, if a company was growing very quickly, the Bill gives us the ability to look at that company again, to ask it for information to support a categorisation decision, and to recategorise it if that is the right approach and if it has met the thresholds set out by the Secretary of State. One of the thresholds regards the number of users, so if a company has moved over that threshold, we look to act as quickly as possible while running a robust regulatory process.
Q
Kevin Bakhurst: May I answer this? We have some experience of this already in the video-sharing platform regime, which is much more limited in scope, and we are already regulating a number of platforms, ranging from some very big ones such as Twitch, TikTok and Snap, down to some much smaller platforms that have caused us some concerns. We think we have the tools, but part of our approach will also be to focus on high-risk and high-impact content, even if it comes through small platforms. That is what we have already done with the video-sharing platform regime. We have to be agile enough to capture that and to move resources to it. We are doing that already with the video-sharing platform regime, even though we have only been regulating it for less than a year.
Maria Miller has indicated that she would like to ask a question, so if I may, I will bring her in.
Q
I am just trying to get to the intricacies of this, and of what would happen during the time that it would take for you to recategorise. This platform, which is disseminating harm to both children and adults, would be allowed to carry on while the recategorisation process is under way. There is no mechanism in the Bill to stop that from happening.
Richard Wronka: A really important point here is that we will be regulating that platform from the outset for illegal content and, potentially, for how it protects children on its platform, irrespective of the categorisation approach. That is really important. We will be able to take action, and take action quickly, irrespective of how the platform is categorised. Categorisation really determines whether the adult “legal but harmful” provisions apply. That is the bit that really matters in this context.
It is worth reminding ourselves what those provisions mean: they are more a transparency and accountability measure. Platforms categorised as category 1 will need to have clear terms and conditions applied to adult “legal but harmful” content, and they will need to implement those consistently. We would expect the really serious and egregious concerns to be picked up by the “illegal” part of the regime, and the protection-of-children part of the regime. The categorisation process may go on. It may take a little time, but we will have tools to act in those situations.
Q
Kevin Bakhurst: We do have some experience across the various sectors that we regulate, but being directed by the Secretary of State does not happen very often. Specifically on the Bill, our strong feeling is that it is entirely appropriate that the Secretary of State should be able to direct us on matters of national security and terrorist content. However, we have some concerns about the wider direction powers of the Secretary of State, particularly the grounds on which the Secretary of State can direct us on matters of public policy, and we have expressed those concerns previously.
We feel it is important that the independence of a regulator can be seen to be there and is there in practice. Legally, we feel it important that there is accountability. We have some experience of being taken to judicial review, and there must be accountability for the codes of practice that we put in place. We must be able to show why and how we have created those codes of practice, so that we can be accountable and there is absolute clarity between regulator and Government.
Q
Kevin Bakhurst: Richard has been leading this process, so he can give more detail on it, but suffice to say, we have been engaging closely with DCMS over the last year or so, and we appreciate the fact that it has taken on board a number of our concerns. What we felt we needed from the Bill was clarity as far as possible, and a balance between clarity and flexibility for this regime, which is a very fast-moving field. We feel, by and large, that the Bill has achieved that.
We still have concerns about one or two areas, to pick up on your question. We feel it is really important—hopefully this is something the Committee can contribute to—that the definition of “illegal content” is really clear for platforms, and particularly the area of intent of illegality, which at the moment might be quite tricky for the platforms to pick up on.
Richard Wronka: I completely agree with Kevin that the Bill as it stands gives us a good framework. I think the pre-legislative scrutiny process has been really helpful in getting us there, and I point out that it is already quite a broad and complex regime. We welcome the introduction of issues such as fraudulent advertising and the regulation of commercial pornographic providers, but I think there is a point about ensuring that the Bill does not expand too much further, because that might raise some practical and operational issues for us.
I completely agree with Kevin that clarity in the Bill regarding illegal content and what constitutes that is really important. An additional area that requires clarity is around some of the complex definitions in the Bill, such as journalistic content and democratically important content. Those are inherently tricky issues, but any extra clarity that Parliament can provide in those areas would be welcome.
Q
Richard Wronka: I would start by saying that this is a fluid area. We have had a number of conversations with the Law Commission in particular and with other stakeholders, which has been really helpful. We recognise that the Bill includes four new offences, so there is already some fluidity in this space. We are aware that there are other Law Commission proposals that the Government are considering. Incitement to self-harm and flashing imagery that might trigger epilepsy are a couple of issues that come to mind there. Ultimately, where the criminal law sits is a matter for Parliament. We are a regulator: our role here is to make sure that the criminal law is reflected in the regulatory regime properly, rather than to determine or offer a view on where the criminal law should sit. Linking back to our point just a minute ago, we think it is really important that there is as much clarity as possible about how platforms can take some of those potentially quite tricky decisions about whether content meets the criminal threshold.
Q
Kevin Bakhurst: One area that is very important and which is in the Bill and one of our responsibilities is to make sure there is a sufficiently robust and reactive complaints process from the platforms—one that people feel they can complain to and be heard—and an appeals process. We feel that that is in the Bill. We already receive complaints at Ofcom from people who have issues about platforms and who have gone to the platforms but do not feel their complaints have been properly dealt with or recognised. That is within the video-sharing platform regime. Those individual complaints, although we are not going to be very specific in looking at individual pieces of material per se, are very useful to alert us where there are issues around particular types of offence or harm that the platforms are not seen to be dealing with properly. It will be a really important part of the regime to make sure that platforms provide a complaints process that is easy to navigate and that people can use quite quickly and accessibly.
Richard Wronka: An additional point I would make, building on that, is that this is a really complex ecosystem. We understand that and have spent a lot of the last two or three years trying to get to grips with that complex ecosystem and building relationships with other participants in the ecosystem. It brings in law enforcement, other regulators, and organisations that support victims of crime or online abuse. We will need to find effective ways to work with those organisations. Ultimately, we are a regulator, so there is a limit to what we can do. It is important that those other organisations are able to operate effectively, but that is perhaps slightly outside our role.
Q
Richard Wronka: I think our starting point here is that we think transparency is a really important principle within the regime—a fundamental principle. There are specific provisions in the Bill that speak to that, but more generally we are looking for this regime to usher in a new era of transparency across the tech sector, so that users and other participants in this process can be clearer about what platforms are doing at the moment, how effective that is and what more might be done in the future. That is something that will be a guiding principle for us as we pick up regulation.
Specifically, the Bill provides for transparency reports. Not all services in scope will need to provide transparency reports, but category 1 and 2 services will be required to produce annual transparency reports. We think that is really important. At the moment, risk assessments are not intended to be published—that is not provided for in the Bill—but the transparency reports will show the effectiveness of the systems and processes that those platforms have put in place.
Q
Richard Wronka: I think what is important for us as a regulator is that we are able to access those risk assessments; and for the biggest services, the category 1 services, we would be expecting to do that routinely through a supervisory approach. We might even do that proactively, or where services have come to us for dialogue around those—
Q
Richard Wronka: Some services may wish to publish the risk assessments. There is nothing in the Bill or in our regulated approach that would prevent that. At the moment, I do not see a requirement in the Bill to do that. Some services may have concerns about the level of confidential information in there. The important point for us is that we have access to those risk assessments.
Kevin Bakhurst: Picking up on the risk assessments, it is a tricky question, because we would expect those assessments to be very comprehensive and to deal with issues such as how algorithms function, and so on. There is a balance between transparency, which, as Richard says, we will drive across the regime; the risk of giving out information that could help people who are trying to behave badly online or to game the system; and what the regulator needs in practical terms. I am sure the platforms will be able to talk to you more about that.
Q
There is also a question of timing. The reports suggested that the new hub and jobs will come into play in 2025. I am sure that everyone here wants to see the Bill taking effect sooner. Ofcom will need to do a lot of reviews and reporting in the first year after the Bill receives Royal Assent. How will that be possible if people are not in post until 2025?
Kevin Bakhurst: They are both big questions. I will take the first part and maybe Richard can take the second one about the timing. On the resourcing, it is important to say publicly that we feel strongly that, very unusually, we have had funding from Government to prepare for this regime. I know how unusual that is; I was at a meeting with the European regulators last week, and we are almost unique in that we have had funding and in the level of funding that we have had.
The funding has meant that we are already well advanced in our preparations. We have a team of around 150 people working on online safety across the organisation. A number are in Manchester, but some are in London or in our other offices around the UK. It is important to say that that funding has helped us to get off to a really strong start in recruiting people across the piece—not just policy people. Importantly, we have set up a new digital function within Ofcom and recruited a new chief technology officer, who came from Amazon Alexa, to head up that function.
The funding has allowed us to really push hard into this space, which is not easy, and to recruit some of the skills we feel we need to deliver this regime as effectively and rapidly as possible. I know that resourcing is not a matter within the Bill; it is a separate Treasury matter. Going forward though, we feel that, in the plans, we have sufficient resourcing to deliver what we are being asked to deliver. The team will probably double in size by the time we actually go live with the regime. It is a significant number of people.
Some significant new duties have been added in, such as fraudulent advertising, which we need to think carefully about. That is an important priority for us. It requires a different skillset. It was not in the original funding plan. If there are significant changes to the Bill, it is important that we remain alive to having the right people and the right number of people in place while trying to deliver with maximum efficiency. Do you want to talk about timing, Richard?
Richard Wronka: All I would add to that, Kevin, is that we are looking to front-load our recruitment so that we are ready to deliver on the Bill’s requirements as quickly as possible once it receives Royal Assent and our powers commence. That is the driving motivation for us. In many cases, that means recruiting people right now, in addition to the people we have already recruited to help with this.
Clearly there is a bit of a gating process for the Bill, so we will need a settled legislative framework and settled priority areas before we can get on with the consultation process. We will look to run that consultation process as swiftly as possible once we have those powers in place. We know that some stakeholders are very keen to see the Bill in place and others are less enthusiastic, so we need to run a robust process that will stand the test of time.
The Bill itself points us towards a phased process. We think that illegal content, thanks to the introduction of priority illegal content in the Bill, with those priority areas, is the area on which we can make the quickest progress as soon as the Bill achieves Royal Assent.
Thank you. I intend to bring in the Minister at about 10 o’clock. Kirsty Blackman, Kim Leadbeater and Dean Russell have indicated that they wish to ask questions, so let us try to keep to time.
Q
Kevin Bakhurst: There is a particular area—the power for the Secretary of State to direct us on codes for reasons of public policy—about which we have some concern. It is more about practicality than independence, but clearly for the platforms, and we have had a lot of discussions with them, the independence of a regulator in a regime that is essentially about content is absolutely critical, and it is a priority for us to show that we are independent.
Q
Richard Wronka: Yes, we fully anticipate that gaming services, and particularly the messaging functionality that is often integrated into those services, will be captured within the scope of the regime. We do think that the Bill, on the whole, gives us the right tools to regulate those services.
Q
Kevin Bakhurst: Overall, we feel that it is. By and large, the balance between certainty and flexibility in the Bill is probably about right and will allow some flexibility in future, but it is very hard to predict what other harms may emerge. We will remain as flexible as possible.
Richard Wronka: There are some really important updating tools in the Bill. The ability for the Secretary of State to introduce new priority harms or offences—with the approval of Parliament, of course—is really important.
Q
Richard Wronka: I will cover the codes first. You are absolutely right that the Bill requires Ofcom to publish codes of practice, particularly on CSEA and on terror, as well as on fraudulent advertising and other areas. We are doing the work right now so that we are ready to progress with that process as soon as we get powers and duties, because it is really important that we are ready to move as quickly as possible. We will set out further detail on exactly how we plan to do that in a roadmap document that we are looking to publish before the summer break, so that will provide some of the detail.
A really important point here is that the Bill quite rightly covers a wide set of harms. We are mindful of the fact that the temptation of having a code that covers every single harm could be counterproductive and confusing for platforms, even for those that want to comply and do the right thing. One of the balancing acts for us as we produce that code framework will be to get the right coverage for all the issues that everyone is rightly concerned about, but doing that in a way that is streamlined and efficient, so that services can apply the provisions of those codes.
Kevin Bakhurst: Shall I pick up on the second bit very quickly? I think you are right; this is one of our central concerns about the definitions. As far as possible, this should be a matter for Parliament. It is really important to know that Parliament has a view on this. Ultimately, the regulator will take a view based on what Parliament says. We have some experience in this area, but as Richard said, we recognise the challenge—it is extremely complex. We can see the policy intent of doing it, quite rightly, and the importance of enshrining freedom of expression as far as possible, but Parliament can help to add clarity and, as you rightly say, be aware of some of the potential loopholes. At the moment, someone could describe themselves as a citizen journalist; where does that leave us? I am not quite sure. Parliament could help to clarify that, and we would be grateful.
Q
Richard Wronka: This picks up the point we discussed earlier, which is that I understand that the Government are considering proposals from the Law Commission to criminalise the sending of those kinds of images. It would not be covered by the illegal content duties as things stand, but if the Government conclude that it is right to criminalise those issues, it would automatically be picked up by the Bill.
Even so, the regime is not, on the whole, going to be able to pick up every instance of harm. It is about making sure that platforms have the right systems and processes. Where there is clear harm to individuals, we would expect those processes to be robust. We know there is work going on in the industry on that particular issue to try and drive forward those processes.
Q
Kevin Bakhurst: This is a really important point, which Richard just tried to make. The Bill gives us a great range of tools to try and prevent harm as far as possible; I just think we need to get expectations right here. Unfortunately, this Bill will not result in no harm of any type, just because of the nature of the internet and the task that we face. We are ambitious about driving constant improvement and stopping and addressing the main harms, but it is not going to stop every harm. We will absolutely focus on the ones that have a significant impact, but unfortunately that is the nature of the web.
Q
“psychological harm amounting to serious distress”?
Therefore, sending somebody a flashing image with the intention of inducing an epileptic fit would likely be caught under this new harmful communications offence in clause 150, even before a separate future offence that may be introduced.
Richard Wronka: I think we can certainly understand the argument. I think it is important that the Bill is as clear as possible. Ultimately, it is for the courts to decide whether that offence would pick up these kinds of issues that we are talking about around flashing imagery.
Q
You mentioned that you met recently with European regulators. Briefly, because we are short of time, were there any particular messages, lessons or insights you picked up in those meetings that might be of interest to the Committee?
Kevin Bakhurst: Yes, there were a number, and liaising with European regulators and other global regulators in this space is a really important strand of our work. It is often said that this regime is a first globally. I think that is true. This is the most comprehensive regime, and it is therefore potentially quite challenging for the regulator. That is widely recognised.
The second thing I would say is that there was absolute recognition of how advanced we are in terms of the recruitment of teams, which I touched on before, because we have had the funding available to do it. There are many countries around Europe that have recruited between zero and 10 and are imminently going to take on some of these responsibilities under the Digital Services Act, so I think they are quite jealous.
The last thing is that we see continued collaboration with other regulators around the world as a really important strand, and we welcome the information-sharing powers that are in the Bill. There are some parallels, and we want to take similar approaches on areas such as transparency, where we can collaborate and work together. I think it is important—
Order. I am afraid we have come to the end of the allotted time for questions. On behalf of the Committee, I thank our witnesses for their evidence.
Examination of Witnesses
Dame Rachel de Souza, Lynn Perry MBE and Andy Burrows gave evidence.
We will now hear from the Children’s Commissioner, Dame Rachel de Souza; Lynn Perry, chief executive officer of Barnardo’s, who will be appearing via Zoom; and Andy Burrows, head of child safety at the National Society for the Prevention of Cruelty to Children. Could the new witnesses take their places, please?
We have until 10.50 am for this panel. Could the witnesses please introduce themselves for the record? We will take the witnesses in the room first.
Andy Burrows: I am Andy Burrows, the head of online safety policy at the NSPCC.
Dame Rachel de Souza: I am Rachel de Souza, Children’s Commissioner for England.
And on the screen—[Interruption.] Uh-oh, it has frozen. We will have to come back to that. We will take evidence from the witnesses in the room until we have sorted out the problem with the screen.
Q
Andy Burrows: Thank you for the question. We think that more could be built into the Bill to ensure that children’s needs and voices can be fed into the regime.
One of the things that the NSPCC would particularly like to see is provision for statutory user advocacy arrangements, drawing on the examples that we see in multiple other regulated sectors, where we have a model by which the levy on the firms that will cover the costs of the direct regulation also provides for funded user advocacy arrangements that can serve as a source of expertise, setting out children’s needs and experiences.
A comparison here would be the role that Citizens Advice plays in the energy and postal markets as the user voice and champion. We think that would be really important in bolstering the regulatory settlement. That can also help to provide an early warning function—particularly in a sector that is characterised by very rapid technological and market change—to identify new and emerging harms, and bolster and support the regulator in that activity. That, for us, feels like a crucial part of this jigsaw.
Given the very welcome systemic approach of the regime, that early warning function is particularly important, because there is the potential that if harms cannot be identified quickly, we will see a lag where whole regulatory cycles are missed. User advocacy can help to plug that gap, meaning that harms are identified at an earlier stage, and then the positive design of the process, with the risk profiles and company risk assessments, means that those harms can be built into that particular cycle.
Dame Rachel de Souza: I was very pleased when the Government asked me, when I came into the role, to look at what more could be done to keep children safe online and to make sure that their voices went right through the passage of the Bill. I am committed to doing that. Obviously, as Children’s Commissioner, my role is to elevate children’s voices. I was really pleased to convene a large number of charities, internet safety organisations and violence against women and girls experts in a joint briefing to MPs to try to get children’s voices over.
I worry that the Bill does not do enough to respond to individual cases of abuse and that it needs to do more to understand issues and concerns directly from children. Children should not have to exhaust the platforms’ ineffective complaints routes, which can take days, weeks or even months. I have just conducted a survey of 2,000 children and asked them about their experiences in the past month. Of those 2,000 children, 50% had seen harmful content and 40% had tried to get content about themselves removed and had not succeeded. For me, there is something really important about listening to children and taking their complaints into account. I know you have a busy day, but that is the key point that I want to get across.
Lynn Perry is back on the screen—welcome. Would you like to introduce yourself for the record and then answer the question? [Interruption.] Oh, she has gone again. Apparently the problem is at Lynn’s end, so we will just have to live with it; there is nothing we can do on this side.
Q
Andy Burrows: The systemic regime is important. That will help to ensure that the regime can be future-proofed; clearly, it is important that we are not introducing a set of proposals and then casting them in aspic. But there are ways that the Bill could be more strongly future-proofed, and that links to ensuring that the regime can effectively map on to the dynamics of the child sexual abuse problem in particular.
Let me give a couple of examples of where we think the Bill could be bolstered. One is around placing a duty on companies to consider the cross-platform nature of harm when performing their risk assessment functions, and having a broad, overarching duty to ask companies to work together to tackle the child sexual abuse threat. That is very important in terms of the current dynamics of the problem. We see, for example, very well-established grooming pathways, where abusers will look to exploit the design features of open social networks, such as on Instagram or Snapchat, before moving children and abuse on to perhaps live-streaming sites or encrypted messaging sites.
The cross-platform nature of the threat is only going to intensify in the years ahead as we start to look towards the metaverse, for example. It is clear that the metaverse will be built on the basis of being cross-platform and interdependent in nature. We can also see the potential for unintended consequences from other regulatory regimes. For example, the Digital Markets Act recently passed by the EU has provisions for interoperability. That effectively means that if I wanted to send you a message on platform A, you could receive it on platform B. There is a potential unintended consequence there that needs to be mitigated; we need to ensure that there is a responsibility to address the harm potential that could come from more interoperable services.
This is a significant area where the Bill really can be bolstered to address the current dynamics of the problem and ensure that legislation is as effective as it possibly can be. Looking to the medium to long term, it is crucial to ensure that we have arrangements that are commensurate to the changing nature of technology and the threats that will emerge from that.
Dame Rachel de Souza: A simple answer from me: of course we cannot future-proof it completely, because of the changing nature of online harms and technology. I talked to a large number of 16 to 21-year-olds about what they wished their parents had known about technology and what they had needed to keep them safe, and they listed a range of things. No. 1 was age assurance—they absolutely wanted good age assurance.
However, the list of harms and things they were coming across—cyber-flashing and all this—is very much set in time. It is really important that we deal with those things, but they are going to evolve and change. That is why we have to build in really good cross-platform work, which we have been talking about. We need these tech companies to work together to be able to stay live to the issues. We also need to make sure that we build in proper advocacy and listen to children and deal with the issues that come up, and that the Bill is flexible enough to be able to grow in that way. Any list is going to get timed out. We need to recognise that these harms are there and that they will change.
I will bring in Kim Leadbeater and then Maria Miller and Kirsty Blackman, but I will definitely bring in the Minister at 10.45 am.
Q
Dame Rachel de Souza: I have argued hard to get pornographic sites brought into the Bill. That is something very positive about the Bill, and I was really pleased to see that. Why? I have surveyed more than half a million children in my Big Ask survey and spoken recently to 2,000 children specifically about this issue. They are seeing pornography, mainly on social media sites—Twitter and other sites. We know the negative effects of that, and it is a major concern.
I am pleased to see that age assurance is in the Bill. We need to challenge the social media companies—I pull them together and meet them every six months—on getting this stuff off their sites and making sure that under-age children are not on their sites seeing some of these things. You cannot go hard enough in challenging the social media companies to get pornography off their sites and away from children.
Andy Burrows: Just to add to that, I would absolutely echo that we are delighted that part 5 of the Bill, with measures around commercial pornography, has been introduced. One of our outstanding areas of concern, which applies to pornography but also more broadly, is around clause 26, the children’s access assessment, where the child safety duties will apply not to all services but to services where there is a significant number of child users or children comprise a significant part of the user base. That would seem to open the door to some small and also problematic services being out of scope. We have expressed concerns previously about whether OnlyFans, for example, which is a very significant problem as a user-generated site with adult content, could be out of scope. Those are concerns that I know the Digital, Culture, Media and Sport Committee has recognised as well. We would very much like to see clause 26 removed from the Bill, which would ensure that we have a really comprehensive package in this legislation that tackles both commercial pornography and user-generated material.
I think Lynn Perry is back. Are you with us, Lynn? [Interruption.] No—okay. We will move on to Maria Miller.
Q
Dame Rachel de Souza: I absolutely think that we need to look at independent advocacy and go further. I do not think the Bill does enough to respond to individual cases of abuse and to understand issues and concerns directly from children. Children should not have to exhaust platforms’ ineffective complaints routes. It can take days, weeks, months. Even a few minutes or hours of a nude image being shared online can be enormously traumatising for children.
That should inform Ofcom’s policies and regulation. As we know, the risks and harms of the online world are changing constantly. Such advocacy serves a useful purpose as an early warning mechanism within online safety regulation. I would like to see independent advocacy that allows a proper representation service for children. We need to hear from children directly, and I would like to see the Bill go further on this.
Q
Dame Rachel de Souza: I think we need to make capacity. There is some—the NSPCC has its Childline and, as Children’s Commissioner, I have my own advocacy service for children in care. I think this should function in that way, with direct access. So I think that we can create it.
Andy Burrows: May I come in briefly? Our proposals for user advocacy reflect the clear “polluter pays” principle that we think should apply here, to help build and scale up that capacity, but the levy that is covering the direct cost of regulation should also provide really effective user advocacy. That is really important not only to help to give victims what they need in frontline services, but in ensuring that there is a strong counterbalance to some of the largest companies in the world for our sector, which has clear ambition but self-evident constraints.
Dame Rachel de Souza: One of the concerns that has come to me from children—I am talking about hundreds of thousands of children—over the past year is that there is not strong enough advocacy for them and that their complaints are not being met. Girls in particular, following the Everyone’s Invited concerns, have tried so hard to get images down. There is this almost medieval bait-out practice of girls’ images being shared right across platforms. It is horrendous, and the tech firms are not acting quickly enough to get those down. We need proper advocacy and support for children, and I think that they would expect that of us in this groundbreaking Bill.
Q
Dame Rachel de Souza: Good question. I applaud the Bill for what it does cover. We are looking at a Bill that, for the first time, is going to start protecting children’s rights online, so I am really pleased to see that. We have looked a bit at gaming in the past. In terms of harms, obviously the Bill does not cover gaming in full, but it does cover the safety aspects of children’s experience.
It is always good for us to be looking further. Gaming, we know, has some extremely harmful and individualistic issues with it, particularly around money and the profile of potential grooming and safety. In terms of communications, one of the reasons that I am so concerned about encryption and communications online is that it happens through gaming. We need to make sure that those elements are really firm.
Andy Burrows: It is vitally important that the gaming sector is in scope. We know that there are high-risk gaming sites—for example, Twitch—and gaming-adjacent services such as Discord. To go back to my earlier point about the need for cross-platform provisions to apply here, in gaming we can see grooming pathways that can take on a different character from those on social networks, for example, where we might see abuse pathways where that grooming is taking place at the same time, rather than sequentially from a gaming streaming service, say, to a gaming-adjacent platform such as Discord. I think it is very important that a regulator is equipped to understand the dynamics of the harms and how they will perhaps apply differently on gaming services. That is a very strong and important argument for user advocacy.
I would say a couple of things on oral communications. One-to-one oral communications are excluded from the Bill’s scope—legitimately—but we should recognise that there is a grooming risk there, particularly when that communication is embedded in a platform of wider functionality. There is an argument for a platform to consider all aspects of its functionality within the risk assessment process. Proactive scanning is a different issue.
There is a broader challenge for the Bill, and this takes us back to the fundamental objectives and the very welcome design based around systemic risk identification and mitigation. We know that right now, in respect of oral communications and livestream communications, the industry response is not as developed in terms of detecting and disrupting harm as it is for, say, text-based chat. In keeping with the risk assessment process, it should be clear that if platforms want to offer that functionality, they should have to demonstrate through the risk assessment process that they have high-quality, effective arrangements in place to detect and disrupt harm, and that should be the price of admission. If companies cannot demonstrate that, they should not be offering their services, because there is a high risk to children.
Q
Andy Burrows: I think that aspect is certainly worthy of consideration, because the key objective is that platforms should be incentivised to deliver safety by design initiatives. One area in the Bill that we would like to be amended is the user empowerment mechanism. That gives adults the ability to screen out anonymous accounts, for example, but those provisions do not apply to children. Some of those design features that introduce friction to the user experience are really important to help children, and indeed parents, have greater ownership of their experience.
Q
Andy Burrows: Child abuse breadcrumbing is a major area of concern for us. The term captures a range of techniques whereby abusers are able to use social networks to facilitate the discovery and the dissemination of child sexual abuse. The activity does not meet the criminal threshold in and of itself, but it effectively enables abusers to use online services as a shop window to advertise their sexual interest in children.
I will give a couple of fairly chilling examples of what I mean by that. There is a phenomenon called “tribute sites”. Abusers open social media accounts in the guise of well-known survivors of child sexual abuse. To all of us in this room, that would look perfectly innocuous, but if you are an abuser, the purpose of those accounts is very clear. In the first quarter of last year, those types of accounts received 6 million interactions.
Another example is Facebook groups. We have seen evidence of Facebook refusing to take down groups that have a common interest in, for example, children celebrating their 8th, 9th and 10th birthdays. That is barely disguised at all; we can all see what the purpose is. Indeed, Facebook’s algorithms can see the purpose there, because research has shown that, within a couple of hours of use of the service, the algorithms identify the common characteristic of interest, which is child sexual abuse, and then start recommending accounts in multiple other languages.
We are talking about a significant way in which abusers are able to organise abuse and migrate it to encrypted chat platforms, to the dark web, and to offender fora, where it is, by definition, much harder to catch that activity, which happens after harm has occurred—after child abuse images have been circulated. We really want breadcrumbing to be brought unambiguously into the scope of the Bill. That would close off tens of millions of interactions with accounts that go on to enable abusers to discover and disseminate material and to form offender networks.
We have had some good, constructive relationships with the Home Office in recent weeks. I know that the Home Office is keen to explore how this area can be addressed, and it is vital that it is addressed. If we are going to see the Bill deliver the objective of securing a really effective upstream response, which I think is the clear legislative ambition, this is an area where we really need to see the Bill be amended.
Q
Andy Burrows: Those provisions should apply broadly, but it is a problem that we see particularly on those large sites because of the scale and the potential for algorithmic amplification.
Q
Dame Rachel de Souza: Baroness Kidron has done some fantastic work on this, and I really support her work. I want to tell you why. I am a former headteacher—I worked for 30 years in schools as a teacher and headteacher. Only in the last five or six years did I start seeing suicides of children and teenagers; I did not see them before. In the year just before I came to be Children’s Commissioner, there was a case of a year 11 girl from a vulnerable family who had a relationship with a boy, and it went all over the social media sites. She looked up self-harm material, went out to the woods and killed herself. She left a note that basically said, “So there. Look what you’ve done.”
It was just horrendous, having to pick up the family and the community of children around her, and seeing the long-term effects of it on her siblings. We did not see things like that before. I am fully supportive of Baroness Kidron and 5Rights campaigning on this issue. It is shocking to read about the enormous waiting and wrangling that parents must go through just to get their children’s information. It is absolutely shocking. I think that is enough from me.
Andy Burrows: I absolutely agree. One of the things we see at the NSPCC is the impact on parents and families in these situations. I think of Ian Russell, whose daughter Molly took her own life, and the extraordinarily protracted process it has taken to get companies to hand over her information. I think of the anguish and heartbreak that comes with this process. The Bill is a fantastic mechanism to be able to redress the balance in terms of children and families, and we would strongly support the amendments around giving parents access to that data, to ensure that this is not the protracted process that it currently all too often is.
Just quickly, do coroners have sufficient powers? Should they have more powers to access digital data after the death of a child?
Andy Burrows: We can see what a protracted process it has been. There have been improvements to the process. It is currently a very lengthy process because of the mutual legal assistance treaty arrangements—MLAT, as they are known—by which injunctions have to be sought to get data from US companies. It has taken determination from some coroners to pursue cases, very often going up against challenges. It is an area where we think the arrangements could certainly be streamlined and simplified. The balance here should shift toward giving parents and families access to the data, so that the process can be gone through quickly and everything can be done to ease the heartbreak for families having to go through those incredibly traumatic situations.
Q
Dame Rachel de Souza: There is no silver bullet. This is now a huge societal issue and I think that some of the things that I would want to say would be about ensuring that we have in our educational arsenal, if you like, a curriculum that has a really strong digital media literacy element. To that end, the Secretary of State for Education has just asked me to review how online harms and digital literacy are taught in schools—reviewing not the curriculum, but how good the teaching is and what children think about how the subject has been taught, and obviously what parents think, too.
I would absolutely like to see the tech companies putting some significant funding into supporting education of this kind; it is exactly the kind of thing that they should be working together to provide. So we need to look at this issue from many aspects, not least education.
Obviously, in a dream world I would like really good and strong digital media literacy in the Bill, but actually it is all our responsibility. I know from my conversations with Nadhim Zahawi that he is very keen that this subject is taught through the national curriculum, and very strongly.
Q
Dame Rachel de Souza: It is a massive concern to parents. Parents talk to me all the time about their worries: “Do we know enough?” They have that anxiety, especially as their children turn nine or 10; they are thinking, “I don’t even know what this world out there is.” I think that our conversations with 16 to 21-year-olds were really reassuring, and we have produced a pamphlet for parents. It has had a massive number of downloads, because parents absolutely want to be educated in this subject.
What did young people tell us? They told us, “Use the age controls; talk to us about how much time we are spending online; keep communication open; and talk to us.” Talk to children when they’re young, particularly boys, who are likely to be shown pornography for the first time, even if there are parental controls, around the age of nine or 10. So have age-appropriate conversations. There was some very good advice about online experiences, such as, “Don’t worry; you’re not an expert but you can talk to us.” I mean, I did not grow up with the internet, but I managed parenting relatively well—my son is 27 now. I think this is a constant concern for parents.
I do think that the tech companies could be doing so much more to assist parents in digital media literacy, and in supporting them in how to keep their child safe. We are doing it as the Office of the Children’s Commissioner. I know that we are all trying to do it, but we want to see everyone step up on this, particularly the tech companies, to support parents on this issue.
Q
Could you outline for the Committee the areas where you think the Bill, as currently drafted, contains the most important provisions to protect children?
Dame Rachel de Souza: I was really glad to see, in the rewrite of the Online Safety Bill, a specific reference to the role of age assurance to prevent children from accessing harmful content. That has come across strongly from children and young people, so I was very pleased to see that. It is not a silver bullet, but for too long children have been using entirely inappropriate services. The No. 1 recommendation from the 16 to 21-year-olds, when asked what they wish their parents had known and what we should do, was age assurance, if you are trying to protect a younger sibling or are looking at children, so I was pleased to see that. Companies cannot hope to protect children if they do not know who the children are on their platforms, so I was extremely pleased to see that.
Q
Dame Rachel de Souza: Absolutely. I have called together the tech companies. I have met the porn companies, and they reassured me that as long as they were all brought into the scope of this Bill, they would be quite happy as this is obviously a good thing. I brought the tech companies together to challenge them on their use of age assurance. With their artificial intelligence and technology, they know the age of children online, so they need to get those children offline. This Bill is a really good step in that direction; it will hold them to account and ensure they get children offline. That was a critically important one for me.
I was also pleased to see the holding to account of companies, which is very important. On full coverage of pornography, I was pleased to see the offence of cyber-flashing in the Bill. Again, it is particularly about age assurance.
What I would say is that nudge is not working, is it? We need this in the Bill now, and we need to get it there. In my bit of work with those 2,000 young people, we asked what they had seen in the last month, and 40% of them had not had bad images taken down. Those aspects of the Bill are key.
Andy Burrows: This is a landmark Bill, so we thank you and the Government for introducing it. We should not lose sight of the fact that, although this Bill is doing many things, first and foremost it will become a crucial part of the child protection system for decades to come, so it is a hugely important and welcome intervention in that respect.
What is so important about this Bill is that it adopts a systemic approach. It places clear duties on platforms to go through the process of identifying the reasonably foreseeable harms and requiring that reasonable steps be taken to mitigate them. That is hugely important from the point of view of ensuring that this legislation is future-proofed. I know that many companies have argued for a prescriptive checklist, and then it is job done—a simple compliance job—but a systemic approach is hugely important because it is the basis upon which companies have very clear obligations. Our engagement is very much about saying, “How can we make sure this Bill is the best it can possibly be?” But that is on the bedrock of that systemic approach, which is fundamental if we are to see a culture shift in these companies and an emphasis on safety by design—designing out problems that do not have to happen.
I have engaged with companies where child safety considerations are just not there. One company told me that grooming data is a bad headline today and tomorrow’s chip shop wrapper. A systemic approach is the key to ensuring that we start to redress that balance.
Q
I would like to turn to one or two points that came up in questioning, and then I would like to probe a couple of points that did not. Dame Rachel mentioned advocacy and ensuring that the voice of particular groups—in this context, particularly that of children—is heard. In that context, I would like to have a look at clause 140, which relates to super-complaints. Subsection (4) says that the Secretary of State can, by regulations, nominate which organisations are able to bring super-complaints. These are complaints whereby you go to Ofcom and say that a particular company is failing in its systemic duties.
Subsection (4) makes it clear that the entities nominated to be an authorised super-complainant would include
“a body representing the interests of users of regulated services”,
which would obviously include children. If an organisation such as the Office of the Children’s Commissioner or the NSPCC—I am obviously not prejudicing the future process—were designated as a super-complainant that was able to bring super-complaints to Ofcom, would that address your point about the need for proper advocacy for children?
Dame Rachel de Souza: Absolutely. I stumbled over that a bit when Maria asked me the question, but we absolutely need people who work with children, who know children and are trusted by children, and who can do that nationally in order to be the super-complainants. That is exactly how I would envisage it working.
Andy Burrows: The super-complaint mechanism is part of the well-established arrangements that we see in other sectors, so we are very pleased to see that that is included in the Bill. I think there is scope to go further and look at how the Bill could mirror the arrangements that we see in other sectors—I mentioned the energy, postal and water sectors earlier as examples—so that the statutory user advocacy arrangements for inherently vulnerable children, including children at risk of sexual abuse, mirror the arrangements that we see in those other sectors. That is hugely important as a point of principle, but it is really helpful and appropriate for ensuring that the legislation can unlock the positive regulatory outcomes that we all want to see, so I think it contributes towards really effective regulatory design.
Q
Dame Rachel de Souza: Yes, and I was so pleased to see that. The regulator needs to have teeth for it to have any effect—I think that is what we are saying. I want named senior managers to be held accountable for breaches of their safety duties to children, and I think that senior leaders should be liable to criminal sanctions when they do not uphold their duty of care to children.
Q
I will put my last two questions together. Are you concerned about the possibility that encryption in messaging services might impede the automatic scanning for child exploitation and abuse images that takes place, and would you agree that we cannot see encryption happen at the expense of child safety? Secondly, in the context of the Molly Russell reference earlier, are you concerned about the way that algorithms can promote and essentially force-feed children very harmful content? Those are two enormous questions, and you have only two minutes to answer them, so I apologise.
Dame Rachel de Souza: I am going to say yes and yes.
Andy Burrows: I will say yes and yes as well. The point about end-to-end encryption is hugely important. Let us be clear: we are not against end-to-end encryption. Where we have concerns is about the risk profile that end-to-end encryption introduces, and that risk profile, when we are talking about it being introduced into social networking services and bundled with other sector functionality, is very high and needs to be mitigated.
About 70% of child abuse reports could be lost if Meta goes ahead. That is 28 million reports in the past six months, so it is very important that the Bill can require companies to demonstrate that, if they are running services, they can acquit themselves in terms of the risk assessment processes. We really welcome the simplified child sexual exploitation warning notices in the Bill, which will give Ofcom the power to intervene when companies have not demonstrated that they have been able to introduce end-to-end encryption in a safe and effective way.
One area in which we would like to see the Bill—
Order. I am afraid that brings us to the end of the time allotted for the Committee to ask questions of this panel. On behalf of the Committee, I thank our witnesses for their evidence, and I am really sorry that we could not get Lynn Perry online. Could we move on to the last panel? Thank you very much.
Examination of Witnesses
Ben Bradley and Katy Minshall gave evidence.
We will now hear from Ben Bradley, government relations and public policy manager at TikTok, and Katy Minshall, head of UK public policy at Twitter. We have until 11.25 am for this panel of witnesses. Could the witnesses please introduce themselves for the record?
Ben Bradley: I am Ben Bradley. I am a public policy manager at TikTok, leading on the Bill from TikTok.
Katy Minshall: I am Katy Minshall. I am head of UK public policy for Twitter.
Q
Katy Minshall: Thank you for inviting me here to talk about the Online Safety Bill. On whether the Bill is workable in its current form, on the one hand, we have long been supportive of an approach that looks at overall systems and processes, which I think would capture some of the emerging technologies that you are talking about. However, we certainly have questions about how aspects of the Bill would work in practice. To give you an example, one of the late additions to the Bill was about user verification requirements, which as I understand it means that all category 1 platforms will need to offer users the opportunity to verify themselves and, in turn, those verified users will have the ability to turn off interaction from unverified users. Now, while we share the Government’s policy objective of giving users more control, we certainly have some workability questions.
Just to give you one example, let’s say this existed today and Boris Johnson turned on the feature. In practice, that would mean one of two things. Either the feature is only applicable to users in the UK, meaning that people around the world—in France, Australia, Germany or wherever it may be—are unable to interact with Boris Johnson, and only people who are verified in the UK can reply to him, tweet at him and so on; or it means the opposite, and anyone anywhere can interact with Boris Johnson except those people who have chosen not to verify their identity—perhaps even in his own constituency—who are therefore at a disadvantage in being able to engage with the Prime Minister. That is just one illustration of the sorts of workability questions we have about the Bill at present.
Q
Katy Minshall: I am sorry, do you mean—
Q
Katy Minshall: At present, what would be expected of companies in that scenario is not entirely clear in the Bill. There are certainly examples of content we have removed over the years for abuse and hateful conduct where the account owner that we suspended would have grounds to say, “Actually, this is content of democratic importance.” At the very least, it is worth pointing out that, in practice, it is likely to slow down our systems because we would have to build in extra steps to understand if a tweet or an account could be considered content of democratic importance, and we would therefore treat it differently.
Q
Katy Minshall: That is a really important question. At present, the Bill envisages that we would treat journalistic content differently from other types of content. I think the definition in the Bill—correct me if I get this wrong—is content for the purposes of journalism that is UK linked. That could cover huge swathes of the conversation on Twitter—links to blog posts, citizen journalists posting, front pages of news articles. The Bill envisages our having a system to separate that content from other content, and then treating that content differently. I struggle to understand how that would work in practice, especially when you layer on top the fact that so much of our enforcement is assisted by technology and algorithms. Most of the abusive content we take down is detected using algorithms; we suspend millions of spam accounts every day using automated systems. When you propose to layer something so ambiguous and complicated on top of that, it is worth considering how that might impact on the speed of enforcement across all of our platform.
Q
Katy Minshall: At present, we label a number of accounts as Government actors or state-affiliated media and we take action on those accounts. We take down their tweets and in some cases we do not amplify their content because we have seen in current situations that some Governments are sharing harmful content. Again, I question the ambiguity in the Bill and how it would interact with our existing systems that are designed to ensure safety on Twitter.
Q
Katy Minshall: Until we see the full extent of the definitions and requirements, it is difficult to say exactly what approach we would take under the Bill. Regarding adult content, Twitter is not a service targeting a youth audience, and as you illustrate, we endeavour to give people the ability to express themselves as they see fit. That has to be balanced with the objective of preventing young people from inadvertently stumbling on such content.
Q
Katy Minshall: We find that, in practice, the overwhelming majority of our user base are over the age of 18; both internal and external data show that. Of course young people can access Twitter. I think we have to be very careful that the Bill does not inadvertently lock children out of services they are entitled to use. I am sure we can all think of examples of people under the age of 18 who have used Twitter to campaign, for activism and to organise; there are examples of under-18s using Twitter to that effect. But as I say, predominantly we are not a service targeting a youth audience.
Q
Ben Bradley: Speaking for TikTok, we view ourselves as a second-generation platform. We launched in 2018, and at that time when you launched a product you had to make sure that safety was at the heart of it. I think the Secretary of State herself has said that the Bill process actually predates the launch of TikTok in the UK.
We view ourselves as an entertainment platform, and to express yourself, enjoy yourself and be entertained you have to feel safe, so I do not think we come to this regime kicking and screaming. It is something that we have supported for a long time, and we are already regulated by Ofcom under the video-sharing platform, or VSP, regime. What the Bill will achieve is to raise the floor of industry standards, a bit like GDPR did for data, so that for all the companies in the future—to Alex’s point, this is about the next five to 10 years—there will be a baseline of standards that everyone must comply with and an expectation that you will be regulated. It also takes a lot of these difficult decisions about the balance between safety and expression, privacy and security out of the hands of tech companies and into the hands of a regulator that, of course, will have democratic oversight.
Katy Minshall: I do not have very much more to add. We already engage positively with Ofcom. I remember appearing before a Select Committee back in 2018 or 2019 and saying at that point that we were absolutely supportive of Ofcom taking on this role and of regulation potentially being a game changer. We are supportive of the systems and processes approach and look forward to engaging constructively with the regulation.
Q
Katy Minshall: I am glad you asked that question. The problem with the Bill is it depends on so many things that do not exist yet. We are looking at the Bill and thinking how we can prepare and start thinking about what is necessary, but in practice, content that is harmful to adults and harmful to children has not been set out yet. So much of the Bill depends on secondary legislation and codes of practice, and as I described earlier in the question from Alex Davies-Jones, there are such real workability questions around exemptions and ID verification that I worry there would be the risk of substantial delays at the other end, which I do not think anyone wants to see.
Ben Bradley: It is the same from our perspective. We have our community guidelines and we are committed to enforcing those at the moment. A lot of the detail of the Bill will be produced in Ofcom’s codes of practice but I think it is important we think about operationalising the process, what it looks like in practice and whether it is workable.
Something that Katy mentioned in terms of the user empowerment duties—how prescriptive those would be and how they would work, not just for the platforms of today but for those of the future—is really important. For TikTok, to use a similar example on the user empowerment duties, the intent is that you discover content from all over the world. When you open the app, you are recommended content from all sorts of users, and there is no expectation that those users would be verified. If you have opted into this proposed user empowerment duty, there is a concern that it could exacerbate the risk of filter bubbles, because you would only be receiving content from users within the UK who have verified themselves, and we work very hard to make sure there is a diverse range of recommendations. I think it is a fairly easy fix. Much like elsewhere in the Bill, where Ofcom has flexibility about whether to require specific recommendations, it could have that flexibility in this case as well, considering whether this type of power works for these types of platforms.
To use the example of the metaverse, how would it work once the metaverse is up and running? The whole purpose of the metaverse is a shared environment in which users interact, and because the Bill is so prescriptive at the minute about how this user empowerment duty needs to be achieved, it is not clear, if you were verified and I were unverified and you had opted not to see my content but I moved something in the shared environment, like this glass, whether that would move for everyone. It is a small point, but it just goes to the prescriptiveness of how it is currently drafted and the importance of giving Ofcom the flexibility that it has elsewhere in the Bill, but in this section as well.
Q
Katy Minshall: At present, we follow the industry standard of age self-declaration. How you manage and verify identity—whether using a real-name system or emerging technologies like blockchain or documentation—is at the heart of a range of industries, not just ours.
Technology will change and new products that we cannot even envisage today will come on to the market. In terms of what we would do in relation to the Bill, as I said, until we see the full extent of the definitions and requirements, we cannot really say what exact approach we would take.
Q
Katy Minshall: My understanding of the Bill is that if there is a chance a young person could access your service, you would be expected to undertake the child safety duties, so my understanding is that that would be the case.
Q
Ben Bradley: We are a strictly 13-plus platform. There are basically two approaches to preventing under-age access to our platform. The first is preventing under-age users from signing up. We are rated 12+ in the app stores, so if you have parental controls on those app stores, you cannot download the app. We also have a neutral age gate, which I think is similar to Twitter’s. We do not ask people to confirm whether they are over 13—we do not ask them to tick a box; instead, we ask them to enter their date of birth. If they enter a date of birth that shows they are under 13, they are blocked from re-entering a date of birth, so they cannot just keep trying. We do not say that it is because they are under age; we just say, “TikTok isn’t right for you right now.” That is the first step.
Secondly, we proactively surface and remove under-age users. Whenever a piece of content is reported on TikTok, for whatever reason, the moderator will look at two things: the reason why it was reported and also whether the user is under 13. They can look at a range of signals to do that. Are they wearing a school uniform? Is there a birthday cake in their biography? Do they say that they are in a certain year of school? They can use those signals.
We actually publish every quarter how many suspected under-13s we remove from our platform. I think we are currently the only company to publish that on a quarterly basis, but we think it is important to be transparent about how we are approaching this, to give a sense of the efficacy of our interventions.
On what specifically might change, that is not clear; obviously, we have to wait for further guidance from Ofcom. However, we did carry out research last year with parents and young people in five countries across Europe, including the UK, where we tested different ideas of age assurance and verification, trying to understand what they would like to see. There was not really a single answer that everyone could get behind, but there were concerns raised around data protection and privacy if you were handing over this type of information to the 50 or 60 apps that might be on your phone.
One idea, which people generally thought was a good one, was that when you first get a device and first sign into the app store, you would verify your age there, and then that app store on that device could then pass an additional token to all the apps on your phone suggesting that you are of a certain age, so that we could apply an age-appropriate experience. Obviously that would not stop us doing everything that we currently do, but I think that would be a strong signal. If that were to move forward, we would be happy to explore that.
Q
Ben Bradley: TikTok does not take a filter bubble approach. When you first open the app, you express areas of content that you are interested in and then we recommend content. Because it is short-form, the key to TikTok’s success is sending you diverse content, which allows you to discover things that you might never have previously expressed interest in. I use the examples of Nathan Evans—a postman who went on to have a No. 1 song with “Wellerman”—or even Eurovision. These are things that you would not necessarily express interest in, but when they are recommended to you, you are engaged. Because it is short-form content, we cannot show you the same type of material over and over again—you would not be interested in seeing ten 30-second videos on football, for example. We intentionally try to diversify the feed to express those different types of interests.
Katy Minshall: Our algorithms down-rank harmful content. If you want to see an example live on Twitter, if you send a tweet and get loads of replies, there is a chunk that are automatically hidden at the bottom in a “view more replies” section. Our algorithm works in other ways as well to down-rank content that could be violating our rules. We endeavour to amplify credible content as well. In the explore tab, which is the magnifying glass, we will typically be directing you to credible sources of information—news websites and so on.
In terms of how the Bill would affect that, my main hope is that codes of practice go beyond a leave up or take down binary and beyond content moderation and think about the role of algorithms. At present on Twitter, you can turn the algorithm off in the top right-hand corner of the app, on the sparkle icon. In the long term, I think what we will be aiming for is a choice in the range of algorithms that you could use on services like Twitter. I would hope that the code of practice enables that and does not preclude it as a solution to some of the legal but harmful content we may have in mind.
Q
Katy Minshall: That is absolutely the case and it has been documented by numerous organisations and research. Social media mirrors society and society has the problems you have just described. In terms of how we ensure intersectionality in our policies and approaches, we are guided by our trust and safety council, which is a network of dozens of organisations around the world, 10 of which are here in the UK, and which represents different communities and different online harms issues. Alongside our research and engagement, the council ensures that when it comes to specific policies, we are constantly considering a range of viewpoints as we develop our safety solutions.
Q
Katy Minshall: At the very least, there must be tighter definitions. I am especially concerned when it comes to the news publisher exemption. The Secretary of State has indicated an amendment that would mean that services like Twitter would have to leave such content up while an appeals process is ongoing. There is no timeline given. The definition in the Bill of a news publisher is, again, fairly vague. If Ben and I were to set up a news website, nominally have some standards and provide an email address where people could send complaints, that would enable it to be considered a news publisher under the Bill. If we think about some of the accounts that have been suspended from social media over the years, you can absolutely see them creating a news website and saying, “I have a case to come back on,” to Twitter or TikTok or wherever it may be.
Ben Bradley: We share those concerns. There are already duties to protect freedom of expression in clause 19, and those are welcome. It is the breadth of the definition of journalistic and democratic content that is a concern for us, particularly when it comes to things like the expedited and dedicated appeals mechanism, which those people would be able to claim if their content were removed. We have already seen people like Tommy Robinson on the far right present themselves as journalists or citizen journalists. Giving them access to a dedicated and expedited appeals mechanism is an area of concern.
There are different ways you could address that, such as greater clarity in those definitions and removing the subjective elements. At the minute, the test is whether or not a user considers their content to be journalistic; it is not an objective criterion but a matter of their belief about their own content.
Also, if you look at something like the dedicated and expedited appeals mechanism, could you hold that in reserve, so that if a platform were found to be failing in its duties to journalistic content or in its freedom of expression duties, Ofcom could say, as it can in other areas of the Bill, “Okay, we believe that you need to create this dedicated mechanism, because you have failed to protect those duties”? That would, I think, minimise the risk of exploitation of that mechanism.
Q
Katy Minshall: We would certainly look to engage with Ofcom positively on the requirements it sets out. I am sorry to sound repetitive, but the challenge is that the Bill depends on so many things that do not exist yet and the definitions around what we mean by content harmful to adults or to children. In practice, that makes it challenging to say to you exactly today what approaches we would take. To be clear, we would of course look to continue working with the Government and now Ofcom with the shared objective of making the online space safer for everyone.
Q
Katy Minshall: The lesson of the past three or four years is that we cannot wait for the Bill. We at Twitter are continuing to make changes to our product and our policies to improve safety for everyone, including children.
Q
Katy Minshall: The Bill is a really important piece of regulation, which is why I was so pleased to come today and share our perspectives. We are continuing to engage positively with Ofcom. What I am trying to say is that until we see the full extent of the requirements and definitions, it is hard to set out exactly what steps we would take with regards to the Bill.
Ben Bradley: To add to that point, it is hard to be specific about some of the changes we would make, because a lot of the detail of the Bill defers to Ofcom guidance and the codes of practice. Obviously we all have the duties around child safety and adult safety, but the Ofcom guidance will suggest specific measures that we can take to meet them, some of which we may take already and some of which may go further than what we already do. Once we see the details of the codes, we will be able to give a clearer answer.
Broadly, from a TikTok perspective, through the design of the product and the way we approach safety, we are in a good place for when the new regime comes in, because we are already regulated by Ofcom under the VSP regime, but we would have to wait for the full detail. Beyond some of the companies that you will hear from today, this will touch 20,000 companies and will raise the floor for all the companies regulated under the regime.
Q
Ben Bradley: Yes, the codes of practice will recommend specific steps that we should take to achieve our duties. Until we see the detail of those codes it is hard to be specific about some of the changes that we would make.
Q
Katy Minshall: At present, we have a range of risk assessment processes. We have a risk committee of the board. We do risk assessments when we make a change about—
Q
Katy Minshall: At present, we do not have a specific individual designated to do the children’s risk assessment. The key question is how much Ofcom’s guidance on risk assessments—once we see it—intersects with our current processes, versus the changes we would need to make to those processes.
Q
Katy Minshall: I would have to go away and review the Bill. I do not know whether a specific level is set out in the Bill, but we would want to engage with the regulation and requirements set for companies such as Twitter. However it would be expected that is what we would—
Q
Katy Minshall: Already all the biggest decisions that we make as a company are signed off at the most senior level. We report to our chief executive, Parag Agrawal, and then to the board. As I say, there is a risk committee of the board, so I expect that we would continue to make those decisions at the highest level.
Ben Bradley: It is broadly the same from a TikTok perspective. Safety is a priority for every member of the team, regardless of whether they are in a specific trust and safety function. In terms of risk assessments, we will see from the detail of the Bill at what level they need to be signed off, but our CEO has been clear in interviews that trust and safety is a priority for him and everyone at TikTok, so it would be something to which we are all committed.
Do you think you would be likely to sign it off at the board level—
Q
Katy Minshall: As I say, we share your policy objective of giving users more choice. For example, at present we are testing a tool where Twitter automatically blocks abusive accounts on your behalf. We make the distinction based on an account’s behaviour and not on whether it has verified itself in some way.
Q
I do not think that the concept would necessarily operate as you suggested at the beginning. You suggested that people might end up not seeing content posted by the Prime Minister or another public figure. The concept is that, assuming a public figure would choose to verify themselves, content that they posted would be visible to everybody because they had self-verified. The content in the other direction may or may not be, depending on whether the Prime Minister or the Leader of the Opposition chose to see all content or just verified content, but their content—if they verified themselves—would be universally visible, regardless of whatever choice anyone else exercised.
Katy Minshall: Yes, sorry if I was unclear. I totally accept that point, but it would mean that some people would be able to reply to Boris Johnson and others would not. I know we are short on time, but it is worth pointing out that in a YouGov poll in April, nearly 80% of people said that they would not choose to provide ID documents to access certain websites. The requirements that you describe are based on the assumption that lots of people will choose to do it, when in reality that might not be the case.
A public figure might think, “Actually, I really appreciate that I get retweets, likes and people replying to my tweets,” but if only a small number of users have taken the opportunity to verify themselves, that is potentially a disincentive even to use the system in the first place—and all the while we were creating such a system, we could have been investing in or trying to develop new solutions, such as safety mode, which I described and which tries to prevent abusive users from interacting with you.
Q
Ben, you talked about the age verification measures that TikTok currently takes. For people who do not come via an age-protected app store, it is basically self-declared. All somebody has to do is type in a date of birth. My nine-year-old children could just type in a date of birth that was four years earlier than their real date of birth, and off they would go on TikTok. Do you accept that that is wholly inadequate as a mechanism for policing the age limit of 13?
Ben Bradley: That is not the end of our age assurance system; it is just the very start. Those are the first two things that we have to prevent sign-up, but we are also proactive in surfacing and removing under-age accounts. As I said, we publish every quarter how many suspected under-13s get removed.
Q
Ben Bradley: It is based on a range of signals that they have available to them. As I said, we publish a number every quarter. In the last quarter, we removed 14 million users across the globe who were suspected to be under the age of 13. That is evidence of how seriously we take the issue. We publish that information because we think it is important to be transparent about our efforts in this space, so that we can be judged accordingly.
Q
Earlier, we debated content of democratic importance and the protections that that and free speech have in the Bill. Do you agree that a requirement to have some level of consistency in the way that that is treated is important, particularly given that there are some glaring inconsistencies in the way in which social media firms treat content at the moment? For example, Donald Trump has been banned, while flagrant disinformation by the Russian regime, lying about what they are doing in Ukraine, is allowed to propagate—including the tweets that I drew to your attention a few weeks ago, Katy.
Katy Minshall: I agree that freedom of expression should be top of mind as companies develop safety and policy solutions. Public interest should always be considered when developing policies. From the perspective of the Bill, I would focus on freedom of expression for everyone, and not limit it to content that could be related to political discussions or journalistic content. As Ben said, there are already wider freedom of expression duties in the Bill.
Q
Katy Minshall: Sorry, but I do not know the Bill in those terms, so you would have to tell me the definition.
Order. I am afraid that that brings us to the end of the time allotted for the Committee to ask questions in this morning’s sitting. On behalf of the Committee, I thank our witnesses for their evidence. We will meet again at 2 pm in this room to hear further oral evidence.