Online Safety Bill (Second sitting) Debate
Maria Miller (Conservative, Basingstoke)
You are nodding, Ms Eagelton.
Jessica Eagelton: I agree with Professor McGlynn. Thinking about the broader landscape and intimate image abuse as well, I think there are some significant gaps. There is quite a piecemeal approach at the moment, and we are also seeing issues with enforcement in relation to domestic abuse.
Q
Professor Clare McGlynn: You make a very good point about how, in essence, criminal offences are now going to play a key part in the obligations of platforms under this Bill. In general, historically, the criminal law has not been a friend to women and girls. The criminal law was not written, designed or interpreted with women’s harms in mind. That means that you have a very piecemeal, confusing, out-of-date criminal law, particularly as regards online abuse, yet that is the basis on which we have to go forward. That is an unfortunate place for us to be, but I think we can strengthen it.
We could strengthen schedule 7 by, for example, including trafficking offences. There are tens of thousands of cases of trafficking, as we know from yourselves and whistleblowers, that platforms could be doing so much more about, but that is not a priority offence. The Obscene Publications Act offence of distributing unlawful images is not included. That means that incest porn, for example, is not a priority offence; it could be if we put obscene publications in that provision. Cyber-flashing, which again companies could take a lot of steps to act against, is not listed as a priority offence. Blackmail—sexual extortion, which has risen exponentially during the pandemic—again is not listed as a priority offence.
Deepfake pornography is a rising phenomenon. It is not an offence in English law to distribute deepfake pornography at the moment. That could be a very straightforward, simple change in the Bill. Only a few words are needed. It is very straightforward to make that a criminal offence, thanks to Scots law, where it is actually an offence to distribute altered images. The way the Bill is structured means the platforms will have to go by the highest standard, so in relation to deepfake porn, it would be interpreted as a priority harm—assuming that schedule 7 is actually altered to include all the Scottish offences, and the Northern Irish ones, which are absent at the moment.
The deepfake example points to a wider problem with the criminal law on online abuse: the laws vary considerably across the jurisdictions. There are very different laws on down-blousing, deepfake porn, intimate image abuse, extreme pornography, across all the different jurisdictions, so among the hundreds of lawyers the platforms are appointing, I hope they are appointing some Scots criminal lawyers, because that is where the highest standard tends to be.
Q
Jessica Eagelton: I think something that will particularly help in this instance is having that broad code of practice; that is a really important addition that must be made to the Bill. Refuge is the largest specialist provider of gender-based violence services in the country. We have a specialist tech abuse team who specialise in technology-facilitated domestic abuse, and what they have seen is that, pretty consistently, survivors are being let down by the platforms. They wait weeks and weeks for responses—months sometimes—if they get a response at all, and the reporting systems are just not up to scratch.
I think it will help to have the broad code of practice that Janaya mentioned. We collaborated with others to produce a workable example of what that could look like, for Ofcom to hopefully take as a starting point if it is mandated in the Bill. That sets out steps to improve the victim journey through content reporting, for example. Hopefully, via the code of practice, a victim of deepfakes and other forms of intimate image abuse would be able to have a more streamlined, better response from platforms.
I would also like to say, just touching on the point about schedule 7, that from the point of view of domestic abuse, there is another significant gap in that: controlling and coercive behaviour is not listed, but it should be. Controlling and coercive behaviour is one of the most common forms of domestic abuse. It carries serious risk; it is one of the key aggravating factors for domestic homicide, and we are seeing countless examples of that online, so we think that is another gap in schedule 7.
Ms Walker?
Janaya Walker: Some of these discussions almost reiterate what I was saying earlier about the problematic nature of this, in that so much of what companies are going to be directed to do will be tied only to the specific schedule 7 offences. There have been lots of discussions about how you respond to some harms that reach a threshold of criminality and others that do not, but that really contrasts with the best practice approach to addressing violence against women and girls, which is really trying to understand the context and all of the ways that it manifests. There is a real worry among violence against women and girls organisations about the minimal response to content that is harmful to adults and children, but will not require taking such a rigorous approach.
Having the definition of violence against women and girls on the face of the Bill allows us to retain those expectations on providers as technology changes and new forms of abuse emerge, because the definition is there. It is VAWG as a whole that we are expecting the companies to address, rather than a changing list of offences that may or may not be captured in criminal law.
Q
Lulu Freemont: It is a great question. One of the biggest challenges is capacity. We hear quite a lot from the smaller tech businesses within our membership that they will have to divert their staff away from existing work to comply with the regime. They do not have compliance teams, and they probably do not have legal counsel. Even at this stage, to try to understand the Bill as it is currently drafted—there are lots of gaps—they are coming to us and saying, “What does this mean in practice?” They do not have the answers, or the capability to identify that. The attendant regulatory costs are really fundamental: thinking about the staff you have and the cost involved, and making sure the regulation is proportionate, given the need to divert effort away from business development or whatever other work you might be doing in your business.
Another real risk, and something in the Bill that smaller businesses are quite concerned about, is the potential proposal to extend the senior management liability provisions. We can understand them being in there to enable the regulators to do their job—information requests—but if there is any extension into individual pieces of content, coupled with a real lack of definitions, those businesses might find themselves in the position of restricting access to their services, removing too much content or feeling like they cannot comply with the regime in a proportionate way. That is obviously a very extreme case study. It will be Ofcom’s role to make sure that those businesses are being proportionate and understand the provisions, but the senior management liability does have a real, chilling impact on the smaller businesses within our membership.
Adam Hildreth: One of the challenges that we have seen over the last few years is that you can have a business that is small in revenue but has a huge global user base, with millions of users, so it is not really a small business; it just has not got to the point where it is getting advertisers and getting users to pay for it. I have a challenge on the definition of a small to medium-sized business. Absolutely, for start-ups with four people in a room—or perhaps even still just two—that do not have legal counsel or anything else, we need to make it simple for those types of businesses to ingest and understand what the principles are and what is expected of them. Hopefully they will be able to do quite a lot early on.
The real challenge comes when someone labels themselves as a small business but they have millions of users across the globe—and sometimes actually quite a lot of people working for them. Some of the biggest tech businesses in the world that we all use had tens of people working for them at one point in time, when they had millions of users. That is the challenge, because there is an expectation for the big-tier providers to be spending an awful lot of money, when the small companies are actually directly competing with them. There is a challenge in understanding the definition of a small business and whether that is revenue-focused, employee-focused or about how many users it has—there may be other metrics.
Ian Stevenson: One of the key questions is how much staffing this will actually take. Every business in the UK that processes data is subject to GDPR from day one. Few of them have a dedicated data protection officer from day one; it is a role or responsibility that gets taken on by somebody within the organisation, or maybe somebody on the board who has some knowledge. That is facilitated by the fact that there is a really clear set of requirements, and there are a lot of services you can buy and consume that help you deliver compliance. If we can get to a point where we have codes of practice that make very clear recommendations, then even small organisations that perhaps do not have that many staff to divert should be able to achieve some of the basic requirements of online safety by buying in the services and expertise that they need. We have seen with GDPR that many of those services are affordable to small business.
If we can get the clarity of what is required right, then the staff burden does not have to be that great, but we should all remember that the purpose of the Bill is to stop some of the egregiously bad things that happen to people as a result of harmful content, harmful behaviours and harmful contact online. Those things have a cost in the same way that implementing data privacy has a cost. To come back to Lulu’s point, it has to be proportionate to the business.
Q
Adam Hildreth: What we are seeing from the people that are getting really good at this and that really understand it is that they are treating this as a proper risk assessment, at a very serious level, across the globe. When we are talking about tier 1s, they are global businesses. When they do it really well, they understand risk and how they are going to roll out systems, technology, processes and people in order to address that. That can take time. Yes, they understand the risk, who it is impacting and what they are going to do about it, but they still need to train people and develop processes and maybe buy or build technology to do it.
We are starting to see that work being done really well. It is done almost in the same way that you would risk assess anything else: corporate travel, health and safety in the workplace—anything. It should really become one of those pillars. All those areas I have just gone through are regulated. Once you have regulation there, it justifies why someone is doing a risk assessment, and you will get businesses and corporates going through that risk assessment process. We are seeing others that do not do the same level of risk assessment and they do not have that same buy-in.
Q
Lulu Freemont: TechUK’s membership is really broad. We have cyber and defence companies in our membership, and large platforms and telcos. We speak on behalf of the sector. We would say that there is a real commitment to safety and security.
To bring it back to regulation, the risk-based approach is very much the right one—one that we think has the potential to really deliver—but we have to think about the tech ecosystem and its diversity. Lots of TechUK members are on the business-to-business side and are thinking about the role that they play in supporting the infrastructure for many of the platforms to operate. They are not entirely clear that they are exempt in the Bill. We understand that it is a very clear policy intention to exempt those businesses, but they do not have the level of legal clarity that they need to understand their role as access facilities within the tech ecosystem.
That is just one example of a part of the sector that you would not expect to be part of this culture change or regulation but which is being caught in it slightly as an unintended consequence of legal differences or misinterpretations. Coming from that wide-sector perspective, we think that we need clarity on those issues to understand the different functionalities, and each platform and service will be different in their approach to this stuff.
Q
Ian Stevenson: I think you have to look at the change you are trying to effect. For many people in the sector, there is a lack of awareness about what happens when the need to consider safety in building features is not put first. Even when you realise how many bad things can happen online, if you do not know what to do about it, you tend not to be able to do anything about it.
If we want to change culture—it is the same for individual organisations as for the sector as a whole—we have to educate people on what the problem is and give them the tools to feel empowered to do something about it. If you educate and empower people, you remove the barrier to change. In some places, an extremely ethical, people-centric and safety-focused culture very naturally emerges, but in others, less so. That is precisely where making it a first-class citizen in terms of risk assessment for boards and management becomes so important. When people see management caring about things, that gets pushed out through the organisations.
Q
“the safest place in the world to be online”?
Lulu Freemont: First, I want to outline that there are some strong parts in the Bill that the sector really supports. I think the majority of stakeholders would agree that the objectives are the right ones. The Bill tries to strike a balance between safety, free speech and encouraging innovation and investment in the UK’s digital economy. The approach—risk-based, systems-led and proportionate—is the right one for the 25,000 companies that are in scope. As it does not focus on individual pieces of content, it has the potential to be future-proof and to achieve longer-term outcomes.
The second area in the Bill that we think is strong is the prioritisation of illegal content. We very much welcome the clear definitions of illegal content on the face of the Bill, which are incredibly useful for businesses as they start to think about preparing for their risk assessment on illegal content. We really support Ofcom as the appropriate regulator.
There are some parts of the Bill that need specific focus and, potentially, amendments, to enable it to deliver on those objectives without unintended consequences. I have already mentioned a few of those areas. The first is defining harmful content in primary legislation. We can leave it to codes to identify the interpretations around that, but we need definitions of harmful content so that businesses can start to understand what they need to do.
Secondly, we need clarity that businesses will not be required to monitor every piece of content as a result of the Bill. General monitoring is prohibited in other regions, and we have concerns that the Online Safety Bill is drifting away from those norms. The challenges of general monitoring are well known: it encroaches on individual rights and could result in the over-removal of content. Again, we do not think that the intention is to require companies of all sizes to look at every piece of content on their site, but it might be one of the unintended consequences, so we would like an explicit prohibition of general monitoring on the face of the Bill.
We would like to remove the far-reaching amendment powers of the Secretary of State. We understand the need for technical powers, which are best practice within regulation, but taking those further so that the Secretary of State can amend the regime in such an extreme way to align it with public policy is of real concern, particularly to smaller businesses looking to confidently put in place systems and processes. We would like some consideration of keeping senior management liability as it is. Extending that further is only going to increase the chilling impact that it is having and the environment it is creating for UK investment. The final area, which I have just spoken about, is clarifying the scope. The business-to-business companies in our membership need clarity that they are not in scope and for that intention to be made clear on the face of the Bill.
We really support the Bill. We think it has the potential to deliver. There are just a few key areas that need to be changed or amended slightly to provide businesses with clarity and reassurances that the policy intentions are being delivered on.
Adam Hildreth: To add to that—Lulu has covered absolutely everything, and I agree—the critical bit is not monitoring individual pieces of content. Once you have done your risk assessment and put in place your systems, processes, people and technology, that is what people are signing up for. They are not signing up to an end assessment in which, because one piece of harmful content is found to exist, or maybe many, they are deemed to have failed to abide by what they really signed up to.
That is the worry from my perspective: that people do a full risk assessment, implement all the systems, put in place all the people, technology and processes that they need, do the best job they can and have understood what investment they are putting in, and someone comes along and makes a report to a regulator—Ofcom, in this sense—and says, “I found this piece of content there.” That may expose weaknesses, but the very best risk assessments are ongoing ones anyway, where you do not just put it away in a filing cabinet somewhere and say, “That’s done.” The definitions of online harms and harmful content change on a daily basis, even for the biggest social media platforms; they change all the time. There was talk earlier about child sexual abuse material that appears as cartoons, which would not necessarily be defined by certain legislation as illegal. Hopefully the legislation will catch up, but that is where that risk assessment needs to be made again, and policies may need to be changed and everything else. I just hope we do not get to the point where the individual monitoring of content, or individual content misses, becomes the focus of the Bill; the approach taken to online safety should be this overall one.
Q
Jared Sine: Sure. I would add a couple of thoughts. We run our own age verification scans, which we do through the traditional age gate but also through a number of other scans that we run.
Again, online dating platforms are a little different. We warn our users upfront that, as they are going to be meeting people in real life, there is a fine balance between safety and privacy, and we tend to lean a little more towards safety. We announce to our users that we are going to run message scans to make sure there is no inappropriate behaviour. In fact, one of the tools we have rolled out is called “Are you sure? Does this bother you?”, through which our AI looks at the message a user is planning to send and, if it is an inappropriate message, a flag will pop up that says, “Are you sure you want to send this?” Then, if they go ahead and send it, the person receiving it at the other end will get a pop-up that says, “This may not be something you want to see. Go ahead and click here if you want to.” If they open it, they then get another pop-up that asks “Does this bother you?” and, if it does, you can report the user immediately.
We think that is an important step to keep our platform safe. We make sure our users know that it is happening, so it is not under the table. However, we think there has to be a balance between safety and privacy, especially when we have users who are meeting in person. We have actually demonstrated on our platforms that this reduces harassment and behaviour that would otherwise be untoward or that you would not want on the platform.
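As a purely illustrative sketch of the two-sided flow Jared Sine describes, where a drafted message is screened before sending (“Are you sure?”) and again on receipt (“Does this bother you?”), with a one-tap report option: the classifier, prompts and data shapes below are assumptions for illustration, not Match Group’s actual implementation.

```python
# Illustrative sketch of an "Are you sure?" / "Does this bother you?" flow.
# The classifier, prompts and data model are hypothetical, not Match Group's
# actual implementation.
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    recipient: str
    text: str


def looks_inappropriate(text: str) -> bool:
    """Stand-in for an ML classifier that scores a drafted message."""
    flagged_terms = {"example_harassing_phrase"}  # placeholder term list
    return any(term in text.lower() for term in flagged_terms)


def on_send_attempt(msg: Message) -> dict:
    """Sender-side check: prompt 'Are you sure?' before a flagged message goes out."""
    if looks_inappropriate(msg.text):
        return {"action": "prompt_sender", "prompt": "Are you sure you want to send this?"}
    return {"action": "deliver"}


def on_receive(msg: Message) -> dict:
    """Recipient-side check: hide flagged content behind 'Does this bother you?'."""
    if looks_inappropriate(msg.text):
        return {
            "action": "blur_and_ask",
            "prompts": ["This may not be something you want to see.",
                        "Does this bother you?"],
            "report_option": True,  # one-tap report of the sender
        }
    return {"action": "show"}
```

The point of the sketch is only that scanning happens at both ends with the user's knowledge; any real deployment would sit behind a trained classifier and the platform's moderation pipeline.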
We think that we have to be careful not to tie the hands of industry to be able to come up with technological solutions and advances that can work side by side with third-party tools and solutions. We have third-party ID verification tools that we use. If we identify or believe a user is under the age of 18, we push them through an ID verification process.
The other thing to remember, particularly as it relates to online dating, is that companies such as ours and Bumble have done the right thing by saying “18-plus only on our platforms”. There is no law that says that an online dating platform has to be 18-plus, but we think it is the right thing to do. I am a father of five kids; I would not want kids on my platform. We are very vigilant in taking steps to make sure we are using the latest and greatest tools available to try to make sure that our platforms are safe.
Q
Dr Rachel O'Connell: I am the author of the technical standard PAS 1296, an age checking code of practice, which is becoming a global standard at the moment. We worked a lot with privacy and security and identity experts. It should have taken nine months, but it took a bit longer. There was a lot of thought that went into it. Those systems were developed to, as I just described, ensure a zero data, zero knowledge kind of model. What they do is enable those verifications to take place and reduce the requirement. There is a distinction between monitoring your systems, as was said earlier, for age verification purposes and abuse management. They are very different. You have to have abuse management systems. It is like saying that if you have a nightclub, you have to have bouncers. Of course you have to check things out. You need bouncers at the door. You cannot let people go into the venue, then afterwards say that you are spotting bad behaviour. You have to check at the door that they are the appropriate age to get into the venue.
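A minimal sketch of the “zero data, zero knowledge” pattern Dr O’Connell refers to, under the assumption that the age-check provider returns only a yes/no assertion to the relying platform; the function name and response shape are illustrative and are not drawn from PAS 1296 itself.

```python
# Illustrative "zero data" age assertion: the platform checking at the door only
# ever receives a yes/no answer, not a name, address or date of birth.
# The response shape is an assumption for illustration, not part of PAS 1296.

def age_assertion(required_years: int, verified_age_years: int) -> dict:
    """Runs at the age-check provider; only the return value reaches the platform."""
    return {
        "requirement": f"{required_years}+",
        "met": verified_age_years >= required_years,  # no personal data disclosed
    }

print(age_assertion(18, 16))  # {'requirement': '18+', 'met': False}
```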
Q
Dr Rachel O'Connell: I think you guys will be aware of the DCMS programme of work about the verification of children last year. As part of that, there was a piece of research that asked children what they would think about age verification. The predominant thing that came across from young children is that they are really tired of having to deal with weirdos and pervs. It is an everyday occurrence for them.
To just deviate slightly to the business model, my PhD is in forensics and tracking paedophile activity on the internet way back in the ’90s. At that time, guys would have to look for kids. Nowadays, on TikTok and various livestream platforms, the algorithms recognise that an individual—a man, for example—is very interested in looking at content produced by kids. The algorithms see that a couple of times and go, “You don’t have to look anymore. We are going to seamlessly connect you with kids who livestream. We are also going to connect you with other men that like looking at this stuff.”
If you are on these livestream sites at 3 o’clock in the morning, you can see these kids who are having sleepovers or something. They put their phone down to record whatever the latest TikTok dance is, and they think that they are broadcasting to other kids. You would assume that, but what they then hear is the little pops of love hearts coming on to the screen and guys’ voices saying, “Hey sweetie, you look really cute. Lick your lips. Spread your legs.” You know where I am going with this.
The Online Safety Bill should look at the systems and processes that underpin these platforms, because there is gamification of kids. Kids want to become influencers—maybe become really famous. They see the views counter and think, “Wow, there are 200 people looking at us.” Those people are often men, who will co-ordinate their activities at the back. They will push the boys a little bit further, and if a girl is on her own, they will see. If the child does not respond to the request, they will drop off. The kid will think, “Oh my God. Well, maybe I should do it this one time.”
What we have seen is a quadrupling of child sexual abuse material online that has been termed “self-generated”, because the individual offender has not actually produced it. From a psychological perspective, it is a really bad name, but that is a separate topic. Imagine if that was your kid who had been coerced into something that had then been labelled as “self-generated”. The business models that underpin those processes that happen online are certainly something that should be really within scope.
We do not spend enough time thinking about the implications of the use of recommendation engines and so on. I think the idea of the VPN is a bit of a red herring. Children want safety. They do not want to have to deal with this sort of stuff online. There are other elements. If you were a child and felt that you might be a little bit fat, you could go on YouTube and see whether you could diet or something. The algorithms will pick that up also. There is a tsunami of dieting and thinspiration stuff. There is psychological harm to children as a result of the systems and processes that these companies operate.
There was research into age verification solutions and trials run with BT. Basically, the feedback from both parents and children was, “Why doesn’t this exist already?” If you go into your native EE app where it says, “Manage my family” and put in your first name, last name and mobile number and your child’s first name, last name and date of birth, it is then verified that you are their parent. When the child goes on Instagram or TikTok, they put in their first and last name. The only additional data point is the parent’s mobile number. The parent gets a notification and they say yes or no to access.
There are solutions out there. As others have mentioned, the young people want them and the parents want them. Will people try to work around them? That can happen, but if it is a parent-initiated process or a child-initiated process, you have the means to know the age bands of the users. From a business perspective, it makes a lot of sense because you can have a granular approach to the offerings you give to each of your customers in different age bands.
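A hedged sketch of the parent-initiated flow just described, in which the only extra data point the platform collects is the parent’s mobile number and the result is an age band rather than a date of birth; the registry, function names and notification step are assumptions for illustration, not any operator’s or platform’s actual API.

```python
# Hypothetical sketch of a parent-initiated age check. The registry, function
# names and notification step are assumptions, not a real operator API.
from datetime import date

# parent_mobile -> {child_name: date_of_birth}, populated when the parent
# verifies themselves with their mobile operator or identity provider.
REGISTERED_FAMILIES = {
    "+447700900123": {"alex example": date(2011, 3, 14)},
}


def age_band(dob: date) -> str:
    today = date.today()
    years = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    if years < 13:
        return "under-13"
    return "13-17" if years < 18 else "18+"


def request_child_signup(child_name: str, parent_mobile: str) -> dict:
    """The platform asks the verifier; in a real flow the parent would now receive
    a push or SMS prompt and approve or refuse. Only an age band is returned."""
    dob = REGISTERED_FAMILIES.get(parent_mobile, {}).get(child_name.lower())
    if dob is None:
        return {"status": "unverified"}  # no registered parent-child link found
    return {"status": "verified", "age_band": age_band(dob)}  # no DOB disclosed


print(request_child_signup("Alex Example", "+447700900123"))
```

The design point, as the witness notes, is that the platform ends up knowing only the age band of each user, which is enough to tailor what it offers to different age groups.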
Nima Elmi: Just to add to what Rachel has said, I think she has articulated extremely well the complexities of the issues around not only age verification, but business models. Ultimately, this is such a complex matter that it requires continued consultation across industry, experts and civil society to identify pragmatic recommendations for industry when it comes to not only verifying the age of their users, but thinking about the nuanced differences between platforms, purposes, functionality and business models, and what that means.
In the context of the work we do here at Bumble, we are clear about our guidelines requiring people to be 18-plus to download our products from app stores, as well as ensuring that we have robust moderation processes to identify and remove under-18s from our platforms. There is an opportunity here for the Bill to go further in providing clarity and guidance on the issue of accessibility of children to services.
Many others have said over the course of today’s evidence that there needs to be a bit more colour put into definitions, particularly when certain sections of the Bill refer to what constitutes a “significant number of users” for determining child accessibility to platforms. Coupled with the fact that age verification or assurance is a complex area in and of itself and the nuance between how social media may engage with it versus a dating or social networking platform, I think that more guidance is very much needed and a much more nuanced approach would be welcome.
I have three Members and the Minister to get in before 5 o’clock, so I urge brief questions and answers please.
Ms McDonald?
Rhiannon-Faye McDonald: I was just thinking of what I could add to what Susie has said. My understanding is that it is difficult to deal with cross-platform abuse because of the limited ability to share information between different platforms—for example, where a platform has identified an issue or offender and not shared that information with other platforms on which someone may continue the abuse. I am not an expert in tech and cannot present you with a solution to that, but I feel that sharing intelligence would be an important part of the solution.
Q
Susie Hargreaves: We are very clear that end-to-end encryption should be within scope, as you have heard from other speakers today. Obviously, the huge threat on the horizon is the end-to-end encryption on Messenger, which would result in the loss of millions of images of child sexual abuse. In common with previous speakers, we believe that the technology is there. We need not to demonise end-to-end encryption, which in itself is not bad; what we need to do is ensure that children do not suffer as a consequence. We must have mitigations and safety mechanisms in place so that we do not lose these child sexual abuse images, because that means that we will not be able to find and support those children.
Alongside all the child protection charities, we are looking to ensure that protections equivalent to the current ones are in place in the future. We do not accept that the internet industry cannot put them in place. We know from experts such as Dr Hany Farid, who created PhotoDNA, that those mechanisms and protections exist, and we need to ensure that they are put in place so that children do not suffer as a consequence of the introduction of end-to-end encryption. Rhiannon has her own experiences as a survivor, so I am sure she would agree with that.
Rhiannon-Faye McDonald: I absolutely would. I feel very strongly about this issue, which has been concerning me for quite some time. I do not want to share too much, but I am a victim of online grooming and child sex abuse. There were images and videos involved, and I do not know where they are and who has seen them. I will never know that. I will never have any control over it. It is horrifying. Even though my abuse happened 19 years ago, I still walk down the street wondering whether somebody has seen those images and recognises me from them. It has a lifelong impact on the child, and it impacts on recovery. I feel very strongly that if end-to-end encryption is implemented on platforms, there must be safeguards in place to ensure we can continue to find and remove these images, because I know how important that is to the subject of those images.
Q
Susie Hargreaves: We just want to make sure that the ability to scan in an end-to-end encrypted environment is included in the Bill in some way.
Q
Susie Hargreaves: I think with technology you can never stand still. We do not know what is coming down the line. We have to deal with the here and now, but we also need to be prepared to deal with whatever comes down the line. The answer, “Okay, we will just get people to report,” is not a good enough replacement for the ability to scan for images.
When the privacy directive was introduced in Europe and Facebook stopped scanning for a short period, we lost millions of images. What we know is that we must continue to have those safety mechanisms in place. We need to work collectively to do that, because it is not acceptable to lose millions of images of child sexual abuse and create a forum where people can safely share them without any repercussions, as Rhiannon says. One survivor we talked to in this space said that one of her images had been recirculated 70,000 times. The ability to have a hash of a unique image, go out and find those duplicates and make sure they are removed means that people are not re-victimised on a daily basis. That is essential.
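As a minimal sketch of the hash-matching approach Susie Hargreaves describes, assuming a hypothetical hash list supplied by a body such as the IWF; it uses an exact SHA-256 digest for simplicity, whereas production systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding.

```python
# Minimal sketch of hash-based matching against a list of known images.
# SHA-256 is used only for illustration; real deployments rely on perceptual
# hashes so that resized or re-encoded copies still match.
import hashlib

KNOWN_IMAGE_HASHES: set[str] = set()  # hashes supplied by a body such as the IWF


def image_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()


def register_known_image(image_bytes: bytes) -> None:
    KNOWN_IMAGE_HASHES.add(image_hash(image_bytes))


def should_block_upload(image_bytes: bytes) -> bool:
    """True if the upload matches a known hash and should be blocked and reported."""
    return image_hash(image_bytes) in KNOWN_IMAGE_HASHES
```

Because a single hash identifies every exact duplicate of an image, one entry in the list is enough to stop the same picture being recirculated tens of thousands of times.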