Online Safety Bill Debate
Baroness Harding of Winscombe (Conservative - Life peer)
Lords Chamber

My Lords, I rise to introduce this group. On Tuesday in Committee, I said that, having reached day 8 of the Committee, we had all found our roles; now, I find myself in a different role. The noble Baroness, Lady Kidron, is taking an extremely well-earned holiday and is not able to be in the House today. She has asked me to introduce this group and specifically to speak to Amendment 125 in her name.
I strongly support all the amendments in the group, particularly those that would result in a review, but will limit my words to Amendment 125. I also thank the other co-signatories: the noble Baroness, Lady Finlay, who is in her place, and my noble friend Lord Sarfraz, who made such a compelling speech at Second Reading on the need for the Bill to consider emerging technologies but who is, sadly, abroad on government business.
I start with something said by Lord Puttnam, and I paraphrase: that we were forbidden from incorporating the word “digital” throughout the whole process of scrutiny of the Communications Act 2003. As a number of us observed at the time, he said, it was a terrible mistake not to address or anticipate these issues when it was obvious that we would have to return to it all at some later date. The Online Safety Bill is just such a moment: “Don’t close your eyes and hope”, he said, “but look to the future and make sure that it is represented in the Bill”.
With that in mind, this amendment is very modest. I will be listening carefully, as I am sure the noble Baroness, Lady Kidron, will from a distance, to my noble friend the Minister, because if each aspect of this amendment is already covered in the Bill, as I suspect he will want to say, then I would be grateful if he could categorically explain how that is the case at the Dispatch Box, in sufficient detail that a future court of law can clearly understand it. If he cannot state that, then I will be asking the House, as I am sure the noble Baroness, Lady Kidron, would, to support the amendment’s inclusion in the Bill.
There are two important supporters of this amendment. If the Committee will forgive me, I want to talk briefly about each of them, because of their deep understanding of the issues. The first is an enforcement officer whom I shall not name, but I and the noble Baroness, Lady Kidron, want to thank him and his team for the extraordinary work that they do searching out child sexual abuse in the metaverse. The second, whom I will come to in a little bit, is Dr Geoffrey Hinton, a pioneer of the neural network and most often referred to as “the godfather of AI”, whom the noble Baroness, Lady Kidron, met last week. Both are firm supporters of this amendment.
The amendment is part of a grouping labelled future-proofing but, sadly, this is not in the future. It is with us now. Child sexual abuse in the metaverse is growing phenomenally. Two months ago, at the behest of the Institution of Engineering and Technology, the noble Baroness, Lady Kidron, hosted a small event at which members of a specialist police unit explained to colleagues from both Houses that what they were finding online was amongst the worst imaginable but was not adequately caught by existing laws. I should just warn those listening to or reading this—I am looking up at the Public Gallery, where I see a number of young people listening to us—that I am about to briefly recount some really horrific stuff from what we saw and heard.
The quality of AI imagery is now at the point where a realistic AI image of a child can be produced. Users are able to produce or order indecent AI images based on a child known to them. Simply by uploading a picture of a next-door neighbour’s child or a family member, or taking a child’s image from social media and putting that face on existing abuse images, they can create a body for that picture or, increasingly, make it 3D and take it into an abuse room. The type of imagery produced can vary from suggestive or naked to penetrative sex; for the most part, I do not think I should be repeating in this Chamber the scenarios that play out.
VR child avatars can be provided with a variety of bespoke abuse scenarios, which the user can then interact with. Tailor-made VR experiences are being advertised for production on demand. They can be made to meet specific fetishes or to feature a specific profile of a child. The production of these VR abuse images is a commercial venture. Among the many chilling facts we learned was that the Meta Quest 2 (formerly the Oculus Quest 2), which is the best-selling VR headset in the UK, links up to an app that is downloaded on to the user’s mobile phone. Within that app, the user can search for other users to follow and engage with—either through the VR headset or via instant messaging in the mobile app. A brief search through the publicly viewable user profiles on this app shows a huge number of profiles with usernames indicative of a sexual interest in children.
Six weeks after the event, the noble Baroness, Lady Kidron, spoke to the same officer. He said that already the technology was a generation on—in just six weeks. The officer made a terrible and terrifying prediction: he said that in a matter of months this violent imagery, based on and indistinguishable from an actual known child, will evolve to include moving 3D imagery and that at that point, the worlds of VR and AI will meet and herald a whole new phase in offending. I will quote this enforcement officer. He said:
“I hate to think where we will be in six months from now”.
While this group is labelled as future-proofing the Bill, I remind noble Lords that in six months’ time, the provisions of the Bill will not have been implemented. So this is not about the future; it is actually about the now.
My Lords, like others, I thank the Whips for intervening to protect children from hearing details that are not appropriate for the young. I have to say that I was quite relieved because I was rather squirming myself. Over the last two days of Committee, I have been exposed to more violent pornographic imagery than any adult, never mind a child, should be exposed to. I think we can recognise that this is certainly a challenging time for us.
I do not want any of the comments I will now make to be seen as minimising understanding of augmented reality, AI, the metaverse and so on, as detailed so vividly by the noble Baronesses, Lady Harding and Lady Finlay, in relation to child safety. However, I have some concerns about this group, in terms of proportionality and unintended outcomes.
Amendment 239, in the names of the right reverend Prelate the Bishop of Oxford, the noble Lord, Lord Clement-Jones, and the noble Viscount, Lord Colville of Culross, sums up some of my concerns about a focus on future-proofing. This amendment would require Ofcom to produce reports about future risks, which sounds like a common-sense demand. But my question is whether we are overly focused on risks and never on opportunities. There is a danger that the Bill will end up recommending that we see these new technologies only in a negative way, and that we in fact grant more powers to expand the scope of what counts as harmful content, in a way that stifles speech.
Beyond the Bill, I am more generally worried about what seems to be becoming a moral panic about AI. The precautionary principle is being adopted, which could mean stifling innovation at source and preventing the development of great technologies that could be of huge benefit to humanity. The over-focus on the dangers of AI and augmented reality could mean that we ignore the potentially large benefits. For example, with AI, everyone could have an immediately responsive GP in their pocket—goodness knows, for those trying to get an appointment, that could be of great use and benefit. It could mean that students have an expert tutor in every subject, just one message away. The noble Baroness, Lady Finlay, spoke about the fantastic medical breakthroughs that augmented reality can bring to handling neurological damage. Last night, I cheered when I saw how someone who has never been able to walk now can, through those kinds of technologies. I thought, “Isn’t this a brilliant thing?” So all I am suggesting is that we have to be careful that we do not see these new technologies only as tools for the most perverted form of activity among a small minority of individuals.
I note, with some irony, that fewer qualms were expressed by noble Lords about the use of AI when it was proposed to scan and detect speech or images in encrypted messages. As I argued at the time, this would be a threat to WhatsApp, Signal and so on. Clauses 110 and 124 have us using AI as a blunt proactive technology of surveillance, despite the high risks of inaccuracy, error and false flags. But there was great enthusiasm for AI then, when it was having an impact on individuals’ freedom of expression—yet, here, all we hear are the negatives. So we need to be balanced.
I am also concerned about Amendment 125, which illustrates the problem of seeing innovation only as a threat to safety and a source of potential problems. For example, if the Bill considers AI-generated content to be user-generated content, only large technology companies will have the resources—lawyers and engineers—necessary to proceed while avoiding crippling liability.
In practice, UK users risk being blocked out of new technologies if we are not careful about how we regulate here. For example, users in the European Union currently cannot access the Google Bard AI assistant because of GDPR regulations. That would be a great loss, because Bard is potentially a great gain. Despite the challenges from the likes of ChatGPT and Bard that we keep reading about, with people panicking that they will lead to wide-scale cheating in education and so on, these have huge potential as beneficial technologies, as I said.
I have mentioned that one of the unintended consequences—it would be unintended—of the whole Bill could be that the UK becomes a hostile environment for digital investment and innovation. Companies that have attracted investment—like DeepMind, a Google-owned, UK-based AI company—could be forced to leave the UK, doing huge damage to the UK’s digital sector. How can the UK be a science and technology superpower if we end up endorsing anti-innovation, anti-progress and anti-business measures by being overly risk averse?
I have the same concerns about Amendment 286, which requires periodic reviews of new technology content environments such as the metaverse and other virtual and augmented reality settings. I worry that technology companies will not be able to invest confidently in new technologies if there is a constant threat of new regulations and new problems on the horizon.
I have a query that mainly relates to Amendment 125 but that is also more general. If virtual and augmented reality actually involve user-to-user interaction, as in the metaverse, are they not already covered by the Bill? Why do we need to add them in? The noble Baroness, Lady Harding, said that it has got to the point where we are not able to distinguish fake from real, and augmented reality from reality. But she concludes that that means we should treat fake as real, which seems to me to rather muddy the waters and make it a fait accompli. I personally—
I am sorry to interrupt, but I will make a clarification; the noble Baroness is misinterpreting what I said. I was actually quoting the godfather of AI and his concerns that we are fast approaching a space where it will be impossible—I did not say that it currently is—to distinguish between a real child being abused and a machine learning-generated image of a child being abused. So, first, I was quoting the words of the godfather of AI, rather than my own, and, secondly, he was looking forward—only months, not decades—to a very real and perceived threat.
I personally think that it is a pessimistic view of the future to suggest that humanity cannot rise to the task of being able to distinguish between deep fakes and real images. Organising all our lives, laws and liberties around the deviant predilections of a minority of sexual offenders, on the basis that none of us will be able to tell the difference in the future when it comes to that kind of activity, is rather dangerous for freedom and innovation.
I am grateful to the noble Baroness. That is very helpful.
That is exactly the same issue with child sexual abuse images—it is about the way in which criminal law is written. Not surprisingly, it is not up to date with the evolution of technology.
I am grateful for that intervention as well. That summarises the core questions that we have for the Minister. We have three areas of questioning for him. The first is the question of scope and the extent to which he can assure us that the Bill as drafted will be robust in covering the metaverse and bots, which are the issues that have been raised today. The second is on behaviours and relates to the two interventions that we have just had. We have been asking whether, for behaviours that are criminal today, that criminality will stretch to similar forms of behaviour taking place in new environments—let us put it that way. The behaviour, the intent and the harm are the same, but the environment is different. We want to understand the extent to which the Government are thinking about that, where that thinking is happening and how confident they are that they can deal with it.
Finally, on the question of agency, how do the Government expect to deal with the fact that we will have machines operating in a user-to-user environment when the connection between the machine and another individual user is qualitatively different from anything that we have seen before? Those are just some small questions for the Minister on this Thursday afternoon.
I certainly concur that we should discuss the issue in greater detail. I am very happy to do so with the noble Lord, the noble Baroness and others who wish to, along with officials. If we can bring some worked examples of what “in control” and “out of control” bots may be, that would be helpful.
I hope the points I have set out in relation to the other issues raised in this group and the amendments before us are satisfactory to noble Lords and that they will at this point be content not to press their amendments.
My Lords, I thank all noble Lords who have contributed to a thought-provoking and, I suspect, longer debate than we had anticipated. At Second Reading, I think we were all taken aback when this issue was opened up by my noble friend Lord Sarfraz; once again, we are realising that this requires really careful thought. I thank my noble friend the Minister for his also quite long and thoughtful response to this debate.
I feel that I owe the Committee a small apology. I am very conscious that I talked in quite graphic detail at the beginning when there were still children in the Gallery. I hope that I did not cause any harm, but it shows how serious this is that we have all had to think so carefully about what we have been saying—only in words, without any images. We should not underestimate how much this has demonstrated the importance of our debates.
On the comments of the noble Baroness, Lady Fox, I am a huge enthusiast, like the noble Lord, Lord Knight, for the wonders of the tech world and what it can bring. We are managing the balance in this Bill to make sure that this country can continue to benefit from and lead the opportunities of tech while recognising its real and genuine harms. I suggest that today’s debate has demonstrated the potential harm that the digital world can bring.
I listened carefully—as I am certain the noble Baroness, Lady Kidron, has been doing in the digital world—to my noble friend’s words. I am encouraged by what he has put on the record on Amendment 125, but there are some specific issues that it would be helpful for us to talk about, as he alluded to, after this debate and before Report. Let me highlight a couple of those.
First, I do not really understand the technical difference between a customer service bot and other bots. I am slightly worried that we are defining in specific terms one type of bot that would not be captured by this Bill; I suspect that there will be others in future. We must think through carefully whether we are getting too much into the specifics of the technology and not being general enough in making sure that we capture where it could go. That is one example.
Secondly, as my noble friend Lady Berridge would say, I am not sure that we have got to the bottom of whether this Bill, coupled with the existing body of criminal law, will really enable law enforcement officers to progress the cases as they see fit and protect vulnerable women—and men—in the digital world. I very much hope we can extend the conversation there. We perhaps risk getting too close to the technical specifics if we are thinking about whether a haptic suit is in or out of scope of the Bill; I am certain that there will be other technologies that we have not even thought about yet that we will want to make sure that the Bill can capture.
I very much welcome the spirit in which this debate has been held. When I said that I would do this for the noble Baroness, Lady Kidron, I did not realise quite what a huge debate we were opening up, but I thank everyone who has contributed and beg leave to withdraw the amendment.