(1 year, 4 months ago)
Lords Chamber

My Lords, interestingly, because I have not discussed this at all with the noble Lord, Lord Moylan, I have some similar concerns to his. I have always wanted this to be a children’s online safety Bill. My concerns generally have been about threats to adults’ free speech and privacy and the threat to the UK as the home of technological innovation. I have been happy to keep shtum on things about protecting children, but I got quite a shock when I saw the series of government amendments.
I thought what most people in the public think: the Bill will tackle things such as suicide sites and pornography. We have heard some of that very grim description, and I have been completely convinced by people saying, “It’s the systems”. I get all that. But here we have a series of amendments all about content—endless amounts of content and highly politicised, contentious content at that—and an ever-expanding list of harms that we now have to deal with. That makes me very nervous.
On the misinformation and disinformation point, the Minister is right. Whether for children or adults, those terms have been weaponised. They are often used to delegitimise perfectly legitimate if contrary or minority views. I say to the noble Baroness, Lady Kidron, that the studies that say that youth are the fastest-growing far-right group are often misinformation themselves. I was recently reading a report about this phenomenon, and things such as being gender critical or opposing the small boats arriving were considered to be evidence of far-right views. That was not to do with youth, but at least you can see that this is quite a difficult area. I am sure that many people even in here would fit in the far right as defined by groups such as HOPE not hate, whose definition is so broad.
My main concerns are around the Minister’s Amendment 172. There is a problem: because it is about protected characteristics—or apes the protected characteristics of the Equality Act—we might get into difficulty. Can we at least recognise that, even in relation to the protected characteristics as noted in the Equality Act, there are raging rows politically? I do not know how appropriate it is that the Minister has tabled an amendment dragging young people into this mire. Maya Forstater has just won a case in which she was accused of being opposed to somebody’s protected characteristics and sacked. Because her philosophical views are themselves a protected characteristic, she has won the case and a substantial amount of money.
I worry when I see this kind of list. It is not just inciting hatred—in any case, what that would mean is ambiguous. It refers to abuse based on race, religion, sex, sexual orientation, disability and so on. This is a minefield for the Government to have wandered into. Whether you like it or not, it will have a chilling effect on young people’s ability to debate and discuss. If you worry that some abuse might be aimed at religion, does that mean that you will not be able to discuss Charlie Hebdo? What if you wanted to show or share the Charlie Hebdo cartoons? Will that count? Some people would say that is abusive or inciteful. This is not where the Bill ought to be going. At the very least, it should not be going there at this late stage. Under race, it says that “nationality” is one of the indicators that we should be looking out for. Maybe it is because I live in Wales, but there is a fair amount of abuse aimed at the English. A lot of Scottish friends dole it out as well. Will this count for young people who do that? I just do not get it.
My final question is in relation to proposed subsection (11). This is about protecting children, yet it lists a person who
“has the characteristic of gender reassignment if the person is proposing to undergo, is undergoing or has undergone a process (or part of a process) for the purpose of reassigning the person’s sex by changing physiological or other attributes of sex”.
Are the Government seriously accepting that children have not just proposed to reassign but have been reassigned? That is a breach of the law. That is not meant to be happening. Your Lordships will know how bad this is. Has the Department for Education seen this? As we speak, it is trying to untangle the freedom for people not to have to go along with people’s pronouns and so on.
This late in the day, on something as genuinely important as protecting children, I just want to know whether there is a serious danger that this has wandered into the most contentious areas of political life. I think it is very dangerous for a government amendment to affirm gender reassignment to and about children. It is genuinely irresponsible and runs counter to the very guidance that the Government are bringing out at the moment, which tells us to avoid doing exactly this. Please can the Minister clarify what is happening with Amendment 172?
My Lords, I am not entirely sure how to begin, but I will try to make the points I was going to make. First, I would like to respond to a couple of the things said by the noble Baroness, Lady Fox. With the greatest respect, I worry that the noble Baroness has not read the beginning of the proposed new clause in Amendment 172, subsection (2), which talks about “Content which is abusive”, as opposed to content just about race, religion or the other protected characteristics.
One of the basic principles of the Bill is that we want to protect our children in the digital world in the same way that we protect them in the physical world. We do not let our children go to the cinema to watch content as listed in the primary priority and priority content lists in my noble friend the Minister’s amendments. We should not let them in the digital world, yet the reality is that they do, day in and day out.
I thank my noble friend the Minister, not just for the amendments that he has tabled but for the countless hours that he and his team have devoted to discussing this with many of us. I have not put my name to the amendments either because I have some concerns but, given the way the debate has turned, I start by thanking him and expressing my broad support for having the harms in the Bill, the importance of which this debate has demonstrated. We do not want this legislation to take people by surprise. The important thing is that we are discussing some fundamental protections for the most vulnerable in our society, so I thank him for putting those harms in the Bill and for allowing us to have this debate. I fear that it will be a theme not just of today but of the next couple of days on Report.
I started with the positives; I would now like to bring some challenges as well. Amendments 171 and 172 set out priority content and primary priority content. It is clear that they do not cover the other elements of harm: contact harms, conduct harms and commercial harms. In fact, it is explicit that they do not cover the commercial harms, because proposed new subsection (4) in Amendment 237 explicitly says that no amendment can be made to the list of harms that is commercial. Why do we have a perfect crystal ball that means we think that no future commercial harms could be done to our children through user-to-user and search services, such that we are going to expressly make it impossible to add those harms to the Bill? It seems to me that we have completely ignored the commercial piece.
I move on to Amendment 174, which I have put my name to. I am absolutely aghast that the Government really think that age-inappropriate sexualised content does not count as priority content. We are not necessarily talking here about a savvy 17 year-old. We are talking about four, five and six year-olds who are doomscrolling on various social media platforms. That is the real world. To suggest that somehow the digital world is different from the old-fashioned cinema, and a place where we do not want to protect younger children from age-inappropriate sexualised material, just seems plain wrong. I really ask my noble friend the Minister to reconsider that element.
I am also depressed about the discussion that we had about misinformation. As I said in Committee several times, I have two teenage girls. The reality is that we are asking today’s teenagers to try to work out what is truth and what is misinformation. My younger daughter will regularly say, “Is this just something silly on the internet?” She does not use the term “misinformation”; she says, “Is that just unreal, Mum?” She cannot tell about what appears in her social media feeds because of the degree of misinformation. Failing to recognise that misinformation is a harm for young people who do not yet know how to validate sources, which was so much easier for us when we were growing up than it is for today’s generations, is a big glaring gap, even in the content element of the harms.
I support the principle behind these amendments, and I am pleased to see the content harms named. We will come back next week to the conduct and contact harms—the functionality—but I ask my noble friend the Minister to reconsider on both misinformation and inappropriate sexualised material, because we are making a huge mistake by failing to protect our children from them.
(1 year, 5 months ago)
My Lords, like others, I thank the Whips for intervening to protect children from hearing details that are not appropriate for the young. I have to say that I was quite relieved because I was rather squirming myself. Over the last two days of Committee, I have been exposed to more violent pornographic imagery than any adult, never mind a child, should be exposed to. I think we can recognise that this is certainly a challenging time for us.
I do not want any of the comments I will now make to be seen as minimising understanding of augmented reality, AI, the metaverse and so on, as detailed so vividly by the noble Baronesses, Lady Harding and Lady Finlay, in relation to child safety. However, I have some concerns about this group, in terms of proportionality and unintended outcomes.
Amendment 239, in the names of the right reverend Prelate the Bishop of Oxford, the noble Lord, Lord Clement-Jones, and the noble Viscount, Lord Colville of Culross, sums up some of my concerns about a focus on future-proofing. This amendment would require Ofcom to produce reports about future risks, which sounds like a common-sense demand. But my question is about us overly focusing on risk and never on opportunities. There is a danger that the Bill will end up recommending that we see these new technologies only in a negative way, and that we in fact give more powers to expand the scope for harmful content, in a way that stifles speech.
Beyond the Bill, I am more generally worried about what seems to be becoming a moral panic about AI. The precautionary principle is being adopted, which could mean stifling innovation at source and preventing the development of great technologies that could be of huge benefit to humanity. The over-focus on the dangers of AI and augmented reality could mean that we ignore the potential large benefits. For example, if we have AI, everyone could have an immediately responsive GP in their pocket—goodness knows that, for those trying to get an appointment, that could be of great use and benefit. It could mean that students have an expert tutor in every subject, just one message away. The noble Baroness, Lady Finlay, spoke about the fantastic medical breakthroughs that augmented reality can bring to handling neurological damage. Last night, I cheered when I saw how someone who has never been able to walk now can, through those kinds of technologies. I thought, “Isn’t this a brilliant thing?” So all I am suggesting is that we have to be careful that we do not see these new technologies only as tools for the most perverted form of activity among a small minority of individuals.
I note, with some irony, that fewer qualms were expressed by noble Lords about the use of AI when it was proposed to scan and detect speech or images in encrypted messages. As I argued at the time, this would be a threat to WhatsApp, Signal and so on. Clauses 110 and 124 have us using AI as a blunt proactive technology of surveillance, despite the high risks of inaccuracy, error and false flags. But there was great enthusiasm for AI then, when it was having an impact on individuals’ freedom of expression—yet, here, all we hear are the negatives. So we need to be balanced.
I am also concerned about Amendment 125, which illustrates the problem of seeing innovation only as a threat to safety and a potential problem. For example, if the Bill considers AI-generated content to be user-generated content, only large technology companies will have the resources—lawyers and engineers—necessary to proceed while avoiding crippling liability.
In practice, UK users risk being blocked out from new technologies if we are not careful about how we regulate here. For example, users in the European Union currently cannot access Google Bard AI assistant because of GDPR regulations. That would be a great loss because Google Bard AI is potentially a great gain. Despite the challenges of the likes of ChatGPT and Bard AI that we keep reading about, with people panicking that this will lead to wide-scale cheating in education and so on, this has huge potential as a beneficial technology, as I said.
I have mentioned that one of the unintended consequences—it would be unintended—of the whole Bill could be that the UK becomes a hostile environment for digital investment and innovation. So start-ups that have been invested in—like DeepMind, a Google-owned and UK-based AI company—could be forced to leave the UK, doing huge damage to the UK’s digital sector. How can the UK be a science and technology superpower if we end up endorsing anti-innovation, anti-progress and anti-business measures by being overly risk averse?
I have the same concerns about Amendment 286, which requires periodic reviews of new technology content environments such as the metaverse and other virtual augmented reality settings. I worry that it will not be attractive for technology companies to confidently invest in new technologies if there is this constant threat of new regulations and new problems on the horizon.
I have a query that mainly relates to Amendment 125 but that is also more general. If virtual augmented reality actually involves user-to-user interaction, like in the metaverse, is it not already covered in the Bill? Why do we need to add it in? The noble Baroness, Lady Harding, said that it has got to the point where we are not able to distinguish fake from real, and augmented reality from reality. But she concludes that that means that we should treat fake as real, which seems to me to rather muddy the waters and make it a fait accompli. I personally—
I am sorry to interrupt, but I will make a clarification; the noble Baroness is misinterpreting what I said. I was actually quoting the godfather of AI and his concerns that we are fast approaching a space where it will be impossible—I did not say that it currently is—to distinguish between a real child being abused and a machine learning-generated image of a child being abused. So, first, I was quoting the words of the godfather of AI, rather than my own, and, secondly, he was looking forward—only months, not decades—to a very real and perceived threat.
I personally think that it is a pessimistic view of the future to suggest that humanity cannot rise to the task of being able to distinguish between deep fakes and real images. Organising all our lives, laws and liberties around the deviant predilections of a minority of sexual offenders, on the basis that none of us will be able to tell the difference in the future when it comes to that kind of activity, is rather dangerous for freedom and innovation.
(1 year, 6 months ago)
My Lords, I understand that, for legislation to have any meaning, it has to have some teeth and you have to be able to enforce it; otherwise, it is a waste of time, especially with something as important as the legislation that we are discussing here.
I am a bit troubled by a number of the themes in these amendments and I therefore want to ask some questions. I saw that the Government had tabled these amendments on senior manager liability, then I read amendments from both the noble Lord, Lord Bethell, and the Labour Party, the Opposition. It seemed to me that even more people would be held liable and responsible as a result. I suppose I have a dread that—even with the supply chain amendment—this means that lots of people are going to be sacked. It seems to me that this might spiral dangerously out of control and everybody could get caught up in a kind of blame game.
I appreciate that I might not have understood, so this is a genuine attempt to do so. I am concerned that these new amendments will force senior managers and, indeed, officers and staff to take an extremely risk-averse approach to content moderation. They now have not only to cover their own backs but to avoid jail. One of my concerns has always been that this will lead to the over-removal of legal speech, and more censorship, so that is a question I would like to ask.
I also want to know how noble Lords think this will lie in relation to the UK being a science and technology superpower. Understandably, some people have argued that these amendments are making the UK a hostile environment for digital investment, and there is something to be balanced up there. Is there a risk that this will lead to the withdrawal of services from the UK? Will it make working for these companies unattractive to British staff? We have already heard that Jimmy Wales has vowed that the Wikimedia foundation will not scrutinise posts in the way demanded by the Bill. Is he going to be thrown in prison, or will Wikipedia pull out? How do we get the balance right?
What is the criminal offence that has a threat of a prison sentence? I might have misunderstood, but a technology company manager could fail to prevent a child or young person encountering legal but none the less allegedly harmful speech, be considered in breach of these amendments and get sent to prison. We have to be very careful that we understand what this harmful speech is, as we discussed previously. The threshold for harm, which encompasses physical and psychological harm, is vast and could mean people going to prison without the precise criminal offence being clear. We talked previously about VPNs. If a tech savvy 17-year-old uses a VPN and accesses some of this harmful material, will someone potentially be criminally liable for that young person getting around the law, find themselves accused of dereliction of duty and become a criminal?
My final question is on penalties. When I was looking at this Bill originally and heard about the eye-watering fines that some Silicon Valley companies might face, I thought, “That will destroy them”. Of course, to them it is the mere blink of an eye, and I do get that. This indicates to me, given the endless conversations we have had on whether size matters, that in this instance size does matter. The same kind of liabilities will be imposed not just on the big Silicon Valley monsters that can bear these fines, but on Mumsnet—or am I missing something? Mumsnet might not be the correct example, but could not smaller platforms face similar liabilities if a young person inadvertently encounters harmful material? It is not all malign people trying to do this; my unintended consequence argument is that I do not want to create criminals when a crime is not really being committed. It is a moral dilemma, and I do understand the issue of enforcement.
I rise very much to support the comments of my noble friend Lord Bethell and, like him, to thank the Minister for bringing forward the government amendments. I will try to address some of the comments the noble Baroness, Lady Fox, has just made.
One must view this as an exercise in working out how one drives culture change in some of the biggest and most powerful organisations in the world. Culture change is really hard. It is hard enough in a company of 10 people, let alone in a company with hundreds of thousands of employees across the world that has more money than a single country. That is what this Bill requires these enormous companies to do: to change the way they operate when they are looking at an inevitably congested, contested technology pipeline, by which I mean—to translate that out of tech speak—they have more work to do than even they can cope with. Every technology company, big or small, always has this problem: more good ideas than their technologists can cope with. They have to prioritise what to fix and what to implement. For the last 15 years, digital companies have prioritised things that drive income, but not the safety of our children. That requires a culture change from the top of the company.
(1 year, 6 months ago)
I want to challenge the noble Baroness’s assertion that the Bill is not about children’s rights. Anyone who has a teenage child knows that their right to access the internet is keenly held and fought out in every household in the country.
The quip works, but political rights are not quips. Political rights have responsibilities, and so on. If we gave children rights, they would not be dependent on adults and adult society. Therefore, it is a debate; it is a row about what our rights are. Guess what. It is a philosophical row that has been going on all around the world. I am just suggesting that this is not the place—
(1 year, 6 months ago)
My Lords, I too support this amendment. I was at a dinner last night in the City for a group of tech founders and investors—about 500 people in a big hotel ballroom, all focused on driving the sort of positive technology growth in this country that I think everyone wants to see. The guest speaker runs a large UK tech business. He commented in his speech that tech companies need to engage with government because—he said this as if it was a revelation—all Governments turned out not to speak with one voice and that understanding what was required of tech companies by Governments is not always easy. Business needs clarity, and anyone who has run a large or small business knows that it is not really the clarity in the detail that matters but the clarity of purpose that enables you to lead change, because then your people understand why they need to change, and if they understand why, then in each of the micro-decisions they take each day they can adjust those decisions to fit with the intent behind your purpose. That is why this amendment is so important.
I have worked in this space of online safety for more than a decade, both as a technology leader and in this House. I genuinely do not believe that business is wicked and evil, but what it lacks is clear direction. The Bill is so important in setting those guardrails that if we do not make its purpose clear, we should not be surprised if the very businesses which really do want Governments to be clear do not know what we intend.
I suspect that my noble friend the Minister might object to this amendment and say that it is already in the Bill. As others have already said, I actually hope it is. If it is not, we have a different problem. The point of an upfront summary of purpose is to do precisely that: to summarise what is in what a number of noble Lords have already said is a very complicated Bill. The easier and clearer we can make it for every stakeholder to engage in the Bill, the better. If alternatively my noble friend the Minister objects to the detailed wording of this amendment, I argue that that simply makes getting this amendment right even more important. If the four noble Lords, who know far more about this subject than I will ever do in a lifetime, and the joint scrutiny committee, which has done such an outstanding job at working through this, have got the purposes of the Bill wrong, then what hope for the rest of us, let alone those business leaders trying to interpret what the Government want?
That is why it is so important that we put the purposes of the Bill absolutely at the front of the Bill, as in this amendment. If we have misunderstood that in the wording, I urge my noble friend the Minister to come back with wording on Report that truly encapsulates what the Government want.
My Lords, I welcome this opportunity to clarify the purposes of the Bill, but I am not sure that the amendment helps as my North Star. Like the Bill, it throws up as many questions as answers, and I found myself reading it and thinking “What does that word mean?”, so I am not sure that clarity was where I ended up.
It is not a matter of semantics, but in some ways you could say—and certainly this is as publicly understood—that the name of the Bill, the Online Safety Bill, gives it its chief purpose. Yet however well-intentioned, and whatever the press releases say or the headlines print, even a word such as “safety” is slippery, because safety as an end can be problematic in a free society. My worry about the Bill is unintended consequences, and that is not rectified by the amendment. As the Bill assumes safety as the ultimate goal, we as legislators face a dilemma. We have the responsibility of weighing up the balance between safety and freedom, but the scales in the Bill are well and truly weighted towards safety at the expense of freedom before we start, and I am not convinced that the amendment redresses that balance.
Of course, freedom is a risky business, and I always like the opportunity to quote Karl Marx, who said:
“You cannot pluck the rose without its thorns!”
However, it is important to recognise that “freedom” is not a dirty word, and we should avoid saying that risk-free safety is more important than freedom. How would that conversation go with the Ukrainian people who risk their safety daily for freedom? Also, even the language of safety, or indeed what constitutes the harms that the Bill and the amendments promise to keep the public safe from, need to be considered in the cultural and social context of the norms of 2023. A new therapeutic ethos now posits safety in ever-expanding pseudo-psychological and subjective terms, and this can be a serious threat to free speech. We know that some activists often exploit that concept of safety to claim harm when they merely encounter views they disagree with. The language of safety and harm is regularly used to cancel and censor opponents—and the Government know that, so much so that they considered it necessary to introduce the Higher Education (Freedom of Speech) Bill to secure academic freedom against an escalating grievance culture that feigns harm.
Part of the triple shield is a safety duty to remove illegal content, and the amendment talks about speech within the law. That sounds unobjectionable—in my mind it is far better than “legal but harmful”, which has gone—but, while illegality might sound clear and obvious, in some circumstances it is not always clear. That is especially true in any legal limitations of speech. We all know about the debates around hate speech, for example. These things are contentious offline and even the police, in particular the College of Policing, seem to find the concept of that kind of illegality confusing and, at the moment, are in a dispute with the Home Secretary over just that.
Is it really appropriate that this Bill enlists and mandates private social media companies to judge criminality using the incredibly low bar of “reasonable grounds to infer”? It gets even murkier when the legal standard for permissible speech online will be set partly by compelling platforms to remove content that contravenes their terms and conditions, even if those terms of service restrict speech far more than domestic UK law does. Big tech is being incited to censor whatever content it wishes, as long as that fits in with its Ts & Cs. Between this and determining, for example, what is in filters—a whole different issue—one huge irony here, which challenges one of the purposes of the Bill, is that, despite the Government and many of us thinking that this legislation will de-fang and regulate big tech’s powers, the legislation could inadvertently give those same corporates more control over what UK citizens read and view.
Another related irony is that the Bill was, no doubt, designed with Facebook, YouTube, Twitter, Google, TikTok and WhatsApp in mind. However, as the Bill’s own impact assessment notes, 80% of impacted entities have fewer than 10 employees. Many sites, from Wikipedia to Mumsnet, are non-profit or empower their own users to make moderation or policy decisions. These sites, and tens of thousands of British businesses of varying sizes, perhaps unintentionally, now face an extraordinary amount of regulatory red tape. These onerous duties and requirements might be actionable if not desirable for larger platforms, but for smaller ones with limited compliance budgets they could prove a significant if not fatal burden. I do not think that is the purpose of the Bill, but it could be an unintended outcome. This also means that regulation could, inadvertently, act as barrier to entry to new SMEs, creating an ever more monopolistic stronghold for big tech, at the expense of trialling innovations or allowing start-ups to emerge.
I want to finish with the thorny issue of child protection. I have said from the beginning—I mean over the many years since the Bill’s inception—that I would have been much happier if it was more narrowly titled as the Children’s Online Safety Bill, to indicate that protecting children was its sole purpose. That in itself would have been very challenging. Of course, I totally agree with Amendment 1’s intention
“to provide a higher level of protection for children than for adults”.
That is how we treat children and adults offline.