Baroness Harding of Winscombe (Con)

My Lords, I, too, support Amendments 233 and 234, and Amendment 233A, from the noble Lord, Lord Allan. As the noble Baroness, Lady Kidron, said, it has been made clear in the past 10 days of Committee that there is a role for every part of society to play to make sure that we see the benefits of the digital world but also mitigate the potential harms. The role that researchers and academics can play in helping us understand how the digital world operates is critical—and that is going to get ever more so as we enter a world of large language models and AI. Access to data in order to understand how digital systems and processes work will become even more important—next week, not just in 10 years’ time.

My noble friend Lord Bethell quite rightly pointed out the parallels with other regulators, such as the MHRA and the Bank of England. A number of people are now comparing the way in which the MHRA and other medical regulators regulate the development of drugs with how we ought to think about the emergence of regulation for AI. This is a very good read-across: we need to set the rules of the road for researchers and ensure, as the noble Baroness, Lady Kidron, said—nailing it, as usual—that we have the most transparent system possible, enabling people to conduct their research in the light, not in the grey zone.

Lord Clement-Jones (LD)

My Lords, as the noble Baroness, Lady Kidron, said, clearly, transparency is absolutely one of the crucial elements of the Bill. Indeed, it was another important aspect of the Joint Committee’s report. Like the noble Lord, Lord Knight—a fellow traveller on the committee—and many other noble Lords, I much prefer the reach of Amendments 233 and 234, tabled by the noble Lord, Lord Bethell, to Amendment 230, the lead amendment in this group.

We strongly support amendments that aim to introduce a duty for regulated platforms to enable access by approved independent researchers to information and data from regulated services, under certain conditions. Of course, there are arguments for speeding up the process under Clause 146, but this is really important because companies themselves currently decide who accesses data, how much of it and for what purposes. Only the companies can see the full picture, and the effect of this is that it has taken years to build a really solid case for this Online Safety Bill. Without a greater level of insight enabling quality research and harm analysis, policy-making and regulatory innovation will not move forward.

I was very much taken by what the noble Baroness, Lady Harding, had to say about the future in terms of the speeding up of technological developments in AI, which inevitably will make the opening up of data, and research into it, of greater and greater importance. Of course, I also take extremely seriously my noble friend’s points about the need for data protection. We are very cognisant of the lessons of Cambridge Analytica, as he mentioned.

It is always worth reading the columns of the noble Lord, Lord Hague. He highlighted this issue last December, in the Times. He said:

“Social media companies should be required to make anonymised data available to third-party researchers to study the effect of their policies. Crucially, the algorithms that determine what you see—the news you are fed, the videos you are shown, the people you meet on a website—should not only be revealed to regulators but the choices made in crafting them should then be open to public scrutiny and debate”.


Those were very wise words. The status quo leaves transparency in the hands of big tech companies with a vested interest in opacity. The noble Lord, Lord Knight, mentioned Twitter announcing in February that it would cease allowing free research access to its application programming interface. It is on a whim that a billionaire owner can decide to deny access to researchers.

I much prefer Amendment 233, which would enable Ofcom to appoint an approved independent researcher. The Ofcom code of practice proposed in Amendment 234 would be issued for researchers and platforms, setting out the procedures for enabling access to data. I take the point made by the noble Baroness, Lady Fox, about who should be an independent accredited researcher, but I hope that that is exactly the kind of thing that a code of practice would deal with.

Just as a little contrast, Article 40 of the EU’s Digital Services Act gives access to data to a broad range of researchers—this has been mentioned previously—including civil society and non-profit organisations dedicated to public interest research. The DSA sets out in detail the framework for vetting and access procedures, creating an explicit role for new independent supervisory authorities. This is an example that we could easily follow.

The noble Lord, Lord Bethell, mentioned the whole question of skilled persons. Like him, I do not believe that this measure is adequate as a substitute for what is contained in Amendments 233 and 234. It will be a useful tool for Ofcom to access external expertise on a case-by-case basis but it will not provide for what might be described as a wider ecosystem of inspection and analysis.

The noble Lord also mentioned the fact that internet companies should not regard themselves as an exception. Independent scrutiny is a cornerstone of the pharmaceutical, car, oil, gas and finance industries. They are open to scrutiny from research; we should expect that for social media as well. Independent researchers are already given access in many other circumstances.

The case for these amendments has been made extremely well. I very much hope to see the Government, with the much more open approach that they are demonstrating today, accept the value of these amendments.

--- Later in debate ---
Lord Parkinson of Whitley Bay (Con)

We are watching with interest what is happening in other jurisdictions. If I can furnish the Committee with any information in the area the noble Lord mentions, I will certainly follow up in writing.

Lord Clement-Jones (LD)

I have a question, in that case, in respect of the jurisdictions. Why should we have weaker powers for our regulator than others?

Lord Parkinson of Whitley Bay (Con)

I do not think that we do. We are doing things differently. Of course, Ofcom will be looking at all these matters in its report, and I am sure that Parliament will have an ongoing interest in them. As jurisdictions around the world continue to grapple with these issues, I am sure that your Lordships’ House and Parliament more broadly will want to take note of those developments.

Lord Clement-Jones (LD)

But surely, there is no backstop power. There is the review but there is no backstop which would come into effect on an Ofcom recommendation, is there?

Lord Parkinson of Whitley Bay (Con)

We will know once Ofcom has completed its research and examination of these complex issues; we would not want to pre-judge its conclusions.

Lord Clement-Jones (LD)

Again, that would require primary legislation.

Lord Parkinson of Whitley Bay (Con)

With that, if there are no further questions, I invite the noble Lord to withdraw his amendment.

--- Later in debate ---
Lord Clement-Jones (LD)

My Lords, I congratulate the noble Baroness on having elucidated this arcane set of amendments. Unfortunately, though, it makes me deeply suspicious when I see what the amendments seem to do. I am not entirely clear about whether we are returning to some kind of merits-based appeal. If so, since the main litigators are going to be the social media companies, it will operate for their benefit to reopen every single thing that they possibly can on the basis of the original evidence that was taken into account by Ofcom, as opposed to doing it on a JR basis. It makes me feel quite uncomfortable if it is for their benefit, because I suspect it is not going to be for the ordinary user who has been disadvantaged by a social media company. I hope our brand spanking new independent complaints system—which the Minister will no doubt assure us is well on the way—will deal with that, but this strikes me as going a little too far.

Baroness Kidron (CB)

My Lords, I enter the fray with some trepidation. In a briefing, Carnegie, which we all love and respect, and which has been fantastic in the background in Committee days, shared some concerns. As I interpret its concerns, when Ofcom was created in 2003 its decisions could be appealed on their merits, as the noble Lord has just suggested, to the Competition Appeal Tribunal, and I believe that this was seen as a balancing measure against an untested regime. What followed was that the broad basis on which appeal was allowed led to Ofcom defending 10 appeals per year, which really frustrated its ability as a regulator to take timely decisions. It turned out that the appeals against Ofcom made up more than 80% of the workload of the Competition Appeal Tribunal, whose work was supposed to cover a whole gamut of matters. When there was a consultation in the fringes of the DEA, it was decided to restrict appeal to judicial review and appeal on process. I just want to make sure that we are not opening up a huge and unnecessary delaying tactic.

--- Later in debate ---
Moved by
264A: Clause 160, page 138, line 10, at end insert “including (but not necessarily) by making use of a stolen identity, credit card or national insurance number,”
Member’s explanatory statement
This amendment, together with the amendment to page 138, line 12 to which Lord Clement-Jones has added his name, seeks to probe the creation of a specific criminal offence of identity theft.
Lord Clement-Jones (LD)

My Lords, even by the standards of this Bill, this is a pretty diverse group of amendments. I am leading the line with an amendment that does not necessarily fit with much of the rest of the group, except for Amendment 266, which the noble Baroness, Lady Buscombe, will be speaking to. I look forward to hearing her speak.

This amendment is designed to probe the creation of a new offence of identity theft in Clause 160. As I argued in my evidence to the consultation on digital identity and attributes in 2021, a new offence of identity theft is required. Under the Fraud Act 2006, the Identity Documents Act 2010, the Forgery and Counterfeiting Act 1981, the Computer Misuse Act 1990 and the Data Protection Act 2018 there are currently offences covering fraud using a false identity, document theft, forging an identity, unauthorised computer access and breaches of data protection respectively, but no specific crime of digital identity theft.

--- Later in debate ---
Lord Parkinson of Whitley Bay (Con)

Yes, we will ensure that, in looking at this in the context of Scots law, we have the opportunity to see what is being done there and that we are satisfied that all the scenarios are covered. In relation to the noble Baroness’s Amendment 268, the intentional encouragement or assistance of a criminal offence is already captured under Sections 44 to 46 of the Serious Crime Act 2007, so I hope that that satisfies her that that element is covered—but we will certainly look at all of this.

I turn to government Amendment 268AZA, which introduces the new serious self-harm offence, and Amendments 268AZB and 268AZC, tabled by the noble Lords, Lord Allan and Lord Clement-Jones. The Government recognise that there is a gap in the law in relation to the encouragement of non-fatal self-harm. The new offence will apply to anyone carrying out an act which intends to, and is capable of, encouraging or assisting another person seriously to self-harm by means of verbal or electronic communications, publications or correspondence.

I say to the noble Baroness, Lady Finlay of Llandaff, that the new clause inserted by Amendment 268AZA is clear that, when a person sends or publishes such a communication, that is an offence, and that, when a person forwards on another person’s communication, that too will be an offence. The new offence will capture only the most serious behaviour and avoid criminalising vulnerable people who share their experiences of self-harm. The preparation of these clauses was informed by extensive consultation with interested groups and campaign bodies. The new offence includes two key elements that constrain the offence to the most culpable offending; namely, that a person’s act must be intended to encourage or assist the serious self-harm of another person and that serious self-harm should amount to grievous bodily harm. If a person does not intend to encourage or assist serious self-harm, as will likely be the case with recovery and supportive material, no offence will be committed. The Law Commission looked at this issue carefully, following evidence from the Samaritans and others, and the implementation will be informed by an ongoing consultation as well.

Lord Clement-Jones (LD)

I am sorry to interrupt the Minister, but the Law Commission recommended that the DPP’s consent should be required. The case that the Minister has made on previous occasions in some of the consultations that he has had with us is that this offence that the Government have proposed is different from the Law Commission one, and that is why they have not included the DPP’s consent. I am rather baffled by that, because the Law Commission was talking about a high threshold in the first place, and the Minister is talking about a high threshold of intent. Even if he cannot do so now, it would be extremely helpful to tie that down. As the noble Baroness and my noble friend said, 130 organisations are really concerned about the impact of this.

Lord Parkinson of Whitley Bay (Con)

The Law Commission recommended that the consent, but not the personal consent, of the Director of Public Prosecutions should be required. We believe, however, that, because the offence already has tight parameters due to the requirement for an intention to cause serious self-harm amounting to grievous bodily harm, as I have just outlined, an additional safeguard of obtaining the personal consent of the Director of Public Prosecutions is not necessary. We would expect the usual prosecutorial discretion and guidance to provide sufficient safeguards against inappropriate prosecutions in this area. As I say, we will continue to engage with those groups that have helped to inform the drafting of these clauses as they are implemented to make sure that that assessment is indeed borne out.

--- Later in debate ---
Lord Parkinson of Whitley Bay (Con)

If my noble friend will forgive me, I had better refresh my memory of what he said—it was some time ago—and follow up in writing.

Lord Clement-Jones (LD)

My Lords, I will be extremely brief. There is much to chew on in the Minister’s speech and this was a very useful debate. Some of us will be happier than others; the noble Baroness, Lady Buscombe, will no doubt look forward to the digital markets Bill and I will just have to keep pressing the Minister on the Data Protection and Digital Information Bill.

There is a fundamental misunderstanding about digital identity theft. It will not necessarily always be fraud that is demonstrated—the very theft of the identity is designed to be the crime, and it is not covered by the Fraud Act 2006. I am delighted that the Minister has agreed to talk further with the noble Baroness, Lady Kennedy, because that is a really important area. I am not sure that my noble friend will be that happy with the response, but he will no doubt follow up with the Minister on his amendments.

The Minister made a very clear statement on the substantive aspect of the group, the new crime of encouraging self-harm, but further clarification is still needed. We will look very carefully at what he said in relation to what the Law Commission recommended, because it is really important that we get this right. I know that the Minister will talk further with the noble Baroness, Lady Finlay, who is very well versed in this area. In the meantime, I beg leave to withdraw my amendment.

Amendment 264A withdrawn.
--- Later in debate ---
Lord Knight of Weymouth (Lab)

My Lords, that was a bravura performance by the noble Lord, Lord Lexden. We thank him. To those listening in the Public Gallery, I should say that we debated most of those; it was not quite as on the nod as it looked.

Amendment 286ZA, in the name of my noble friend Lord Stevenson, seeks to address a critical issue in our digital landscape: the labelling of AI-generated content on social media platforms.

As we navigate the ever-evolving world of technology, it is crucial that we uphold transparency, safeguarding the principles of honesty and accountability. Social media has become an integral part of our lives, shaping public discourse, disseminating information and influencing public opinion. However, the rise of AI-powered algorithms and tools has given rise to a new challenge: an increasing amount of content generated by artificial intelligence without explicit disclosure.

We live in an age where AI is capable of creating incredibly realistic text, images and even videos that can be virtually indistinguishable from those generated by humans. While this advancement holds immense potential, it also raises concerns regarding authenticity, trust and the ethical implications of AI-generated content. The proposed amendment seeks to address this concern by advocating for a simple but powerful solution—labelling AI-generated content as such. By clearly distinguishing human-generated content from AI-generated content, we empower individuals to make informed decisions about the information they consume, promoting transparency and reducing the potential for misinformation or manipulation.

Labelling AI-generated content serves several crucial purposes. First and foremost, it allows individuals to differentiate between information created by humans and that generated by algorithms. In an era where misinformation and deepfakes pose a significant threat to public trust, such labelling becomes a vital tool to protect and promote digital literacy.

Secondly, it enables users to better understand the potential biases and limitations of AI-generated content. AI algorithms are trained on vast datasets, and without labelling, individuals might unknowingly attribute undue credibility to AI-generated information, assuming it to be wholly objective and reliable. Labelling, however, helps users to recognise the context and provides an opportunity for critical evaluation.

Furthermore, labelling AI-generated content encourages responsible behaviour from the platforms themselves. It incentivises social media companies to develop and implement AI technologies with integrity and transparency, ensuring that users are aware of the presence and influence of AI in their online experiences.

Some may argue that labelling AI-generated content is an unnecessary burden or that it could stifle innovation. However, the intention behind this amendment is not to impede progress but to foster a healthier digital ecosystem built on trust, integrity and informed decision-making. By promoting transparency, we can strike a balance that allows innovation to flourish while safeguarding the interests of individuals and society as a whole.

In conclusion, the amendment to label AI-generated content on social media platforms represents a crucial step forward in addressing the challenges of the digital age. By embracing transparency and empowering individuals, we can foster a more informed and discerning society. Let us lead by example and advocate for a digital landscape that values accountability, integrity and the rights of individuals. I urge your Lordships to support this amendment as we strive to build a future where technology works hand-in-hand with humanity for the betterment of all.

In the spirit of the amendment, I must flag that my entire speaking note was generated by AI, as the noble Lord, Lord Allan, from his expression, had clearly guessed. I used this tool not to belittle the amendment but to illustrate that these tools are already infiltrating everyday life and can supercharge misinformation. We need to do something to make it easier for internet users to trust what they read.

Lord Clement-Jones (LD)

Does the noble Lord agree that the fact that we did not notice his speech was generated by AI somewhat damages his argument?

Lord Knight of Weymouth (Lab)

The fact that I labelled it as being AI-generated helped your Lordships to understand, and the transparency eases the debate. I beg to move.

--- Later in debate ---
Baroness Bennett of Manor Castle (GP)

My Lords, it is a pleasure to follow the noble Lord, Lord Allan. He reminded me of significant reports of the huge amount of exploitation in the digital sector that has come from identification of photos. A great deal of that is human labour, even though it is often claimed to have been done through machine intelligence.

In speaking to this late but important amendment, I thank the noble Lords, Lord Stevenson and Lord Knight, for giving us the chance to do so, because, as every speaker has said, this is really important. I should declare my position as a former newspaper editor. I distinctly recall teasing a sports journalist in the early 1990s when it was reported that journalists were going to be replaced by computer technology. I said that the sports journalists would be the first to go because they just wrote to a formula anyway. I apologise to sports journalists everywhere.

The serious point behind that is that a lot of extreme, high claims are now being made about so-called artificial intelligence. I declare myself an artificial-intelligence sceptic. What we have now—so-called generative AI—is essentially big data. To quote the science fiction writer, Ted Chiang, what we have is applied statistics. Generative AI relies on looking at what already exists, and it cannot produce anything original. In many respects, it is a giant plagiarism machine. There are huge issues, beyond the scope of the Bill, around intellectual property and the fact that it is not generating anything original.

None the less, it is generating what people in the sector like to describe as hallucinations, which might otherwise be described as errors, falsehoods or lies. This is where quotes are made up; ideas are presented which, at first glance, look as though they make sense but fall apart under examination; and data is actively invented. There is one rather famous case where a lawyer got himself into a great deal of trouble by producing a whole lot of entirely false cases that a bot generated for him. We need to be really careful, and this amendment shows us a way forward in attempting to deal with some of the issues we are facing.

To pick up the points made by the noble Lord, Lord Allan, about the real-world impacts, I was at an event in Parliament this week entitled “The Worker Experience of the AI Revolution”, run by the TUC and Connected by Data. It highlighted what has happened with a lot of the big data exercises already in operation: rather than humans being replaced by robots, people are being forced to act like robots. We heard from Royal Mail and Amazon workers, who are monitored closely and expected to act like machines. That is just one example of the unexpected outcomes of the technologies we have been exercising in recent years.

I will make two final comments. First, I refer to 19th-century Luddite John Booth, who was tortured to death by the state. He was a Luddite, but he was also on the record as saying that new machinery

“might be man’s chief blessing instead of his curse if society were differently constituted”.

History is not pre-written; it is made by the choices, laws and decisions we make in this Parliament. Given where we are at the moment with so-called AI, I urge that caution really is warranted. We should think about putting some caution in the Bill, which is what this amendment points us towards.

My final point relates to an amendment I was not allowed to table because, I was told, it was out of scope. It asked the Secretary of State to report on the climate emissions coming from the digital sector, specifically from artificial intelligence. The noble Baroness, Lady Kidron, said that it will operate on a vast scale. I point out that, already, the digital sector is responsible for 3% of the world’s electricity use and 2% of the world’s carbon emissions, which is about the same as the airline sector. We really need to think about caution. I very much agree with everyone who said that we need to have more discussions on all these issues before Report.

Lord Clement-Jones (LD)

My Lords, this is a real hit-and-run operation from the noble Lord, Lord Stevenson. He has put down an amendment on my favourite subject in the last knockings of the Bill. It is totally impossible to deal with this now—I have been thinking and talking about the whole area of AI governance and ethics for the past seven years—so I am not going to try. It is important, and the advisory committee under Clause 139 should take it into account. Actually, this is much more a question of authenticity and verification than of content. Trying to work out whether something is ChatGPT or GPT-4 content is a hopeless task; you are much more likely to be able to identify whether these are automated users such as chatbots than you are to know about the content itself.

I will leave it there. I missed the future-proofing debate, which I would have loved to have been part of. I look forward to further debates with the noble Viscount, Lord Camrose, on the deficiencies in the White Paper and to the Prime Minister’s much more muscular approach to AI regulation in future.

Lord Parkinson of Whitley Bay (Con)

I am sure that the noble Lord, Lord Stevenson of Balmacara, is smiling over a sherry somewhere about the debate he has facilitated. His is a useful probing amendment and we have had a useful discussion.

The Government certainly recognise the potential challenges posed by artificial intelligence and digitally manipulated content such as deepfakes. As we have heard in previous debates, the Bill ensures that machine-generated content on user-to-user services created by automated tools or machine bots will be regulated where appropriate. Clause 49(4)(b) means that machine-generated content is regulated unless the bot or automated tool producing the content is controlled by the provider of the service.

The labelling of this content via draft legislation is not something to which I can commit today. The Government’s AI regulation White Paper sets out the principles for the responsible development of artificial intelligence in the UK. These principles, such as safety, transparency and accountability, are at the heart of our approach to ensuring the responsible development and use of AI. As set out in the White Paper, we are building an agile approach that is designed to be adaptable in response to emerging developments. We do not wish to introduce a rigid, inflexible form of legislation for what is a flexible and fast-moving technology.

The public consultation on these proposals closed yesterday so I cannot pre-empt our response to it. The Government’s response will provide an update. I am joined on the Front Bench by the Minister for Artificial Intelligence and Intellectual Property, who is happy to meet with the noble Baroness, Lady Kidron, and others before the next stage of the Bill if they wish.

Beyond labelling such content, I can say a bit to make it clear how the Bill will address the risks coming from machine-generated content. The Bill already deals with many of the most serious and illegal forms of manipulated media, including deepfakes, when they fall within scope of services’ safety duties regarding illegal content or content that is potentially harmful to children. Ofcom will recommend measures in its code of practice to tackle such content, which could include labelling where appropriate. In addition, the intimate image abuse amendments that the Government will bring forward will make it a criminal offence to send deepfake images.

In addition to ensuring that companies take action to keep users safe online, we are taking steps to empower users with the skills they need to make safer choices through our work on media literacy. Ofcom, for example, has an ambitious programme of work through which it is funding several initiatives to build people’s resilience to harm online, including initiatives designed to equip people with the skills to identify disinformation. We are keen to continue our discussions with noble Lords on media literacy and will keep an open mind on how it might be a tool for raising awareness of the threats of disinformation and inauthentic content.

With gratitude to the noble Lords, Lord Stevenson and Lord Knight, and everyone else, I hope that the noble Lord, Lord Knight, will be content to withdraw his noble friend’s amendment.

--- Later in debate ---
Baroness Kidron (CB)

My Lords, we already had a long debate on this subject earlier in Committee. In the interim, many noble Lords associated with these amendments have had conversations with the Government, which I hope will bear some fruit before Report. Today, I want to reiterate a few points that I hope are clarifying to the Committee and the department. In the interests of everyone’s evening plans, the noble Lord, Lord Bethell, and the noble Baroness, Lady Harding, wish to associate themselves with these remarks so that they represent us in our entirety.

For many years, we thought age verification was a gold standard, primarily because it involved a specific government-issued piece of information such as a passport. By the same token, we thought age estimation was a lesser beast, given that it is an estimate by its very nature and that the sector primarily relied on self-declarations with very few checks and balances. In recent years, many approaches to age checking have flourished. Some companies provide age assurance tokens based on facial recognition; others use multiple signals of behaviour, friendship group, parental controls and how you move your body in gameplay; and, only yesterday, I saw the very impressive on-device privacy-preserving age-verification system that Apple rolled out in the US two weeks ago. All of these approaches, used individually and cumulatively, have a place in the age-checking ecosystem, and all will become more seamless over time. But we must ensure that, when they are used, they are adequate for the task they are performing and are quality controlled so that they do not share information about a child, are secure and are effective.

That is why, at the heart of the package of measures put forward in my name and that of the noble Lords, Lord Stevenson and Lord Bethell, and the right reverend Prelate the Bishop of Oxford, are two concepts. First, the method of measuring age should be tech neutral so that all roads can be used. Secondly, there must be a robust mechanism for measuring effectiveness, so that only effective systems can be used in high-risk situations, particularly those involving primary priority harms such as self-harm and pornography, and so that such measurement is determined by Ofcom, not industry.

From my work over the last decade and from recent discussions with industry, I am certain that any regime of age assurance must be measurable and hold to certain principles. We cannot create a situation where children’s data is loosely held and liberally shared; we cannot have a system that discriminates against, or does not have automatic appeal mechanisms for, children of colour or those who are 17 or 19, for whom errors are most likely. Systems should aim to be interoperable and private, not leaving traces as children go from one service to another.

Each of the principles of our age-verification package set out in the schedule is of crucial importance. I hope that the Government will see the sense in that because, without them, this age checking will not be trusted. Equally, I urge the Committee to embrace the duality of age verification and estimation that the Government have put forward, because, if a child uses an older sibling’s form of verification and a company understands through the child’s behaviour that they are indeed a child, then we do not want to set up a perverse situation in which the verification is considered of a higher order and they cannot take action based on estimation; ditto, if estimation in gameplay is more accurate than tokens that verify whether someone is over or under 18, it may well be that estimation gives greater assurance that the company will treat the child according to their age.

I hope and believe that, in his response, the Minister will confirm that definitions of age assurance and age estimation will be on the face of the Bill. I also urge him to make a generous promise to accept the full gamut of our concerns about age checking and bring forward amendments in his name on Report that reflect them in full. I beg to move.

Lord Clement-Jones (LD)

My Lords, I associate these Benches with the introduction by the noble Baroness, Lady Kidron, support her amendments and, likewise, hope that they form part of the package that is trundling on its way towards us.

Lord Knight of Weymouth (Lab)

My Lords, what more can I say than that I wish to be associated with the comments made by the noble Baroness and then by the noble Lord, Lord Clement-Jones? I look forward to the Minister’s reply.