Data (Use and Access) Bill [HL] Debate
Grand Committee

My Lords, in speaking to Amendment 137 in my name I thank the noble Baroness, Lady Harding, the noble Lord, Lord Stevenson, and my noble friend Lord Russell for their support. I also add my enthusiastic support to the amendments in the name of my noble friend Lord Colville.
This is the same amendment that I laid to the DPDI Bill, which at the time had the support of the Labour Party. I will not labour that point, but it is consistently disappointing that these things have gone into the “too difficult” box.
Amendment 137 would introduce a code of practice on children and AI. AI drives the recommender systems that determine all aspects of a child’s digital experience, including the videos they watch, their learning opportunities, the people they follow and the products they buy—and, as reported last weekend, AI is even helping farmers pick the ripest tomatoes for baked beans. But it no longer concerns simply the elective parts of life where, arguably, a child or a parent on their behalf can choose to avoid certain products and services. AI is invisibly and ubiquitously present in all areas of their lives, and its advances and impact are particularly evident in the education and health sectors, the first of which is compulsory for children and the second of which is necessary for all of us.
The amendment has three parts. The first requires the ICO to create a code and sets out the expectations of its scope; the second considers who and what should be consulted and considered, including experts, children, and the frameworks that codify children’s existing rights; and the third part defines elements of the process, including risk assessment definitions, and sets out the principles to which the code must adhere.
When we debated this before, I anticipated that the Minister would say that the ICO had already published guidance, that we do not want to exclude children from the benefits of AI, and that we must not get in the way of innovation. Given that the new Government have taken so many cues from the previous one, I am afraid I anticipate a similar response.
I first point out, therefore, that the ICO’s non-binding guidance on AI and data protection is insufficient. It has only a single mention of a child in its 140 pages, which is a case study about child benefits. In the hundreds of pages of guidance, toolkits and sector information, nowhere are the specific needs and rights, or developmental vulnerabilities, of children comprehensively addressed in relation to AI. This absence of children is mirrored in government publications on AI. Of course, we all want children to enjoy the benefits of AI, but consideration of their needs would increase the likelihood of those benefits. Moreover, it seems reckless and unprincipled not to protect them from known harms. Surely the last three decades of tech development have shown us that the experiment of a “build first, worry about the kids later—or never” approach has cost our children dearly.
Innovation is welcome, but not all innovation is equal. We have bots offering 13-year-olds advice on how to seduce grown men, or encouraging them to take their own lives; edtech products that profile children towards unfair and biased outcomes that limit their education and life chances; and gen AI that perpetuates negative, racist, misogynist and homophobic stereotypes. Earlier this month, the Guardian reported deep bias in the AI used by the Department for Work and Pensions. This “hurt first, fix later” approach creates a lack of trust, increases unfairness, and has real-world consequences. Is it too much to insist that we ask better questions of systems that may result in children going hungry?
Why children? I am saddened that I must explain this, but from our deeply upsetting debate last week on the child protection amendments, in which the Government asserted that children are already catered for while deliberately downgrading their protections, it seems that the Government or their advisers have forgotten.
Children are different for three reasons. First, as has been established over decades, children are on a development journey. There are ages and stages at which children are developmentally able to do certain things, such as walk, talk, understand risk and irony and learn different social skills. There are equally ages and stages at which they cannot do those things. The long-established consensus is that families, social groups and society more broadly, including government, step in to support them on this journey. Secondly, children have less voice and less choice about how and where they spend their time, so the places and spaces they inhabit have to be designed to be fit for childhood. Thirdly, we have a responsibility towards children that extends even beyond our responsibility to each other. This means that we cannot legitimise profit at their expense. Allowing systems to play in the wild in the name of growth and innovation, leaving kids to pay the price, is a low bar.
It is worth noting that since we debated it, a proposal for this AI code for children that follows the full life cycle of development, deployment, use and retirement of AI systems has been drafted and has the support of multiple expert organisations and individuals around the globe. I am sure that all nations and intergovernmental organisations will have additional inputs and requirements, but it is worth saying that the proposed code, which was written with input from academics, computer scientists, lawyers, engineers and children’s rights activists, is mindful of and compatible with the EU AI Act, the White House Blueprint for an AI Bill of Rights, the Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, the Council of Europe’s Framework Convention on Artificial Intelligence and, of course, the UNCRC general comment no. 25.
This proposal will be launched early next year as an indication of what could and should be done. Unless the Government find their compass vis-à-vis children and tech, I suspect that another jurisdiction will adopt it ahead of the UK, making that the go-to destination for trusted tech development for child-safe products. It is perhaps worth reminding the Committee that one in three connected people is under 18, which is roughly 1 billion children. As the demographics change, the proportion and number of children will rise. It is a huge financial market.
Before I sit down, I shall briefly talk about the AADC, because sometimes Ministers say that we already have a children’s code. The age-appropriate design code covers only information society services (ISS), which automatically limits it, and even the ICO now agrees that its enforcement record is neither extensive nor impressive. It does not clearly cover the urgent area of edtech, which is the subject of another amendment, and, most pertinently to this amendment, it addresses only AI profiling, which means that it is limited in how it can look at the new and emerging challenges of generative AI. A revamp of the AADC to tackle the barriers to enforcement, account for technological advances, cover all products and services likely to be accessed by children and make our data regime AI-sensitive would be welcome. But rather than calling for a strengthening of the AADC, the ICO agreed to the downgrading of children’s data protection in the DPDI Bill and, again, has agreed to the downgrading of protections in the current Bill on ADM, scientific research, onward processing and so on. A stand-alone code for AI development is required because that way we could be sure that children are in the minds of developers at the outset.
It is disappointing that the UK is failing to claim its place as the centre of regulated and trusted innovation. Although we are promised an AI Bill, the Government repeatedly talk of large frontier companies. AI is in every part of a child’s life from the news they read to the prices they pay for travel and goods. It is clear from previous groups that many colleagues feel that a data Bill with no AI provisions is dangerous commercially and for the communities of the UK. An AI Bill with no consideration of the daily impact on children may be a very poor next choice. Will the Minister say why a Labour Government are willing to abandon children to technology rather than building technology that anticipates children’s rights and needs?
My Lords, it is a pleasure to follow my friend the noble Baroness, Lady Kidron, and to give full-throated support to my friend the noble Viscount, Lord Colville, on all his amendments. Given that the noble Baroness mentioned it and that another week has passed since we asked the Minister the question, will we see an AI Bill or a consultation before Santa comes or at some stage in the new year? I support all the amendments in this group and in doing so, as it is the first time I have spoken today in Committee, I declare my technology interests as set out in the register, not least as an adviser to Socially Recruited, an AI business.
I will speak particularly to my Amendment 211A. I have put down “image, likeness and personality” not because I believe they stand as the most important rights being transgressed, or the most important rights we should consider, but to give them specific focus because, right now, they are being largely cut across and ignored, so that all our creatives find their works, and indeed their image, likeness and personality, disappearing into these large foundation AI models with no potential for redress.
Once parts of you such as your name, face or voice have been ingested, as the noble Lord, Lord Clement-Jones, said in the previous group, it is difficult then to have them extracted from the model. There is no sense, for example, of seeking an equitable remedy to put one back in the position one would have been in had the breach not occurred. It is almost “once in, forever in”: works then start to be created based on those factors, features and likenesses, which compete directly with the creatives. This is already particularly prevalent in the music industry.
What plans do the Government have in terms of personality rights, image and likeness? Are they content with the current situation where there is no protection for our great creatives, not least in the music industry? What does the Bill do for our creatives? I go back to the point made by the noble Baroness, Lady Kidron. How can we have all these debates on a data Bill which is silent when it comes to AI, and a product regulation Bill where AI is specifically excluded, and yet have no AI Bill on the near horizon—unless the Minister can give us some up-to-date information this afternoon? I look forward to hearing from her.
My Lords, I should first apologise for not being able to attend Second Reading or, arguably more importantly, to be in Committee last week to support the many amendments of the noble Baroness, Lady Kidron, on child protection. I read Hansard carefully and was deeply depressed to see that we were once again needing to rehearse, as she has done again today, the importance of protecting children in the digital era. It seems to be our lot that there is a group of us who keep coming back. We play the merry-go-round and sit in different places; it is a privilege to sit next to the noble Baroness, Lady Kidron, for the first time in the decade that I have been in the House. I support her Amendment 137. She has given a good exposition of why we should think really carefully about how we protect children in this AI world. I would just like to add one point about AI itself.
We keep being told—in a good way—that AI is an underlying and general-purpose technology. That means we need to properly establish the principles with which we should protect children there. We know that technology is morally neutral; it is the human beings who do the damage. In every other underlying, breakthrough technology, we have learned that we have needed to protect the most vulnerable, whether it was electricity when it first went into factories, toys when they were first distributed on the mass market, or social media, with the age-appropriate design code. I feel that it would be a huge mistake, on the third Bill where many of us have debated this subject matter, for us not to address the fact that, as of today, this is the biggest breakthrough technology of our lifetime. We should recognise that children will need protecting, as well as having the opportunity to benefit from it.
My Lords, I rise briefly to support my friend, the noble Lord, Lord Clement-Jones, and his string of amendments. He made the case clearly: it is simply about access, the right to redress and a clear pathway to that redress, a more efficient process and clarity and consistency across this part of our data landscape. There is precious little point in having obscure remedies or rights—or even, in some cases, as we have discussed in our debates on previous groups, no right or obvious pathways to redress. I believe that this suite of amendments addresses that issue. Again, I full-throatedly support them.
My Lords, I address the amendments tabled by the noble Lord, Lord Clement-Jones. These proposals aim to transfer jurisdiction from courts to tribunals; to establish a new right of appeal against decisions made by the Information Commissioner; and to grant the Lord Chancellor authority to implement tribunal procedure rules. I understand and recognise the noble Lord’s intent here, of course, but I have reservations about these amendments and urge caution in accepting them.
The suggestion to transfer jurisdiction from courts to tribunals raises substantial concerns. Courts have a long-standing authority and expertise in adjudicating complex legal matters, including data protection cases. By removing these disputes from the purview of the courts, the risk is that we undermine the depth and breadth of legal oversight required in such critical areas. Tribunals, while valuable for specialised and expedited decisions, may not provide the same level of rigorous legal analysis.
Cases such as those cited by the noble Lord, Lord Clement-Jones—Killock and another v the Information Commissioner and Delo v the Information Commissioner—demonstrate to me the intricate interplay between data protection, administrative discretion and broader legal principles. It is questionable whether tribunals, operating under less formal procedures, can consistently handle such complexities without diminishing the quality of justice. Further, I am not sure that the claim that this transfer will streamline the system and reduce burdens on the courts is fully persuasive. Shifting cases to tribunals does not eliminate complexity; it merely reallocates it, potentially at the expense of the detailed scrutiny that these cases demand.
I turn to the right of appeal against the commissioner’s decisions. Although the introduction of a right of appeal against these decisions may seem like a safeguard, it risks creating unnecessary layers of litigation. The ICO already operates within a robust framework of accountability, including judicial review for cases of legal error or improper exercise of discretion. Adding a formal right of appeal risks encouraging vexatious challenges, overwhelming the tribunal system and diverting resources from addressing genuine grievances.
I think we in my party understand the importance of regulatory accountability. However, creating additional mechanisms should not come at the expense of efficiency and proportionality. The existing legal remedies are designed to strike an appropriate balance, and further appeals risk creating a chilling effect on the ICO’s ability to act decisively in protecting data rights.
On tribunal procedure rules and centralised authority, the proposed amendment granting the Lord Chancellor authority to set tribunal procedure rules bypasses the Tribunal Procedure Committee, an independent body designed to ensure that procedural changes are developed with judicial oversight. This move raises concerns about the concentration of power and the erosion of established checks and balances. I am concerned that this is a case of expediency overriding the principles of good governance. While I acknowledge that consultation with the judiciary is included in the amendment, it is not a sufficient substitute for the independent deliberative processes currently in place. The amendment risks undermining the independence of our legal institutions and therefore I have concerns about it.
Overall, while these amendments are presented as technical fixes, and I certainly recognise both the problem and the intent, they would have far-reaching consequences for our data protection framework. My party’s vision for governance is one that prioritises stability, legal certainty and the preservation of integrity. We must avoid reforms that, whatever their intent, introduce confusion or inefficiency or undermine public trust in our system. Data protection is, needless to say, a cornerstone of our modern economy and of individual rights. As such, any changes to its governance must be approached with the utmost care.
My Lords, my Amendment 115 would similarly act in that way by making automated decision-making processes explain themselves to the people affected by them. This would be a much better way of controlling the quality of what is going on with automated decision-making than restricting that sort of information to professionals: people who are in any case overworked and have a lot of other things to do. There is no one more interested in the decision of an automated process than the person about whom it is being made. If we are to trust these systems, then their capacity, far beyond any human’s, to take the time to explain why they took the decision they did (which, if the machine is any good, it knows and can easily set out) is surely the way to generate trust: you can see exactly what decision has been made and why, and you can respond to it.
This would, beyond anything else, produce a much better system for our young people when they apply for their first job. My daughter’s friends in that position are receiving unexplained rejections running into the hundreds. This is not a good way to treat young people. It does not help them to improve or to understand what is going on. I completely understand why firms do not explain: they have so many applications that they simply do not have the time or the personnel to sit down and write a response. But that does not apply to an automated decision-making machine. It could produce a much better situation when it comes to hiring.
As I said, my principal concern, to echo that of the noble Viscount, is that it would give us sight of the decisions that have been taken and why. If it becomes evident that they are taken well and for good reasons, we shall learn to trust them. If it becomes evident that they really are not fair or understandable, we shall be in a position to demand changes.
My Lords, it is a pleasure to take part in the debate on this group. I support the spirit of all the amendments debated thus far.
Speaking of spirits, and it being the season, I have more than a degree of sympathy for the Minister. With so many references to her previous work, this Christmas is turning into a bit of the Ghost of Amendments Past for her. That is good, because all the amendments she put down in the past were of excellent quality: well thought through, considered and even-handed.
As has been mentioned many times, we have had three versions of a data Bill in a little over three years. One wonders whether all the elements of the current draft have kept up with what has happened in the outside world over those three years, not least when it comes to artificial intelligence. This goes to the heart of the amendments in this group on automated decision-making.
When the first of these data Bills emerged, ADM was present, but relatively discreetly present, in our society and our economy. Now it would be fair to say that it proliferates across many areas of our economy and our society, often in situations where people find themselves at the sharpest end of the economy and at the sharpest end of these automated decisions, often without even knowing that ADM was present. More than that, even on discovering that ADM was in the mix, depending on which sector of the economy or society the decision is being made in, people may find themselves with no redress, or precious little: employment and recruitment, to name but one sector.
It being the season, it is high time when it comes to ADM that we start to talk turkey. In all the comments thus far, we are talking not just about ADM but about the principles that should underpin all elements of artificial intelligence: that is, they should be human-led. These technologies should be in our human hands, with our human values feeding into human oversight: human in the loop and, indeed, where appropriate, human over the loop.
That goes to elements in my two amendments in this group, Amendments 123A and 123B. Amendment 123A simply sets out, through a number of paragraphs, the principle that if someone is subject to an automated decision, they have the right to a personalised explanation of that decision. That explanation should be accessible: in plain language of their choice, at no cost to them, and in no sense technically or technologically convoluted or opaque. That would be relatively straightforward to achieve, but the positive impact for all those citizens would certainly be more than material.
Amendment 123B goes to the heart of those humans charged with the delivery of these personalised explanations. It is not enough simply to say that there are individuals within an organisation responsible for providing personalised explanations of automated decisions; it is critical that those individuals have the training, the capabilities and, perhaps most importantly, the authority within that organisation to make a meaningful impact with those personalised explanations. If not, this measure may have a small voice but will have absolutely no teeth when it comes to the citizen.
In short, ADM is proliferating so we need to ensure that we have a symmetrical situation for citizens, for consumers, and for anyone who finds themselves in any domain or sector of our economy and society. We must assert the principles: human-led, human in the loop, “Our decisions, our data”, and “We determine, we decide, we choose”. That is how I believe we can have an effective, positive, enabling and empowering AI future. I look forward to the Minister’s comments.
My Lords, I shall speak to the series of amendments on automated decision-making to which I have added my name but are mostly in the name of the noble Lord, Lord Clement-Jones. As he said, we had a rehearsal for this debate last Friday when we debated his Private Member’s Bill so I will not delay the Committee by saying much about the generalities of ADMs in the public sector.
Suffice it to say that human involvement in overseeing AIs must be meaningful—for example, without those humans themselves being managed by algorithms. We must ensure that ADMs comply by design with the Equality Act and safeguard data subjects’ other rights and freedoms. As discussed in earlier groups, we must pay particular attention to children’s rights with regard to ADMs, and we must reinforce the obligation on public bodies to use the algorithmic transparency recording standards. I also counsel my noble friend the Minister that, as we have heard, there are many voices from civil society advising me and others that the new Article 22 of the GDPR takes us backwards in terms of protection.
That said, I want to focus on Amendment 123C, relating to ADMs in the workplace, to which I was too late to add my name but would have done. This amendment follows a series of probing amendments tabled by me to the former DPDI Bill. In this, I am informed by my work as the co-chair of the All-Party Parliamentary Group on the Future of Work, assisted by the Institute for the Future of Work. These amendments were also mirrored during the passage of the Procurement Act and competition Act to signal the importance of the workplace, and in particular good work, as a cross-cutting objective and lens for policy orientation.
Does the Minister agree that the crux of this machinery is solely automated decision-making as a binary thing (it is or it is not) and, therefore, that the absolute key to it is making sure that the humans involved are suitably qualified, and finding some way to establish that, whether by writing a definition or publishing guidelines?
On the question of qualification, the Minister may wish to reflect on the broad discussions we have had in the past around certification and the role it may play. I gently take her back to what she said on Amendment 123A about notification. Does she see notification as the same as a personalised response to an individual?
Noble Lords have asked several questions. First, in response to the noble Viscount, Lord Camrose, I think I am on the same page as him about this being binary, rather than muddying the water by having degrees of meaningful intervention. The ICO already has guidance on how human review should be provided, and this will be updated after the Bill passes to ensure that it reflects what is meant by “meaningful human involvement”. Those issues will be addressed in the ICO guidance but, if it helps, I can write further on that.
I have forgotten the question that the noble Lord, Lord Holmes, asked me. I do not know whether I have addressed it.
In her response the Minister said “notification”. Does she see notification as the same as “personalised response”?
My understanding is that it would be. Every individual who was affected would receive their own notification rather than it just being on a website, for example.
Let me just make sure I have not missed anyone out. On Amendment 123B on addressing bias in automated decision-making, compliance with the data protection principles, including accuracy, transparency and fairness, will ensure that organisations take the necessary measures to address the risk of bias.
On Amendment 123C from the noble Lord, Lord Clement-Jones, I reassure him that the Government strongly agree that employment rights should be fit for a modern economy. The plan to make work pay will achieve this by addressing the challenges introduced by new trends and technologies. I agree very much with my noble friend Lord Knight that, although we have to get this right, there are opportunities for a different form of work, and we should not see this only as a potential negative impact on people’s lives. However, we want to get the balance right with regard to the impact on individuals, to make sure that we get the best out of it rather than the possible negative effects.
Employment rights law is more suitable than data protection law in isolation for regulating the specific use of data and technology in the workplace, as data protection law sets out general rules and principles for processing that apply in all contexts. Noble Lords can rest assured that we take the impact on employment and work very seriously, and as part of our plan to make work pay and the Employment Rights Bill, we will return to these issues.
On Amendments 119, 120, 121 and 122, tabled by the noble Lord, Lord Clement-Jones, the noble Viscount, Lord Colville, and my noble friend Lord Knight, the Government share the noble Lords’ belief in the importance of public sector algorithmic transparency, and, as the noble Lord, Lord Clement-Jones, reminded us, we had a very good debate on this last week. The algorithmic transparency recording standard is already mandatory for government departments and arm’s-length bodies. This is a cross-government policy mandate underpinned by digital spend controls, which means that when a budget is requested for a relevant tool, the team in question must commit to publishing an ATRS record before receiving the funds.
As I said on Friday, we are implementing this policy accordingly, and I hope to publish further records imminently. I very much hope that when noble Lords see what I hope will be a significant number of new records on this, they will be reassured that the nature of the mandation and the obligation on public sector departments is working.
Policy routes also enable us to provide detailed guidance to the public sector on how to carry out its responsibilities and monitor compliance. Examples include the data ethics framework, the generative AI framework, and the guidelines for AI procurement. Additionally, the data protection framework already achieves some of the intended outcomes of these amendments. It requires organisations, including public authorities, to demonstrate how they have identified and mitigated risks when processing personal data. The ICO provides guidance on how organisations can audit their privacy management and ensure a high level of data protection compliance.
I know I have given a great deal of detail there. If I have not covered all the points that the noble Lords have raised, I will write. In the meantime, given the above assurances, I hope that the noble Lord will withdraw his amendment.