Data (Use and Access) Bill [HL] Debate
My Lords, I support what the noble Baroness, Lady Freeman, said. Her maiden speech was a foretaste of how good her subsequent speeches would be and of how dedicated she is to openness, which is absolutely crucial in this area. We are going to have to get used to a lot of automatic processes and come to consider that they are by and large fair. Unless we are able to challenge them, understand them and see that they have been properly looked after, we are not going to develop that degree of trust in them.
Anyone who has used current AI programs will know about the capacity of AI for hallucination. The noble Lord, Lord Clement-Jones, uses them a lot. I have been looking, with the noble Lord, Lord Saatchi, at how we could use them in this House to deal with the huge information flows we have and to help us understand the depths of some of the bigger problems and challenges we are asked to get a grip on. But AI can just invent things, leaping at an answer that is easier to find, ignoring two-thirds of the evidence and not understanding the difference between reliable and unreliable witnesses.
There is so much potential, but there is so much that needs to be done to make AI something we can comfortably rely on. The only way to get there is to be absolutely open and allow and encourage challenge. The direction pointed out by the noble Lord, Lord Clement-Jones, and, most particularly by the noble Baroness, Lady Freeman, is one that I very much think we should follow.
My Lords, I will very briefly speak to Amendment 30 in my name. Curiously, it was in the name of the noble Viscount, Lord Camrose, in Committee, but somehow it has jumped.
On the whole, I have always advocated for age-appropriate solutions. The amendment refers to preventing children from consenting to the use of their special category data in automated decision-making, simply because there are some things that children should not be able to consent to.
I am not sure that this exact amendment is the answer. I hope that the conversation we had before the dinner break will prompt some further thought about this issue—about how automated decision-making affects children specifically—so that we can deal with it in a slightly different way.
While I am on my feet, I want to say that I was very struck by the words of my noble friend Lady Freeman, particularly about efficacy. I have seen so many things that have purported to work in clinical conditions that have failed to work in the complexity of real life, and I want to associate myself with her words and, indeed, the amendments in her name and that of the noble Lord, Lord Clement-Jones.
I start with Amendment 26, tabled by the noble Viscount, Lord Camrose. As he said in Committee, a principles-based approach avoids excessive prescription and so ensures that our rules remain fit for purpose in the face of fast-evolving technologies. The data protection framework achieves this by requiring organisations to apply data protection principles when personal data is processed, regardless of the technology used.
I agree with the principles that have been developed for AI, which are useful in the context for which they were put together, but introducing separate principles for AI could cause confusion about how the data protection principles are to be interpreted when other technologies are used. I note the comment that there is significant overlap between the two sets of principles, and the comment from the noble Viscount that there are situations in which one set would catch things that the other would not. I have been unable to identify such examples, and I hope that the noble Viscount will accept the Government’s rationale for protecting the framework’s technology-neutral set of principles rather than having two separate sets.
Amendment 28 from the noble Lord, Lord Clement-Jones, would extend the existing safeguards for decisions based on solely automated processing to decisions based on predominantly automated processing. These safeguards protect people when there is no meaningful human involvement in the decision-making. Predominantly automated decision-making, by contrast, already includes meaningful human involvement—and I shall say a bit more about that in a minute—so extending the safeguards to it could create uncertainty over when they are required. This may deter controllers from using automated systems that have significant benefits for individuals and society at large. However, the Government agree with the noble Lord on strengthening the protections for individuals, which is why we have introduced a definition of solely automated decision-making as decision-making which lacks “meaningful human involvement”.
I thank noble Lords for Amendments 29 and 36 and the important points raised in Committee on the definition of “meaningful human involvement”. This terminology, introduced in the Bill, goes beyond the current UK GDPR wording to prevent cursory human involvement being used to rubber-stamp decisions as not being solely automated. The point at which human involvement becomes meaningful is context-specific, which is why we have not sought to be prescriptive in the Bill. The ICO sets out in its guidance its interpretation that meaningful human involvement must be active: someone must review the decision and have the discretion to alter it before the decision is applied. The Government’s introduction of “meaningful” into primary legislation does not change this definition, and we are supportive of the ICO’s guidance in this space.
As such, the Government agree on the importance of the ICO continuing to provide its views on the interpretation of terms used in the legislation. Our reforms do not remove the ICO’s ability to do this, or to advise Parliament or the Government if it considers that the law needs clarification. The Government also acknowledge that there may be a need to provide further legal certainty in future. That is why there are a number of regulation-making powers in Article 22D, including the power to describe meaningful human involvement or to add additional safeguards. These could be used, for example, to impose a timeline on controllers to provide human intervention upon the request of the data subject, if evidence suggested that this was not happening in a timely manner following implementation of these reforms. Any regulations must follow consultation with the ICO.
Amendment 30 from the noble Baroness, Lady Kidron, would prevent law enforcement agencies seeking the consent of a young person to the processing of their special category or sensitive personal data when using automated decision-making. I thank her for this amendment and agree about the importance of protecting the sensitive personal data of children and young adults. We believe that automated decision-making will continue to be rarely deployed in the context of law enforcement decision-making as a whole.
Likewise, consent is rarely used as a lawful basis for processing by law enforcement agencies, which are far more likely to process personal data for the performance of a task, such as questioning a suspect or gathering evidence, as part of a law enforcement process. Where consent is needed—for example, when asking a victim for fingerprints or something else—noble Lords will be aware that Clause 69 clearly defines consent under the law enforcement regime as
“freely given, specific, informed and unambiguous”
and
“as easy … to withdraw … as to give”.
So the tight restrictions on its use will be crystal clear to law enforcement agencies. In summary, I believe the taking of an automated decision based on a young person’s sensitive personal data, processed with their consent, to be an extremely rare scenario. Even when it happens, the safeguards that apply to all sensitive processing will still apply.
I thank the noble Viscount, Lord Camrose, for Amendments 31 and 32. Amendment 31 would require the Secretary of State to publish guidance specifying how law enforcement agencies should go about obtaining the consent of the data subject to process their data. To reiterate a point made by my noble friend Lady Jones in Committee, Clause 69 already provides a definition of “consent” and sets out the conditions for its use; they apply to all processing under the law enforcement regime, not just automated decision-making, so the Government believe this amendment is unnecessary.
Amendment 32 would require the person reviewing an automated decision to have sufficient competence and authority to amend the decision if required. In Committee, the noble Viscount also expressed the view that a person should be “suitably qualified”. Of course, I agree with him on that. However, as my noble friend Lady Jones said in Committee, the Information Commissioner’s Office has already issued guidance which makes it clear that the individual who reconsiders an automated decision must have the “authority and competence” to change it. Consequently, the Government do not feel that it is necessary to add further restrictions in the Bill as to the type of person who can carry out such a review.
The noble Baroness, Lady Freeman, raised extremely important points about the performance of automated decision-making. The Government already provide a range of products, but A Blueprint for Modern Digital Government, laid this morning, makes it clear that part of the new digital centre’s role will be to offer specialist assurance support, including, importantly in relation to this debate,
“a service to rigorously test models and products before release”.
That function will be in place and available to departments.
On Amendments 34 and 35, my noble friend Lady Jones previously advised the noble Lord, Lord Clement-Jones, that the Government would publish new records under the Algorithmic Transparency Recording Standard imminently. I am pleased to say that 14 new records were published on 17 December, with more to follow. I accept that these are not yet in the state in which we would wish them to be. Where these amendments seek to ensure that the efficacy of such systems is evaluated, A Blueprint for Modern Digital Government, as I have said, makes it clear that part of the digital centre’s role will be to offer such support, including the testing service I have just described. I hope that this provides reassurance.
My Lords, we have waited with bated breath for the Minister to show his hand, and I very much hope that he will reveal the nature of his bountiful offer of a code of practice on the use of automated decision-making.
I will wear it as a badge of pride to be accused by the noble Viscount, Lord Camrose, of introducing an analogue concept. I am still keen to see the word “predominantly” inserted into the Bill in reference to automated decision-making.
As the Minister can see, there is considerable unhappiness with the nature of Clause 80. There is a view that it does not sufficiently protect the citizen in the face of automated decision-making, so I hope that he will be able to elaborate further on the nature of those protections.
I will not steal any of the thunder of the noble Baroness, Lady Kidron. For some unaccountable reason, Amendment 33 is grouped with Amendment 41. The groupings on this Bill have been rather peculiar and at this time of night I do not think any long speeches are in order, but it is important that we at least have some debate about the importance of a code of conduct for the use of AI in education, because it is something that a great many people in the education sector believe is necessary. I beg to move.
My Lords, I shall speak to Amendment 41 in my name and in the names of my noble friend Lord Russell, the noble Baroness, Lady Harding, and the noble Lord, Lord Clement-Jones. The House can be forgiven if it is sensing a little déjà vu, since I have proposed this clause once or twice before. However, since Committee, a couple of things have happened that make the argument for the code more urgent. We have now heard that the Prime Minister thinks that regulating AI is “leaning out” when we should be, as the tech industry likes to say, leaning in. We have had Matt Clifford’s review, which does not mention children even once. In the meantime, we have seen the rollout of AI in almost all the products and services that children use. In one company—a household name that I will not mention—an employee was so concerned that they rang me to say that nothing had been checked except whether the platform would fall over.
Amendment 41 does not seek to solve what is a global issue of an industry arrogantly flying a little too close to the sun, nor does it grapple with how we could put this extraordinary technology to use for humankind on a more equitable basis than the current extractive, winner-takes-all model; it is far more modest than that. It simply says that products and services that engage with children should undertake a mandatory process that considers their specific vulnerabilities related to age. I want to stress this point. When we talk about AI, increasingly we imagine the spectre of diagnostic benefits or the multiple uses of generative models, but of course AI is neither new nor confined to these uses. It is all around us and, in particular, it is all around children.
In 2021, Amazon’s AI voice assistant, Alexa, instructed a 10-year-old to touch a live electrical plug with a coin. Last year, Snapchat’s My AI gave adult researchers posing as a 13-year-old girl tips on how to lose her virginity with a 31-year-old. Researchers were also able to obtain tips on how to hide the smell of alcohol and weed and how to conceal Snapchat conversations from their parents. Meanwhile, character.ai is being sued by the mother of a 14-year-old boy in Florida who died by suicide after becoming emotionally attached to a companion bot that encouraged him to take his own life.
In these cases, the companies in question responded by implementing safety measures after the fact, but how many children have to put their fingers in electrical sockets, injure themselves, take their own lives and so on before we say that those measures should be mandatory? That is all that the proposed code does. It asks that companies consider the ways in which their products may impact on children and, having considered them, take steps to mitigate known risk and put procedures in place to deal with emerging risks.
One of the frustrating things about being an advocate for children in the digital world is how much time I spend articulating avoidable harms. The sorts of solutions that come after the event, or suggestions that we ban children from products and services, distract from the fact that the vast majority of products and services could, with a little forethought, be places of education, entertainment and personal growth for children. However, children are by definition not fully mature, which puts them at risk. They chat with smart speakers, disclosing details that grown-ups might consider private. One study found that three- to six-year-olds believed that smart speakers had thoughts, feelings and social abilities and were more reliable than human beings when it came to answering fact-based questions.
I ask the Minister: should we ban children from the kitchen or living room in which the smart speaker lives, or demand, as we do of every other product and service, minimum standards of product safety based on the broad principle that we have a collective obligation to the safety and well-being of children? An AI code is not a stretch for the Bill. It is a bare minimum.
My Lords, I will speak very briefly, given the hour, just to reinforce three things that I have said as the wingman to the noble Baroness, Lady Kidron, many times, sadly, in this Chamber in child safety debates. The age-appropriate design code that we worked on together and which she championed a decade ago has driven real change. So we have evidence that setting in place codes of conduct that require technology companies to think in advance about the potential harms of their technologies genuinely drives change. That is point one.
Point two is that we all know that AI is a foundational technology which is already transforming the services that our children use. So we should be applying to this foundational technology the same principle that was so hard fought for non-AI digital services 10 years ago. We know that, however well meaning, technology companies’ development stacks are always contended: they always have more good things that they think they could do to improve their products for their consumers, and make them money, than they have the resources to do. However much money they have, their stacks are still contended. That is the nature of technology businesses. This means that they never get to the safety-by-design issues unless they are required to. It was no different 150 or 200 years ago, as electricity rolled through the factories of the mill towns in the north of England: it required health and safety legislation. AI requires health and safety legislation. You start with codes of conduct and then you move forward, and I really do not think that we can wait.
I thank the noble Lord, Lord Clement-Jones, for Amendment 33, and the noble Baroness, Lady Kidron, for Amendment 41, and for their thoughtful comments on AI and automated decision-making throughout this Bill’s passage.
The Government have carefully considered these issues and agree that there is a need for greater guidance. I am pleased to say that we are committing to use our powers under the Data Protection Act to require the ICO, through secondary legislation, to produce a code of practice on AI and solely automated decision-making. This code will support controllers in complying with their data protection obligations through practical guidance. I reiterate that the Government are committed to this work as an early priority once the Bill receives Royal Assent. The secondary legislation will have to be approved by both Houses of Parliament, which means that it will be scrutinised by Peers and by Members in the other place.
I can also reassure the noble Baroness that the code of practice will include guidance about protecting data subjects, including children. The new ICO duties set out in the Bill will ensure that where children’s interests are relevant to any activity the ICO is carrying out, it should consider the specific protection of children. This includes when preparing codes of practice, such as the one the Government are committing to in this area.
I understand that noble Lords will be keen to discuss the specific contents of the code. The ICO, as the independent data protection regulator, will have views as to the scope of the code and the topics it should cover. We should allow it time to develop those thoughts. The Government are also committed to engaging with noble Lords and other stakeholders after Royal Assent to make sure that we get this right. I hope noble Lords will agree that working closely together to prepare the secondary legislation requesting this code is the right approach, rather than pre-empting its exact scope now.
The noble Lord, Lord Clement-Jones, mentioned edtech. I should add—I am getting into a habit now—that it is discussed in a future group.
Before the Minister sits down, I welcome his words, which are absolutely what we want to hear. I understand that the ICO is an independent regulator, but it is often the case that the scope of, and some of Parliament’s concerns about, a code are conveyed to it from this House—or, indeed, from the other place. I wonder whether we could find an opportunity to make sure that the ICO hears Parliament’s wishes on the scope of the children’s code, at least. I am sure the noble Lord, Lord Clement-Jones, will say something similar on his own behalf.
It will be clear to the ICO from the amendments that have been tabled and my comments that there is an expectation that it should take into account the discussion we have had on this Bill.