Lords Chamber

My Lords, I declare my interests set out in full on the register, including as an advisor to the Institute for Ethics in AI at Oxford University, chair of the Digital Futures for Children centre at the LSE and chair of the 5Rights Foundation. I add my welcome to my noble friend Lord de Clifford, whom I had the pleasure of meeting yesterday, and I look forward to his maiden speech.
I start by quoting Marcus Fysh MP, who said in the other place:
“this is such a serious moment in our history as a species. The way that data is handled is now fundamental to basic human rights … I say to those in the other place as well as to those on the Front Benches that … we should think about it incredibly hard. It might seem an esoteric and arcane matter, but it is not. People might not currently be interested in the ins and outs of how AI and data work, but in future you can bet your bottom dollar that AI and data will be interested in them. I urge the Government to work with us to get this right”.—[Official Report, Commons, 29/11/23; col. 878.]
He was not the only one on Report in the other place who was concerned about some of the provisions in the Bill, bemoaned the lack of scrutiny and urged the Government to think again. Nor was he the only one who reluctantly asked noble Lords to send the Bill back to the other place in better shape.
I associate myself with the broader points made by both noble Lords who have already spoken—I do not think I disagreed with a word that they said—but my own comments will primarily focus on the privacy of children, the case for data communities, access for researchers and, indeed, the promises made to bereaved parents and then broken.
During the passage of the Data Protection Act 2018, your Lordships’ House, with cross-party support, introduced the age-appropriate design code, a stand-alone data protection regime for the under-18s. The AADC’s privacy-by-design approach ushered in a wave of design change to benefit children: TikTok and Instagram disabled direct messaging from unknown adults to children; YouTube turned off auto-play; Google turned safe search on by default for children; 18-plus apps were taken out of the Play Store; TikTok stopped notifications through the night; and Roblox stopped tracking and targeting children for advertising. These were just a handful of hundreds of changes to products and services likely to be accessed by children. Many of these changes have been rolled out globally, meaning that while other jurisdictions cannot police the code, children in those places benefit from it. As the previous Minister, the noble Lord, Lord Parkinson, acknowledged, it contributes to the UK’s reputation for digital regulation and is now being copied around the globe.
I set this out at length because the AADC not only drove design change but also established the crucial link between privacy and safety. That is why it is hugely concerning that children have not been explicitly protected from the changes in the Bill that lessen user data protections. I have given Ministers notice that I will seek to enshrine the principle that children have the right to a higher bar of data protection by design and default; to define children’s data as sensitive personal data in the Bill; and to exclude children from proposals that risk eroding the impact of the AADC, notably in risk assessments, automated processing, onward processing, direct marketing and the extended research powers of commercial companies.
Minister Paul Scully said at Second Reading in the other place:
“We are committed to protecting children and young people online … organisations will still have to abide by our Age-appropriate design code”.—[Official Report, Commons, 17/4/23; col. 101.]
I take it from those words that any perceived or actual diminution of children’s data rights is inadvertent, and that it remains the Government’s policy that the Bill should not weaken the AADC as currently configured. Will the Minister confirm that it is indeed the Government’s intention to protect the AADC, and that he is willing to work with me to ensure that that is the outcome? I will also seek a requirement for the ICO to create a statutory children’s code in relation to AI. The ubiquitous deployment of AI technology to recommend and curate is nothing new, but the rapid advances in generative AI capabilities mark a new stage in its evolution. In the hundreds of pages of the ICO’s non-binding Guidance on AI and Data Protection, its AI and Data Protection Risk Toolkit and its advice to developers on generative AI, there is but one mention of the word “child”—in a case study about child benefit.
The argument made was that children are covered by the AADC, which underlines again just how consequential it is. However, since adults are covered by data law and yet it is considered necessary to have specific AI guidance for them, the one in three users who is under 18 deserves the same consideration. I am not at liberty to say more today, but later this week—perhaps as early as tomorrow—information will emerge that underlines the urgent need for specific consideration of children’s safety in relation to generative models. I hope that the Minister will agree that an AI code for kids is an imperative rather than a nice-to-have.
Similarly, we must deliver data privacy to children in education settings. Given the extraordinary rate at which highly personal data seeps out of schools into the commercial world, including to gambling companies and advertisers, coupled with the scale of tech adoption in schools, it is untenable to continue to treat tech inside school as a problem for schools and tech outside school as a problem for regulators. The spectre of a nursery teacher having enough time and knowledge to interrogate the data protection terms of a singing app, or the school ICT lead having to take on global companies such as Google and Microsoft to set the terms of their students’ privacy, is frankly ridiculous, but that is the current reality. Many school leaders feel abandoned by the Government’s insistence that they should be responsible for data protection when both the AADC and the Online Safety Act have been introduced but schools benefit from neither. It should be the role of the ICO to set data standards for edtech and to ensure that providers are held to account if they fall short. As it stands, a child enjoys more protection on the bus to school than in the classroom.
Finally on issues relating to children, I want to raise a technical issue around the production of AI-generated child sexual abuse material. I recognise the Government’s exemplary record on tackling CSAM but, unfortunately, innovation does not stop. While AI-generated child sexual abuse content is firmly in scope of UK law, it appears that models or plug-ins trained on CSAM, or trained to generate it, are not. At least four laws, the earliest from 1978, are routinely used to bring criminal action against CSAM and its perpetrators, so I would be grateful if the Minister would agree to explore the issue with the police unit that has raised it with me and make an explicit commitment to close any gaps identified.
We are at an inflection point, and however esoteric and arcane the issues around data may appear, to downgrade a child’s privacy even by a small degree has huge implications for their safety, identity and selfhood. If the Government fail to protect and future-proof children’s privacy, they will simply be giving with one hand in the OSA and taking away with the other in this Bill.
Conscious that I have had much to say about children, I will briefly put on the record issues that we can debate at greater length in Committee. While data law largely rests on the assumption of a relationship between an individual and a service, we have seen over the past couple of decades that power lies in having access to large datasets. The Bill offers a wonderful opportunity to put that data power in the hands of new entrants to the market, be they businesses or communities, by allowing individuals to share their data rights and to assign them to third parties for agreed purposes. I have been inspired by approaches coming out of academia and the third sector, which have supported the drafting of amendments to find a route that would enable the sharing of data rights.
Similarly, as the noble Lord, Lord Knight, said, we must find a route to access commercial datasets for public interest research. I was concerned that, when the former Secretary of State Jeremy Wright queried in the other place why the much-touted research access had not materialised in the Bill, the Minister appeared to suggest that it was already covered. The current drafting embeds the asymmetries of power by allowing companies to access user data, including for marketing and creating new products, but does not extend access for public interest research into the vast databases held by those same companies. There is a feeling of urgency emerging as our academic institutions see their European counterparts gain access to commercial data under the EU’s Digital Services Act. There is an increased need for independent research to support our new regulatory regimes such as the Online Safety Act. This is an easy win for the Government and I hope that they grasp it.
Finally, I noted very carefully the words of the Minister when he said, in relation to a coroner’s access to data, that the Secretary of State had made an offer to fill the gap. This is a gap that the Government themselves created. During the passage of the Online Safety Act, we agreed to create a humane route to access data when a coroner had reason to suspect that a regulated company might have information relevant to the death of a child. The Government have reneged by narrowing the scope to children who have taken their own lives. Expert legal advice says that there are multiple scenarios in which this narrowing of scope creates a gaping hole in provision for the families of murdered children, and it introduces uncertainty and delay in cases where it is not clear at the outset how a child died.
I must ask the Minister what the Government are trying to achieve here and whom they are trying to please. Given the numbers, narrowing the scope is unnecessary, disproportionate and egregiously inhumane. This is about the parents of murdered children. The Government lack compassion. They have created legal uncertainty and betrayed and re-traumatised a vulnerable group to whom they made promises. As we go through this Bill and the competition Bill, the Minister will at some points wish the House to accept assurances from the Dispatch Box. The Government cannot assure the House until the assurances that they gave to bereaved parents have been fulfilled.
I will stop there, but I urge the Minister to respond to the issues that I have raised rather than leave them for another day. The Bill must uphold our commitment to the privacy and safety of children. It could create an ecosystem of innovative data-led businesses and keep our universities at the forefront of tech development and innovation. It simply must fulfil our promise to families who this Christmas and every other Christmas will be missing a child without ever knowing the full circumstances surrounding that child’s death. That is the inhumanity that we in this House promised to stop—and stop it we must.
Lords Chamber

I think there are two things. First, we are extremely keen, and have set this out in the White Paper, that the regulation of AI in this country should be highly interoperable with international regulation—I think all countries regulating would agree on that. Secondly, I take some issue with the characterisation of AI in this country as unregulated. We have very large areas of law and regulation to which all AI is subject. That includes data protection, human rights legislation, competition law, equalities law and many other laws. On top of that, we have the recently created central AI risk function, whose role is to identify risks appearing on the horizon, or indeed cross-cutting AI risks, and to take them forward. Beyond that, we have the most concentrated and advanced thinking on AI safety anywhere in the world to take us forward on the pathway towards safe, trustworthy AI that drives innovation.
My Lords, given the noble Viscount’s emphasis on the gathering of evidence and evidence-based regulation, can we anticipate having a researchers’ access to data measure in the upcoming Data Protection and Digital Information Bill?
I thank the noble Baroness for her question and recognise her concern. In order to be sure that I answer the question properly, I undertake to write to her with a full description of where we are and to meet her to discuss further.
Lords Chamber

My Lords, I too welcome the right reverend Prelate the Bishop of Newcastle. I admire her bravery in wearing the colours of Sunderland and Newcastle simultaneously.
I declare my interests as chair of the 5Rights Foundation, chair of the Digital Futures Commission at the LSE and adviser to the Institute for Ethics in AI at Oxford. Like others, I will start with the Bletchley Park summit, which was kicked off by the Prime Minister, who set out his hopes for an AI-enabled world while promising to tackle its potential dangers head-on. He said:
“Criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse”—
but these are not potential dangers; they exist here and now.
In the race for AI prominence and the vast riches the technology promises, the tech leaders came to town warning us that the future they are creating is untrammelled, unprincipled and insecure, and that AI will overwhelm human agency. That language of existential threat makes for fabulous headlines, but it rather disempowers the rest of us. If we ask whether we want to supercharge the creation of child sexual abuse material, I would hazard a guess that the answer will be no; if we ask whether it is okay for facial recognition trained on white faces to prevent a black parent or child getting a security pass to enter a school, again no; or if we ask whether something should be done just because it is technically possible—the creation of a disease or a weapon—again no. Indeed, we have a record of containing the distribution of inventions that have the capability of annihilating us.
AI is not separate and different, and the language that we use to describe it—either its benefits or threats—must make that clear. AI is built, used and purveyed by business, government, civil society and even criminals. It is part of the human arrangements over which, for the moment, we still have agency. Language that disempowers us is part of the deliberate strategy of tech exceptionalism, advocated by industry lobbyists over decades, which has successfully secured the privatisation of technology, creating untold wealth for a few while outsourcing the cost to society. Who owns AI, who benefits, who is responsible and who gets hurt is still in the balance and I would assert that these are questions that we must deal with here and now.
I was disappointed to hear the noble Viscount say earlier at Questions that the Government were taking a sit-back-and-wait approach, so I have three rather more modest questions for the Minister, each of which could be tackled here and now. The first is: what plans do the Government have to ensure the robust application of our existing laws? As we saw earlier, the large language models and image creation services have used copyright material at scale. Getty Images has been testing this in court on behalf of its artists and photographers, but other rights holders, including some of the world’s finest authors, are unable to mount such a challenge individually while their art and their livelihoods are scraped into vast datasets from which they do not benefit. I ask the Minister whether a first and urgent task for the new AI Safety Institute should be an analysis of how new models are failing to uphold existing law and rights obligations.
Secondly, how do the Government plan to use their legislative programme to tackle gaps that have been identified? For example, the creation, distribution and consumption of CSAM content is illegal, covered by at least three separate laws in the UK. But not one of these laws covers the models or plug-ins that create CSAM at scale—in one case, more than 20,000 images in a matter of hours—so the upcoming data protection Bill provides us with an opportunity to make it an offence to train, share or possess software that is trained on, or trained to produce, CSAM content.
Also on the Prime Minister’s list is disinformation. Synthetic information that passes for real is also a here-and-now problem: the London Mayor, whose voice was fabricated; celebrities falsely endorsing products; or a child’s picture scraped from a school website to train those aforesaid CSAM models. The loss of control of one’s personhood carries with it a democratic deficit and potentially overwhelming individual suffering. I ask the Minister whether the Government are willing to put beyond doubt that AI-generated biometric and image data constitutes a form of personal data over which an individual, whether adult or child, has rights, including the right to object to its use.
Both the data Bill and the digital markets Bill could create new data models—a subject that the noble Baroness, Lady Stowell, articulated very well in a recent article in the Times. New approaches to data rights, with new owners of data, are one way of having a voice in our AI-enabled future.
Thirdly and finally, I would like to ask the Minister why the Government have left children on the margins. I attended two official fringe events of the summit, one hosted by the then Home Secretary about child sexual abuse, the other convened by St Mary’s and the Turing Institute about embedding children’s rights in AI systems. Children are early adopters of technology—canaries in the coal mine—and many of us know the cost of poorly regulated digital environments for them. I am bewildered that, so soon after Royal Assent to the Online Safety Act, and in clear sight of the challenges that AI brings, the Government risk downgrading children’s data rights rather than explicitly protecting the age-appropriate design code and the definitions on which it is founded. Children should have been front and centre of the concerns at Bletchley, not pushed to the fringe, and perhaps the Minister could repair that damage by putting them front and centre of the new AI Safety Institute. After all, it is children who will inhabit the world we are building.
AI will create enormous benefits and upheaval across all sectors, but it also promises to put untold wealth and power in the hands of even fewer people. However, there are things in the here and now that we can do to ensure that technology innovates in ways that support human agency. It is tech exceptionalism that poses an existential threat to humanity, not the technology itself.