Data (Use and Access) Bill [HL] Debate
Grand Committee

My Lords, unusually, I rise to move an amendment, Amendment 138. For the second time in Committee, I find myself heading a group when I know that the noble Baroness, Lady Kidron, will be much better qualified to introduce the subject. Indeed, she has an amendment, Amendment 141, which is far preferable in many ways to mine.
Amendment 138 is designed to ensure that the Information Commissioner produces a code of practice specific to children up to the age of 18 for the purposes of UK law and Convention 108, and pupils as defined by the Education Act 1996, who may be up to the age of 19 or, with special educational needs, up to 25 in the education sector. The charity Data, Tech & Black Communities put it this way in a recent letter to the noble Baroness, Lady Jones:
“We recently completed a community research project examining the use of EdTech in Birmingham schools. This project brought us into contact with over 100 people … including parents, school staff and community members. A key finding was the need to make it easier for those with stewardship responsibility for children’s data, to fulfil this duty. Even with current data protection rights, parents and guardians struggle to make inquiries (of schools, EdTech companies and even DfE) about the purpose behind the collection of some of their children’s data, clarity about how it is used (or re-used) or how long data will be retained for. ‘Opting out’ on behalf of their children can be just as challenging. All of which militates against nuanced decision-making about how best to protect children’s short and long-term interests … This is why we are in support of an ICO Code of Practice for Educational Settings that would enable school staff, parents and learners, the EdTech industry and researchers to responsibly collect, share and make use of children’s data in ways that support the latter’s agency over their ‘digital selves’ and more importantly, will support their flourishing”.
The duties of settings and data processors, and the rights appropriate to the stage of education and children’s capacity, need clarity and consistency. Staff need confidence to access and use data appropriately within the law. As the UNCRC’s General Comment No. 16 (2013) on State Obligations Regarding the Impact of the Business Sector on Children’s Rights set out over a decade ago,
“the realization of children’s rights is not an automatic consequence of economic growth and business enterprises can also negatively impact children’s rights”.
The educational setting is different from a purely commercial interaction, and not only because the data subjects are children. It is more complex because of the disempowered environment and its imbalance of power between the authority, the parents and the child. A further complication is that parents’ and children’s rights are interlinked, as exemplified in the right to education described in UDHR Article 26(3), which states:
“Parents have a prior right to choose the kind of education that shall be given to their children.”
A code is needed because explicit safeguards that the GDPR requires in several places were left out of the drafting of the UK Data Protection Act 2018. Clause 80 of the Bill—“Automated decision-making”—does not address the necessary safeguards of GDPR Article 23(1) for children. Furthermore, removing the protections of the balancing test under the recognised legitimate interest condition will create new risks. Clauses on additional further processing or changes to purpose limitation are inappropriately wide without child-specific safeguards. The volume, sensitivity and intrusiveness of identifying personal data collection in educational settings only increase, while the protections are only ever reduced.
Obligations specific to children’s data, especially
“solely automated decision-making and profiling”
and exceptions, need to be consistent with clear safeguards by design where they restrict fundamental freedoms. What does that mean for children in practice, where teachers are assumed to be the rights bearers in loco parentis? The need for compliance with human rights, security, health and safety, among other standards proportionate to the risks of data processing and respecting the UK Government’s accessibility requirements, should be self-evident and adopted in a code of practice, as recommended by 5Rights’ Digital Futures Commission in its blueprint for educational data governance.
The Council of Europe Strategy for the Rights of the Child (2022-2027) and the UNCRC General Comment No. 25 on Children’s Rights and the Digital Environment make it clear that
“children have the right to be heard and participate in decisions affecting them”.
They recognise that
“capacity matters, in accordance with their age and maturity. In particular attention should be paid to empowering children in vulnerable situations, such as children with disabilities.”
Paragraph 75 recognises that surveillance in educational settings should not take place without the right to object and that teachers need training to keep up with technological developments.
Young people themselves were not invited to participate in the development of this Bill, and their views have not been considered. However, a small sample of parent and pupil voices was captured in the Responsible Technology Adoption Unit’s public engagement work, carried out with the DfE in 2024. The findings back those of Defend Digital Me’s Survation poll in 2018 and show that parents do not know that the DfE already holds named pupil records without their knowledge or permission, and that the data is given away to be reused by hundreds of commercial companies, the DWP, the Home Office and the police. It stated:
“There was widespread consensus that work and data should not be used without parents’ and/or pupils’ explicit agreement. Parents, in particular, stressed the need for clear and comprehensive information about pupil work and data use and any potential risks relating to data security and privacy breaches.”
A code of practice is needed to explain the law and make it work as intended for everyone. The aim of a code of practice for educational settings would be for adherence to create a mechanism for controllers and processors to demonstrate compliance with the legislation or approved certification methods. It would give providers confidence in consistent and clear standards, and it would be good for the edtech sector. It would allow children, parents, school staff and systems administrators to build trust in safe, fair and transparent practice so that their rights are freely met by design and default.
Further, schools give children’s personal data to many commercial companies during a child’s education—not based on consent but assumed for the performance of a task carried out in the public interest. A code should clarify any boundaries of this lawful basis for commercial purposes, where it is an obligation on parents to provide the data and what this means for the child on reaching maturity or after leaving the educational setting.
Again, a code should help companies understand “data protection by design and default” in practice, what constitutes a “significant legal effect”, the edges of “public interest” in data transfers to a third country, and how special categories of data affect children in schools. A code should also support children and families in understanding the effect of the responsibilities of controllers and processors on the execution or limitation of their own rights. It would set out the responsibilities of software platforms that profile users’ metadata to share with third parties, or of commercial apps signed up for in schools that serve adverts during use.
I hope that I have explained exactly why we believe that a code of conduct is required in educational settings. I beg to move.
My Lords, I support and have added my name to Amendment 138 in the name of the noble Lord, Lord Clement-Jones. I will also speak to Amendment 141 in my name and those of the noble Lords, Lord Knight and Lord Russell, and the noble Baroness, Lady Harding.
Both these amendments propose a code of practice to address the use of children’s data in the context of education, and they have much in common. Having heard the noble Lord, Lord Clement-Jones, I associate myself entirely with his remarks and hope that mine will build on them. Both amendments point to the same problem: children’s data is scandalously treated in our schools, and educators need support. This is a persistent and known failure that both the DfE and the ICO have failed to confront over a period of some years.
Amendment 141 seeks to give a sense of exactly what an education code should cover. In doing so, it builds on the work of the aforementioned Digital Futures for Children centre at the LSE, which I chair, the work of Defend Digital Me, the excellent work of academics at UCL, and much of the work relating to education presented to the UN tech envoy in the course of drafting the UN global digital compact.
Subsection (1) of the proposed new clause would require the ICO to prepare a code of practice in connection with the provision of education. Subsection (2) sets out what the ICO would have to take into account, such as that education provision includes school management and safeguarding as well as learning; the different settings in which it takes place; the need for transparency and evidence of efficacy; and all the issues already mentioned, including profiling, transparency, safety, security, parental involvement and the provision of counselling services.
Subsection (3) would require the ICO to have regard to children’s entitlement to a higher standard of protection—which we are working so hard in Committee to protect—their rights under the UNCRC and their different ages and stages of development. Importantly, it also refers to the need and desire to support innovation in education and the need to ensure that the benefits derived from the use of UK children’s data accrue to the UK.
Subsection (4) lists those whom the commissioner would have to consult, and subsection (5) sets out when data processors and controllers would be subject to the code. Subsection (6) proposes a certification scheme for edtech services to demonstrate compliance with UK GDPR and the code. Subsection (7) would require edtech service and product providers to evidence compliance—importantly, transferring that responsibility from schools to providers. Subsection (8) simply defines the terms.
A code of practice is an enabler. It levels the playing field, sets terms for innovators, creates sandbox or research environments, protects children and supports schools. It offers a particularly attractive environment for developing the better digital world that we would all like to see, since schools are identifiable communities in which changes and outcomes could be measured.
My Lords, I shall also speak to Amendment 198 in my name and register my support for the amendments in the name of the noble Lord, Lord Bethell, to which I have added my name. Independent research access is a very welcome addition to the Bill by the Government. It was a key recommendation of the pre-legislative scrutiny committee on the Online Safety Bill in 2021 and I know that I speak for many colleagues in the academic field, as well as many civil society organisations, who are delighted by its swift and definitive inclusion in the Bill.
The objective of these amendments is not to derail the Government’s plans, but rather to ensure that they happen and to make the regime work for children and the UK’s world-class academic institutions and stellar civil society organisations, ensuring that we can all do high-quality research about emergent threats to children and society more broadly.
Amendment 197 would ensure that the provisions in Clause 123 are acted on by removing the Government’s discretion as to whether or not they introduce regulations. It would also impose a deadline of 12 months for the Government to do so. I have said this before, but I have learnt the hard way that good intentions and warm words from the Dispatch Box are a poor substitute for clear provisions in law. A quick search of the Bill reveals that there are 119 uses of the word “must” and 262 uses of the word “may”. Clearly, they are being used to create different obligations or expectations. The Minister may say that this amendment is not needed and that, for all intents and purposes, we can take the word “may” as a “must” or a “will”, but I would prefer to see it in black and white. In fact, if the Government have reserved discretion on this point, I would like to understand exactly what that means for research.
Amendment 198 seeks to ensure that the regulations will enable independent researchers to research how online risks and harms impact different groups, especially vulnerable users including children. We have already discussed the fact that online harms are not experienced equally by users: those who are most vulnerable offline are often the most vulnerable online. In an earlier debate, I talked about the frustrations experienced when tech companies do not report data according to age groups. Failing to do so makes it possible to hide the reality that children are disproportionately impacted by certain risks and harms. This amendment would ensure that children and other vulnerable groups can be studied in isolation, rather than leaving independent researchers to pick through generalised datasets to uncover where harm is amplified and for whom.
I will leave the noble Lord, Lord Bethell, to explain his amendments, but I will just say why it is so important that we have a clear path to researcher access. It is fundamental to the success of the online safety regime.
Many will remember Frances Haugen, the Facebook whistleblower, who revealed the extent to which Meta knew, through its own detailed internal research, how harmful its platforms actually are to young people. Meta’s own research showed that:
“We make body image issues worse for one in three girls”.
Some 32% of teen girls said that, when they have felt bad about their bodies, Instagram has made them feel worse. Were it not for a whistleblower, this research would never have been made public.
After a series of evidence disclosures to US courts as a result of the legal action by attorneys-general at state level, we have heard whistleblowers suggest, in evidence given to the EU, that there will be a new culture in some Silicon Valley firms—no research and no emails. If you have something to say, you will have to say it in person so that it cannot be used against them in court. The irony of that is palpable given the struggle that we are having about user privacy, but it points to the need for our research regime to be watertight. If the companies are not looking at the impact of their own services, we must. I hope that the Government continue their leadership on this issue and accept the amendments in the spirit in which they are put forward.
I have another point that I want the Minister to clarify. I apologise, because I raised this in a private meeting but I have forgotten the answer. Given the number of regulatory investigations, proceedings and civil litigations in which tech companies are engaged, I would like some comfort about the legal exemption in these clauses. I want to understand whether it applies only to advice from and between lawyers or exempts data that may negatively impact companies’ defence or surface evidence of safety failures or deficiencies. The best way that I have of explaining my concern is: if it is habitual for tech companies to cc a lawyer in all their communications on product safety, trust and safety, and so on, would that give them legal privilege?
Finally, I support the noble Lord, Lord Clement-Jones, in his desire for a definition of independent researchers. I would be interested to hear what the Minister has to say on that. I beg to move.
My Lords, I will speak to my Amendments 198A and 198C to 198F. I also support Amendments 197, 198 and 198B, to which I have added my name, all of which address the issue of data for researchers.
As was put very thoughtfully by the noble Baroness, Lady Kidron, platforms are not making decisions about their services with due regard to product safety or with independent oversight. Ofcom’s work enforcing the Online Safety Act will go some way towards shifting the balance towards accountability, but at the moment it makes no provision for researchers’ data access, despite civil society and academic researchers having been at the forefront of highlighting online harms for a decade. The anecdotes that the noble Baroness just gave are powerful testimony to the importance of that. We are, in fact, flying completely blind: making policy and, in this Room, legislation without data, facts and insight about the performance of the algorithms that we seek to address. Were it not for the whistleblowers, we would have nothing to go on, and we cannot rely on whistleblowers to guide our hands.
Rectifying this omission is in the Bill, and I am enormously grateful to the Minister and to my noble friend Lord Camrose for his role in putting it there. It is particularly important because the situation with data for researchers has deteriorated considerably, even in the last 18 months, with Meta shutting CrowdTangle and X restricting researchers’ access to its API. The noble Baroness, Lady Kidron, spoke about what the whistleblowers think, and they think that this is going to get a lot worse in the future.
I welcome the inclusion of these provisions in the Bill. They will be totally transformational to this sector, bringing a level of access to serious analysts and academics, so we can better understand the impact of the digital world, for both good and bad. A good example of the importance of robust research to inform policy-making was the Secretary of State’s recent announcement that the Government were launching a
“research project to explore the impact of social media on young people’s wellbeing and mental health”.—[Official Report, Commons, 20/11/24; col. 250.]
That project will not be very effective if the researchers cannot access the data, so I very much hope that these provisions will be in force before the Government start spending money on it.
To have the desired effect, the data for researchers regime, as described in the Bill, must be truly effective and must not be easily brushed off. That is why the Government need to accept the amendments in this group: to bring clarity and to close loopholes in the scheme as it is outlined in the Bill.
I will briefly summarise the provisions in the amendments in my name. First, we need to make researcher access regulations enforceable in the same way as other requirements in the Online Safety Act. The enforcement provisions in that Act were strengthened considerably as it passed through this House, and I believe that the measures for data for researchers need to be given the same rocket boosters. Amendment 198D will mean that regulated services are required to adhere to the regime, and it will give Ofcom the power to order proper remedial action if regulated services are obfuscating or non-compliant.
Secondly, we need to ensure that any contractual provision of use, such as a platform’s terms of service, is unenforceable if it would prevent
“research into online safety matters”,
as defined in the regulations. This closes an important loophole: it will protect UK researchers carrying out public interest research from nefarious litigation over terms of service violations as platforms seek to obfuscate access to data. We have seen this practice in other areas.
Thirdly, we need to clarify that researchers carrying out applicable research into online safety matters in the UK will be able to access information under the regime, regardless of where they are located. This is a basic point. Amendment 198E would bring the regime in line with the Digital Services Act of the EU and allow the world’s best researchers to study potential harm to UK users.
Ensuring robust researcher access to data contributes to a strong ecosystem of investigation and scrutiny that will help to ensure the effective application of the law, while also guarding against overreach in the moderation of speech. It is time to back UK civil society and academic researchers to ensure that policy-making and regulatory enforcement are as informed as possible. That is why I ask the Minister to support these measures.
My Lords, I thank noble Lords who have welcomed the provisions in the Bill. I very much appreciate that we have taken on board the concerns that were raised in the debates on the previous legislation. I thank the noble Baroness, Lady Kidron, and the noble Lords, Lord Bethell and Lord Clement-Jones, for their amendments.
I will speak first to Amendment 197, tabled by the noble Baroness, Lady Kidron, which would compel the Secretary of State to create a framework and to do so within 12 months of passage. I understand and share her desire to ensure that a framework allowing researcher access is put in place promptly. This is precisely why we brought forward this provision. I reassure her that the department will consult on the framework as soon as possible after the publication of Ofcom’s report.
Turning to Amendments 198 and 198B, tabled by the noble Baroness, Lady Kidron, and the noble Lord, Lord Clement-Jones, respectively, Clause 123 provides the Secretary of State with the power to make regulations relating to researchers’ access to data. I can reassure noble Lords that it does not limit the regulations to the non-exhaustive list of examples provided. I agree that fair and proportionate criteria for who is considered a researcher are critical to the success of the future framework. I reassure noble Lords that in the provision as currently written the Secretary of State can include in the design of the framework the specific requirements that a person must meet to be considered a researcher.
Turning to Amendments 198A and 198D, tabled by the noble Lord, Lord Bethell, while I am sympathetic to his desire to provide a future framework with the robust enforcement powers of the OSA, I assure him that as the provision is written, the Secretary of State can already use the existing enforcement powers of the OSA to support a future framework. Furthermore, should the evidence suggest that additional or different measures would be more effective and appropriate, this provision allows the Secretary of State the flexibility to introduce them.
Turning next to Amendments 198C and 198E, tabled by the noble Lord, Lord Bethell, I understand the spirit of these amendments and note the importance of this issue, given the global nature of the online world. It is entirely reasonable to allow researchers who are not based in the UK to utilise our researcher access framework, as long as the subject of their research is the experience of UK users online. I reassure him that the provisions as drafted already allow the Secretary of State to make regulations permitting non-UK-based researchers to use the framework where appropriate. We plan to use the evidence gathered through our own means and through Ofcom’s report to set out who will be eligible to use the framework in the secondary legislation.
Finally, turning to Amendment 198F, I am aware of the concern that researchers have encountered blockages to conducting research and I am sympathetic to the intentions behind the amendment. We must ensure that researchers can use the future framework without fear of legal action or other consequences. I am conscious that the noble Baroness, Lady Kidron, asked me a specific question about legal exemptions and I will write to her to make that answer much clearer. I reassure noble Lords that the Government are considering the specific issues that the noble Lord raises. For these reasons, I ask that the amendments not be pressed while the Government consider these issues further and I am of course happy to engage with noble Lords in the meantime.
My Lords, I thank the Minister and everyone who spoke. I do not think I heard an answer to the may/must issue, and I have to say that relying solely on Ofcom’s report to set the framework for the regime is not adequate, for two reasons. First, it is no news to the Committee that there is a considerable amount of disquiet about how the Online Safety Act has been reinterpreted in ways that Parliament did not intend. During the passage of this Bill, we are trying to be really clear on the face of the Bill—we will win some and we will lose some—about what Parliament’s intention is, so that the regulator really does what we agree, because that subject is currently quite contentious.
This is a new area, and a lot of the issues that the Minister and, indeed, the noble Viscount, Lord Camrose, raised are here to be sorted out to make sure that we understand collectively what the regime will look like. Having said that, I would like the Government to have heard that we do not wish to rest on the actions of whistleblowers, but we will be increasingly forced to do so if we do not have a good regime. We must understand the capacity of this sector to go to court. They are in court everywhere, all over the world; the sector has deep pockets.
Finally, I welcome the nitpicking of the noble Lord, Lord Arbuthnot. Long may he nitpick. We will make sure that he is content before Report. With that, I beg leave to withdraw the amendment.
My Lords, Amendment 203 is in my name and the names of the noble Lords, Lord Bethell, Lord Stevenson and Lord Clement-Jones. I thank noble Lords wholeheartedly for their support for this measure through two versions of this Bill. I believe that I speak for all signatories in recognising the support of a huge number of colleagues in both Houses and all parties who have expressed their support for this amendment.
It is my understanding that we are going to hear good news from the Dispatch Box. In the event that I am wrong, I shall have more to say once we have heard from the Minister. In the meantime, I want to explain what the problem is that the amendment seeks to solve.
It is illegal in the UK to possess or distribute child sexual abuse material, including AI-generated or computer-generated child sexual abuse material. The laws that the police use to enforce against CSAM are Section 1 of the Protection of Children Act 1978 and Section 160 of the Criminal Justice Act 1988, both of which create offences in respect of indecent photographs or pseudo-photographs of a child. AI content depicting child sexual abuse is illegal under these laws, but creating and distributing the software models needed to generate them is not, which means that those building and distributing software that allows paedophiles to generate bespoke child sexual abuse material have operated with impunity.
There are many services that allow anyone to take any public image and put it in a false situation. Although I have argued elsewhere that AI images should carry a mark of provenance, these services are not the subject of this amendment. This amendment is laser focused on criminalising AI models that are trained on or trained to create child sexual abuse material. They are specific, specialist and currently beyond the reach of the police. The models blend images of children—known children, stock photos, images scraped from social media, school websites or synthetic, fabricated AI depictions of children—with existing CSAM or pornography, and they allow paedophiles to generate bespoke CSAM scenarios of unimaginable depravity, as they are unmitigated by any restrictions that organise the reality of the world. If someone can think, type or say it, they can make it so.
Many of the generative models are distributed for free, but more specialist models are provided on subscription for less than £50 per month. This payment provides any child sexual offender with the ability to generate limitless—and I do mean limitless—child sexual abuse images. But while the police can take action against those who possess those images, they are unable to take action against those who make it possible to do so: the means of production.
A surprising number of people think that AI abuse is a victimless crime, and I want to make it clear that it is not. First, who would be comfortable with the image of their child or grandchild or their neighbour’s child being used in this way? Anyone, adult or child, can appear in AI-generated CSAM. I am not going to say how it can be done, because I do not want my words to be a set of instructions on the public record—but the reality is, any one of us, woman or man, though 99% are women, boy or girl, though it is mostly girls, is a potential victim. If your image is used in this way, you are a victim; if you are forced to watch or copy such imagery, you are a victim; and if you are a child whose real-life abuse is not caught because you are lost in a sea of AI-generated material, you are a victim. Then there is the normalisation of sexual violence against children, which poisons relationships—intimate, familial, across generations, genders and sexes. This is not a victimless crime.
My Lords, first, I thank the speakers for what were really powerful and largely unequivocal contributions.
I am grateful to the Minister. I was expecting something a tiny bit more expansive, but I will take, on record, that we are going to make it a new offence for a person to make, adapt, possess, supply or offer to supply a CSA image generator, including any service, program or information in electronic form that is made, or adapted for use, to create or facilitate the creation of CSA material. I am expecting something that covers all that, and I am expecting it shortly, as the Minister said. I again thank the Safeguarding Minister, Jess Phillips, for her tremendous efforts, as well as some of the civil servants who helped make it leap from one Government to the next. We can be content with that.
I feel less comfortable about the Minister’s answer to the noble Baroness, Lady Owen. We, women victims, experience the gaps in the law. If there are gaps in the law, it is our job, in this Committee and in the other place, to fix them. We all want the same thing; I know the Minister well enough to know that she wants the same thing. So I am going to push back and say that I will support the noble Baroness, Lady Owen, in trying to bring this measure back through this Bill. I believe that the mood of the Committee is with her so whatever mistakes there are on her patch will be fixed before Report, because this is not something that can wait. Kids and women are being hurt.
We all want to celebrate the digital world. I was an early adopter. I had one of those cameras on my computer before anyone else I knew did, so I could not speak to anyone; there was no one to talk to. We want this world to be good. We are not saying something different. On behalf of the noble Baroness, Lady Owen, who is nodding, let me just say that we will come back to this issue. I thank the Minister for her assurance on Amendment 203 and beg leave to withdraw.
My Lords, I am beginning to feel like the noble Lord, Lord Clement-Jones, but I reassure everyone that this is the last day of Committee.
I shall speak to the amendments in this group in my name and that of the noble Lords, Lord Stevenson—he is very sorry not to be in his place today—and Lord Clement-Jones, and my noble friend Lord Freyberg. I thank the News Media Association for its briefing and support. I also thank, for their wonderful and unlikely support, Sir Paul McCartney, Kate Mosse, Margaret Drabble and Richard Osman, alongside the many creative artists who have spoken, written and tweeted and are among the 37,000 people who signed a petition calling for swift action to protect their livelihoods.
I have already declared my interests for the Committee but I add, to be clear, that my husband is a writer of film, theatre and opera; and that, before I came to your Lordships’ House, I spent 30 years as a movie director. As such, I come from and live alongside a community for whom the unlicensed and illegal use of copyrighted content by generative AI developers is an existential issue. I am therefore proud to move and speak to amendments that would protect one of our most financially significant economic sectors, which contributes £126 billion in gross value added to UK GDP; employs 2.4 million people; and brings so much joy and understanding to the world.
Text and data mining without licence or permission is illegal in the UK, unless it is done specifically for non-commercial research. This means that what we have witnessed over the past few years is intellectual property theft on a vast scale. Like many of the issues we have discussed in Committee, this wrongdoing has happened in plain sight of regulators and successive Governments. I am afraid that yesterday’s announcement of a consultation did not bring the relief the industry needs. As Saturday’s Times said,
“senior figures in the creative sector are scathing about the government plans”,
suggesting that the Secretary of State has drunk Silicon Valley’s “Kool-Aid” and that rights reservation is nonsense. An official at the technical briefing for the consultation said that
“rights reservation is a synonym for opt out”.
Should shopkeepers have to opt out of shoplifters? Should victims of violence have to opt out of attacks? Should those who use the internet for banking have to opt out of fraud? I could go on. I struggle to think of another situation where someone protected by law must proactively wrap it around themselves on an individual basis.
The value of our creative industries is not in question; nor is the devastation that they are experiencing as a result of the non-payment for IP. A recent report from the International Confederation of Societies of Authors and Composers, which represents more than 5 million creators worldwide, said that AI developers and providers anticipate the market for GAI music and audiovisual content increasing from €3 billion to €64 billion by 2028—much of it derived from the unlicensed reproduction of creators’ works, representing a transfer of economic value from creators to AI companies. Let there be no misunderstanding of the scale of the theft: we already know that the entire internet has been downloaded several times without the consent or financial participation of millions of copyright holders.
This transfer of economic value from writers, visual artists and composers across all formats and all genres to AI companies is not theoretical. It is straightforward: if you cannot get properly paid for your work, you cannot pay the rent or build a career. Nor should we be taken in by the “manufactured uncertainty” that Silicon Valley-funded gen AI firms and think tanks have sought to create around UK copyright law. Lobbyists and their mouthpieces, such as TechUK, speak of a lack of clarity—a narrative that may have led the Minister, Chris Bryant, to claim that the Government’s consultation was a “win-win”. However, I would like the Minister to explain where the uncertainty lies about who owns these copyrighted works. Also, where is the win for the creative industries in the government proposal, which in one fell swoop deprives artists of control of, and payment for, their work—unless they actively wrap the law around themselves and say “no”—leaving them at the mercy of pirates and scrapers?
Last week, at a meeting in this House attended by a wide range of people, from individual artists to companies representing some of the biggest creative brands in the world, a judge from the copyright court said categorically that copyright lies with the creator. AI does not create alone; it depends on data and material in order to create something else. A technological system that uses that material without permission is theft. The call for a new copyright law is a tactic that delays the application of existing law while the stealing continues. Unlike the physical world, where the pursuit of a stolen masterpiece may eventually result in something of value being returned to its owner, in the digital world, once your IP is stolen, the value is absorbed and fragmented, hidden amid an infinite number of other data points and onward uses. If we continue to delay, much of the value of the creative industries’ rich dataset will already have been absorbed.
The government consultation has been greeted with glee overnight by the CCIA, which lobbies for the biggest tech firms. After congratulating the Government at some length, it says that
“it will be critical to ensure that the transparency requirements are realistic and do not ask AI developers to compromise their work by giving away trade secrets and highly sensitive information that could jeopardise the safety and security of their models”.
In plain English, that means: “We have persuaded the Government to give up creatives’ copyright, and now the campaign begins to protect our own ‘sensitive business information’”. If that is not sufficiently clear to the Committee, it means that they are claiming their own IP while stealing others’, and simultaneously pushing back against transparency, because they do not want an effective opt-out.
The government consultation does not even contain an option of retaining the current copyright framework and making it workable with transparency provisions—the provisions of the amendments in front of us. The Government have sold the creative industries down the river. Neither these amendments nor the creative community are anti-tech; on the contrary, they simply secure a path by which creatives participate in the world that they create. They ensure the continuous sustainable production of human-generated content into the future, for today’s artists and those of tomorrow. The amendments do not extend the fundamentals of the Copyright, Designs and Patents Act 1988, but they ensure that the law can be enforced on both AI developers and third parties that scrape on their behalf. They force transparency into the clandestine black box.
Amendment 204 requires the Secretary of State to set out the steps by which copyright law must be observed by web crawlers and others, making it clear that it applies during the entire lifecycle, from pretraining onwards, regardless of jurisdiction—and that the use of copyrighted works must take place only under licence or with express permission.
Amendment 205 requires the Secretary of State to set out the steps by which web crawlers and general-purpose AI models are transparent. This includes but is not limited to providing a name for a crawler, identifying the legal entity responsible for it, a list of purposes for which it is engaged and what data it has passed on. It creates a transparent supply chain. Crucially, it requires operators of crawlers to disclose the businesses to which they sell the data they have scraped, making it more difficult for AI developers that purchase illegally scraped content to avoid compliance with UK copyright law, overturning current practice in which the operators of crawlers can obscure their own identity or ownership, making it difficult and time-consuming—potentially impossible—to combat illegal scraping.
Amendment 206 requires the Secretary of State to set out by regulation what information web crawlers and general-purpose models must disclose regarding copyrighted works—information such as URL, time and type of data collected and a requirement to inform the copyright holder. This level of granularity, which the tech companies are already pushing against, provides a route by which IP holders can choose or contest the ways in which their work is used, as well as provide a route for payment.
In sum, the amendments create a clear and simple process for identifying which copyright works are scraped, by whom, for what purpose and from which datasets. They provide a process by which existing law can be implemented.
I shall just mention a few more points before I finish. First, there is widespread concern that mashing up huge downloads of the internet—including toxic material, falsehoods and an increasing proportion of artificially generated or synthetic data—will cause models to degenerate or collapse, putting a block on the innovation that the Government and all of us want to see, as well as raising serious safety concerns about the information ecosystem. A dynamic licensing market would provide a continuous flow of identified human-created content from which AI can learn.
Secondly, the concept of a voluntary opt-out regime—or, as the Government prefer, rights reservation—is already dead. In the DPDI Bill, I and others put forward an amendment to make robots.txt, part of the robots exclusion protocol, opt-in. In plain English, that would have meant that the voluntary scheme in which any rights holder can put a note on their digital door saying “Don’t scrape” would have been reversed to become mandatory. Over the last few months, we have seen scrapers ignoring the agreed protocol, even when it is activated. I hope the Minister will explain why he thinks that creators should bear the burden while the scrapers reap the benefit, and whether the Government have done an impact assessment on how many rights holders would manage to opt out versus how many would opt in, given the choice.
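To illustrate the protocol under discussion: the robots exclusion protocol works through a plain-text file, robots.txt, placed at the root of a website, listing which crawlers may fetch which paths. What follows is a minimal illustrative sketch, not a file from any site mentioned in the debate; GPTBot (OpenAI) and CCBot (Common Crawl) are real AI-crawler user-agents, chosen here as examples.

  # Ask two AI training crawlers not to fetch any page on this site
  User-agent: GPTBot
  Disallow: /

  User-agent: CCBot
  Disallow: /

  # All other crawlers remain free to index everything
  User-agent: *
  Disallow:

Nothing in the protocol enforces these directives; a crawler that chooses to ignore the file faces no technical barrier. That is the weakness described above: the note on the digital door is only a request.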
As someone who has spent my life creating IP, protecting IP and sometimes giving IP away, I welcome this debate. I am extremely grateful to the noble Baroness, Lady Kidron, for a very thoughtful set of proposals. The fact that many noble Lords have spoken in this debate shows that the rapid development of AI has clearly raised concerns about how to protect the creative industries. The Government take this very seriously. As the noble Lord, Lord Lucas, pointed out, we need to get it right, which is why we have launched a very wide-ranging consultation on a package of interventions to address copyright and AI issues. It is an important first step in an area where the existing situation is clearly not working and we run the risk of many long-lasting court cases, which will not help the situation in which we find ourselves.
We are committed both to supporting human-centred creativity and to the potential of AI to unlock new horizons. Many in the creative industries use AI very widely already. Our goal is to support AI innovation in the UK while maintaining robust protection for creators and our vibrant creative industry. In response to a point that the noble Baroness, Lady Kidron, raised earlier, option 1 in the consultation refers to existing copyright law and asks for views about maintaining and increasing it. The consultation sets out the Government’s objectives for this area and proposes a range of measures on which we are seeking views. Specifically, it aims to support rights-holders to continue to exercise control over the use of their content and their ability to seek remuneration for this. As many noble Lords have pointed out, that has to be made easy and technically feasible. It also promotes greater trust and transparency and proposes mechanisms by which you can see who is looking at the data and what they are doing with it.
Finally, it aims to support the development of world-leading AI models in the UK by ensuring that access can be appropriately wide but, of course, lawful and with the approval of those from whom the data is obtained. This includes the subjects of the noble Baroness’s amendments. The consultation seeks views on technological measures that can provide greater control over access to and use of online material, as well as transparency measures that help copyright owners understand whether their work is being used by AI developers. Again, this needs to be made easy. Various technologies are coming along that can do this, including, as has been said, the watermarking approach.
Much of this needs to be wrapped into an approach to standards. It is important that this is done in a way that is reproducible and reliable. Through this consultation, we will address some of these issues and seek to continue to get input from stakeholders on all of them. We will also work towards internationally interoperable solutions, as raised by many noble Lords, including the noble Lord, Lord Freyberg, and the noble Earl, Lord Effingham.
I agree with the noble Baroness, Lady Kidron, that a vibrant and effective licensing approach—a system that works well and provides access and rights—is important. She asked about an impact assessment. I do not have the information with me now, but I will write. I look forward to updating her on this work in due course and, in the meantime, hope that she is content to withdraw her amendment.
Does the Minister recognise the characterisation given by noble Lords who have said that this is theft? Currently, we have a law, and copyright is being taken without consent or remuneration. Does he agree with them that this is what the creative industries and, I presume, some of his community are experiencing?
At the moment we have a system where it is unclear what the rights are and how they are being protected, and therefore things are being done which people are unable to get compensation for. We can see that in the court cases going on at the moment. There is uncertainty which needs to be resolved.
I thank the Minister for his answer and welcome him very much to the Dispatch Box—I have not yet had the pleasure of speaking with him in a debate. I hope he saw the shaking heads when he answered my question about theft and this lack of clarity. If you say “Write me the opening chapter of a Stephen King novel”, and the AI can do it, you can bet your bottom dollar that it has absorbed a Stephen King novel. We know that a lot of this material is in there and that it is not being paid for. That goes for issues big and small.
I understand that it is late and we have more to do—I have more to say on other issues—but I want to reiterate three points. First, creative people are not anti-tech; they just want control over the things they create. AI is a creation on top of a creation, and creative people want to be paid for their efforts and to be in control of them. I am not sure whether I can mention it, because it was in a private meeting, but a brand that many people in most countries will have heard of said: “We need to protect our brand. We mean something. An approximation of us is not us. It is not just the money; it is also the control”.
I also make the point that, earlier this week, Canal+ had its IPO on the London Stock Exchange. I heard the CEO answer the question, “Why is it that Canal+ decided to come and do its IPO in the UK when everybody else is scarpering elsewhere?”, by saying a lot of very warm-hearted things about Paddington Bear, then, “Because you have very good copyright laws”. That is what they said. I just want to mention that.
Finally, I am grateful to the Minister for saying that there is the option of staying with the status quo; I will look at that and try to understand it clearly. However, when he writes about the issue that I raised in terms of opting in or opting out—I am grateful to him for doing so—I would also like an answer about where the Government think the money is going to go. What is the secondary value of the AI companies, which are largely headquartered in the US? Where will the IP, which those companies have already said they want to protect—they did so in their response to the Government’s consultation; I said it in my speech, for anyone who was not listening—go? I would like the Government to say what their plans are, if we lose the £126 billion and the 2.4 million jobs, to replace that money and those jobs, as well as their incredible soft power.
With that, I beg leave to withdraw the amendment.
My Lords, it is a privilege to introduce Amendment 207. I thank the noble Lords, Lord Arbuthnot and Lord Clement-Jones, and the right reverend Prelate the Bishop of St Albans, who is unfortunately unwell but wanted me to express his support.
I make it clear that, although I may use the Horizon scandal as an example, this amendment is neither focused on nor exclusive to the miscarriage of justice, malevolence and incompetence related to that scandal. It is far broader than that so, when the Minister replies, I really hope that he or she—I am not sure which yet—will not talk about the criminality of the Post Office, as previously, but rather concentrate on the law that contributed to allowing a miscarriage of justice at that scale. That is what this amendment seeks to address.
I explained during debates on the DPDI Bill that, since 1999, courts have applied
“a common law presumption, in both criminal and civil proceedings, of the proper functioning of machines—that is to say”,
the information from the computer can be presumed to be reliable. I went on to say:
“In principle, there is a low threshold for rebutting this presumption but, in practice … a person challenging evidence derived from a computer will typically have no”.—[Official Report, 24/4/24; col. GC 573.]
knowledge of the circumstances in which the system in question was operated, so cannot possibly demonstrate that it failed. As Paul Marshall, the barrister who represented some of the postmasters, explains, this puts the onus on the defendant to explain to the jury the problems they encountered when all they could actually do was point to the shortfalls they had experienced—in the Horizon case, that the cash received did not match the balancing figure on the computer screen. They did not have access to the system or the record of its past failures, and they had no knowledge of what the vulnerabilities were. They knew only that it did not work.
The reality is that anyone who knows the first thing about programming or computer science knows that there are bugs in the system. Indeed, any one of us who has agreed to an update for an app or computer software understands that bug fixing is a common aspect of program maintenance. When I discussed this amendment with a computer scientist of some standing, he offered the opinion that there are likely to be 50 bugs per 1,000 lines of code; many complex systems run to tens of millions of lines of code, which at that rate would mean hundreds of thousands, if not millions, of bugs in a single system.
Perhaps the most convincing thing of all is to look at software contracts. For the past 20 years at least, a contract has been likely to contain words to this effect: “No warranty is provided that the operation of the software will be uninterrupted or error free, or that all software errors will be corrected”. The same clause applies when we say yes to a new Apple upgrade and sign a EULA—an end-user licence agreement. In plain English, for two decades at least, those who provide software have insisted that computer information is not to be considered reliable. That is written into their commercial agreements, so the fact that computer information is not reliable is agreed by those who know about computer information.
My Lords, I thank the noble Baroness and the noble Lord, Lord Arbuthnot, for Amendment 207 and for raising this important topic. The noble Baroness and other noble Lords are right that this issue goes far wider than Horizon. We could debate what went wrong with Horizon, but the issues before us today are much wider than that.
The Government are agreed that we must prevent future miscarriages of justice. We fully understand the intention behind the amendment and the significance of the issue. We are actively considering this matter and will announce next steps in the new year. I reassure noble Lords that we are on the case with this issue.
In the meantime, as this amendment brings into scope evidence presented in every type of court proceeding and would have a detrimental effect on the courts and prosecution—potentially leading to unnecessary delays and, more importantly, further distress to victims—I must ask the noble Baroness whether she is content to withdraw it at this stage. I ask that on the basis that this is an ongoing discussion that we are happy to have with her.
I thank the Minister, in particular for understanding that this goes way beyond Horizon. I would be very interested to be involved in those conversations, not because I have the great truth but because I have access to people with the great truth on this issue. In the conversations I have had, there has been so much pushing back. A bit like with our previous group, it would have been better to have been in the conversation before the consultation was announced than after. On that basis, I beg leave to withdraw the amendment.
My Lords, the good news is that this is the last time I shall speak this evening. Amendment 211 seeks to ensure that the value of our publicly held large datasets is realised for the benefit of UK citizens.
Proposed new subsection (1) gives the Secretary of State the power to designate datasets held by public bodies, arm’s-length institutions or other datasets held in the public interest as sovereign data assets.
Proposed new subsection (2) lists a number of requirements that the Secretary of State must have regard to when making a designation. Factors include: the security and privacy of UK citizens; the ongoing value of the data assets; the rights of IP holders; the values, laws and international obligations of the UK; the requirement to give preferential access to UK-headquartered companies, organisations and the public sector; the requirement for data to be stored in the UK; and the design of application programming interfaces facilitating access to the assets by authorised licence holders. It also sets out the stakeholders whom the Secretary of State must consult when considering which datasets to designate as sovereign data assets. We heard in a previous debate that education data might be a good candidate.
Proposed new subsection (3) requires the setting up of a transparent licensing system. Proposed new subsection (4) requires those managing sovereign data assets to report annually on their value and anticipated return to UK subjects. This would include, for example, licence payments, profit share agreements and “in kind” returns, such as access to products or services built using sovereign data assets. Proposed new subsection (5) gives an oversight role to the National Audit Office, proposed new subsection (6) provides a definition, and proposed new subsections (7) and (8) specify that regulations made under the clause are subject to parliamentary approval.
When I raised this issue at Second Reading, the Minister answered positively, in that she felt that what I was suggesting was embodied in the plans for a national data library:
“The national data library will unlock the value of public data assets. It will provide simple, secure and ethical access to our key public data assets for researchers, policymakers and businesses, including those at the frontier of AI development, and make it easier to find, discover and make connections across those … databases. It will sit at the heart of an ambitious programme of reform that delivers the incentives, investment and leadership needed to secure the full benefits for people and the economy”.—[Official Report, 19/11/24; col. 196.]
That is a very valid and positive picture. My comments build on it because, since Second Reading, I have sought details about the national data library. It seems that plans are nascent and that the level of funding, as I understand it, matches neither the ambition set out by the Minister nor what many experts think is necessary. One of my concerns—it will not surprise the Committee to hear this, as it has come up a couple of times on previous groups—is that the library appears to be a mechanism for facilitating access, rather than for understanding, realising and protecting the value of these public data assets.
In the meantime, announcements of access to public data keep coming. We have worries about Palantir and the drip-feed of deals with OpenAI and Google, the latest of which was reported in the Health Service Journal, which said:
“The national Federated Data Platform will be used to train AI models for future use by the NHS, according to NHS England’s chief data and analytics officer”.
That sounds great, but the article went on to question the basis of the arrangement and the safeguards around it. That is the question.
We in this House already understand the implications of an “adopt now, ask questions later” approach. For example, as reported recently in Computer Weekly, Microsoft has now admitted to Scottish policing bodies that it cannot guarantee the sovereignty of UK policing data hosted on its hyperscale public cloud infrastructure. That is a huge problem for Police Scotland, and one that is very likely to be mirrored across all sorts of government departments, given that the technology is central to so many of them. The proposed amendments offer a route to ask questions as you adopt technology, not after you have lost control.
My Lords, I am grateful to the noble Baroness, Lady Kidron, for her amendment. I agree with her that the public sector has a wealth of data assets that could be used to help our society achieve our missions and contribute to economic growth.
As well as my previous comments on the national data library, the Government’s recent Green Paper, Invest 2035: The UK’s Modern Industrial Strategy, makes it clear that we consider data access part of the modern business environment, so improving data access is integral to the UK’s approach to growth. However, we also recognise the value of our data assets as part of this approach. At the same time, it is critical that we use our data assets in a trustworthy and ethical way, as the noble Baroness, Lady Kidron, and the noble Lord, Lord Tarassenko, said, so we must tackle these issues carefully.
This is an active area of policy development for the Government, and we need to get it right. I must therefore ask the noble Baroness to withdraw her amendment. However, she has started a debate that will, I hope, carry on; we would be happy to engage in it going forward.
I thank all speakers, in particular my noble friend Lord Tarassenko for his perspective. I am very happy to discuss this matter and let the Official Opposition know that this is a route to something more substantive to which they can agree. I beg leave to withdraw my amendment.