Creative Industries: Rights Reservation Model
Viscount Camrose (Conservative - Excepted Hereditary)
Grand Committee

I thank all noble Lords for their uniformly brilliant contributions to this important debate. I particularly thank the noble Lord, Lord Foster, for securing this debate and introducing it so powerfully. To start with a statement of the obvious: artificial intelligence can do us great good and great harm. I know we are here mainly to avert the latter, but I open with a few thoughts on the former.
I should like to make two points in particular. First, the UK is often said to have a productivity problem, and AI, even at its current level of capability, offers a great chance to fix this by automating routine tasks, improving decision-making and streamlining workflows. Secondly, it has often been said, since the early days of e-commerce, that innovative use of technology was the preserve of the private sector, whereas the public sector was less nimble and consequently less productive. Those days must soon be over. Some of the best datasets, especially in this country, are public: health, education and geospatial in particular. Safely exploiting them will require close public-private collaboration, but if we are able to do so—and, I stress, do so safely—the productivity rewards will be extraordinary. This is why we, on these Benches, greatly welcome the AI action plan.
AI’s potential to revolutionise how we work and create is undeniable. In the creative industries, we have already seen its impact, with more than 38% of businesses incorporating AI technologies into their operations as of late last year. Whether in music, publishing, design or film, AI offers tools that enhance productivity, enable innovation, and open new markets. However, the key to all these prizes is public acceptance, the key to public acceptance is trustworthiness, and the key to trustworthiness is not permitting the theft of any kind of property, physical or intellectual.
This brings us to copyright and the rights of creators whose works underpin many of these advances. Copyright-protected materials are often used to train AI systems, too often without the permission, or even knowledge, of creators. Many persuasive and powerful voices push for laws, or interpretations of laws, in this country that prevent this happening. If we are able to create such laws, or such interpretations, I am all for them. I am worried, however, about creating laws we cannot enforce, because copyright can be enforced only if we know it has been infringed.
The size and the international distribution of AI training models render it extremely challenging to answer the two most fundamental questions, as I said on Tuesday. First, was a given piece of content used in a training model? Secondly, if so, in what jurisdiction did this take place? An AI lab determined to train a model on copyrighted content can do so in any jurisdiction of its choice. It may or may not choose to advise owners of scraped content, but my guess is that for a large model of 100 billion parameters, the lab might not be as assiduous in this as we would like. So, enforcement remains a significant challenge. A regulatory framework that lacks clear, enforceable protections risks being worse than ineffective in practice: it risks creating false confidence that eventually kills trust in, and public acceptance of, AI.
So, although we welcome the Government’s decision to launch a public consultation to address these challenges, it is vital that it leads to an outcome that does three things. First, needless to say, it must protect products of the mind from unlawful exploitation. Secondly, it must continue to allow AI labs to innovate, preferably in the UK. Thirdly, it must be enforceable. We all remember vividly Tuesday’s debate on Report of the DUA Bill. I worry that there is a pitfall in seeing AI and copyright policy as a zero-sum struggle between the first two of those objectives. I urge noble Lords, especially the Minister, to give equal emphasis and priority to all three of those goals.
I shall close with a few words on standards. As the Minister has rightly recognised, the key to an enforceable regime is internationally recognised technical standards, particularly, as I have argued, on digital watermarks to identify copyrighted content. A globally recognised, machine-readable watermark can alert scraping algorithms to copyrighted materials and alert rights holders to the uses of their materials. It may even allow rights holders to reserve their rights, opt out automatically or receive royalties automatically. In Tuesday’s debate, I was pleased to hear the Minister confirm that the Government will consider such standards as part of the consultation response.
Of course, the challenge here is that any such standards are—this is the bluntest possible way I can put it—either internationally observed and accepted or pointless. In this country, we have an opportunity to take the lead on creating them, just as we took the lead on setting standards for frontier AI safety in 2023 at Bletchley Park. I urge the Minister to strain every sinew to develop international standards. I say now that I and my party are most willing to support and collaborate on the development of such standards.