Westminster Hall

Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.
Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.
This information is provided by Parallel Parliament and does not form part of the official record.
It is a pleasure to serve under your chairship, Ms McVey. I thank my hon. Friend the Member for Bury North (Mr Frith) for securing this important debate.
The question of machine learning tools—AI—and their use of intellectual property is a key test of our time. What we decide to do now will have ramifications culturally, socially and economically long into the future. Large language models such as ChatGPT, a form of machine learning, have already used pirated and copyrighted material, without the consent of the people who created it, to train their models. It should be self-evident that that is a problem. It is a well-established right that people retain ownership of their work, with limited exceptions for education or critique. We have clear copyright laws. We have collective licensing schemes. Yet those have been ridden roughshod over by machine learning developers.
I am not a luddite. I am very excited about the potential for machine learning to make our lives better, just as other technology has done before. The potential for large datasets to identify health concerns and make diagnostics more accurate—with a programme able to predict the folds of proteins, saving scientists time that they can spend on the next thorny issue—is exciting stuff. It is important to remember that technology is morally neutral. The technology itself is not good or bad. It is a tool—nothing more, nothing less—and we as humanity get to decide how we use that tool. To use that tool, we need to understand it, at least in terms of how we interact with it.
For example, we need to know that AI can lie: it will invent things. One of the best examples I have heard was when a large language model tool was asked by a huge “Doctor Who” fan to tell him about “Doctor Who” episodes, and it simply made some up—perfectly plausible episodes that did not exist and have never existed. If anyone here is ever tempted to ask ChatGPT, be warned: it might not tell you the truth.
As well as understanding the potential and limitations of the technology itself, it is also important that we create frameworks that align with our values and do not roll over for mega-corporations that really do not care for our values. Meta, which owns Facebook, has argued that individual creative works have no value in themselves as they individually barely affect the performance of large language models. As a Vanity Fair article pointed out, it is a bit like an orchestra arguing that it should not pay an individual musician because the solo bassoon cannot play the whole piece by itself.
If large language model technology as a whole relies on creative works—and it does—then some form of respect for the rights of creatives must be found. We have existing copyright laws. We could simply enforce them and ensure that the tools are there to do so. I urge the Government to treat machine learning for what it is: a tool to be used well or used badly. Let us choose well.