My Lords, I too am very grateful to the noble Lord, Lord Holmes of Richmond, for introducing this important Artificial Intelligence (Regulation) Bill. In my contribution today, I will speak directly to the challenges and threats posed to visual artists by generative AI and to the need for regulatory clarity to enable artists to explore the creative potential of AI. I declare my interest as having a background in the visual arts.
Visual artists have expressed worries, as have their counterparts in other industries and disciplines, about their intellectual property being used to train AI models without their consent, credit or payment. In January 2024, lists containing the names of more than 16,000 non-consenting artists whose works were allegedly used to train the Midjourney generative AI platform were accidentally leaked online, further intensifying the debate on copyright and consent in AI image creation.
The legality of using human artists’ work to train generative AI programs remains unclear, but disputes over documents such as the Midjourney style list, as it became known, provide insight into the real procedures involved in turning copyrighted artwork into AI reference material. These popular AI image-generation models are extremely profitable for their owners, the majority of whom are based in the United States. Midjourney was valued at around $10.5 billion in 2022. It stands to reason that, if artists’ IP is being used to train these models, it is only fair that they be compensated, credited and given the option to opt out.
DACS, the UK’s leading copyright society for artists, of which I am a member, conducted a survey that received responses from 1,000 artists and their representatives, 74% of whom were concerned about their own work being used to train AI models. Two-thirds of artists cited ethical and legal concerns as a barrier to using such technology in their creative practices. DACS also heard first-hand accounts of artists who found that working creatively with AI has its own set of difficulties, such as the artist who made a work that included generative AI and wanted to distribute it on a well-known global platform. The platform did not want the liabilities associated with an unregistered product, so it asked for the AI component to be removed. If artists are deterred from using AI or face legal consequences for doing so, creativity will suffer. There is a real danger that artists will miss out on these opportunities, which would worsen their already precarious financial situation and challenging working conditions.
In the same survey, artists expressed fear that human-made artworks will have no distinctive or unique value in the marketplace in which they operate, and that AI may thereby render them obsolete. One commercial photographer said, “What’s the point of training professionally to create works for clients if a model can be trained on your own work to replace you?” Artists rely on IP royalties to sustain a living and invest in their practice. UK artists are already low-paid and two-thirds are considering abandoning the profession. Another artist remarked in the survey, “Copyright makes it possible for artists to dedicate time and education to become a professional artist. Once copyright has no meaning any more, there will be no more possibility to make a living. This will be detrimental to society as a whole”.
It is therefore imperative that we protect artists’ copyright and provide fair compensation when their works are used to train artificial intelligence. While the Bill references IP, artists would have welcomed a specific clause on remuneration and an obligation for owners of copyright material used in AI training to be paid. To that end, it is critical to maintain a record of every work that AI applications use, particularly to validate the original artist’s permission. There is currently no legal requirement to disclose the content on which AI systems are trained. Record-keeping requirements are starting to appear in regulatory proposals related to AI worldwide, including those from China and the EU.
The UK ought to adopt a similar mandate, requiring companies to keep records of the works that their AI systems have ingested. To differentiate AI-generated images from human-made works, the Government should ensure that any commercially available AI-generated works are labelled as such. As the noble Lord, Lord Holmes, has already mentioned, labelling shields consumers from false claims about what is and is not AI-generated. Furthermore, given that many creators work alone, every individual must have access to clear, appropriate redress mechanisms so that they can meaningfully challenge situations in which their rights have been infringed. That said, I welcome the inclusion in the Bill of a requirement that the use of any training data be preceded by informed consent. This measure will go some way towards safeguarding artists’ copyright and giving them the agency to determine how their work is used in training, and on what terms.
In conclusion, I commend the noble Lord, Lord Holmes, for introducing this Bill, which will provide much-needed regulation. Artists themselves support these measures, with 89% of respondents to the DACS survey expressing a desire for more regulation around AI. If we want artists to use AI and be creative with new technology, we need to make it ethical and viable.
My Lords, I too add my thanks to the noble Lord, Lord Ravensdale, for securing today’s timely debate. With rapid advancements in artificial intelligence, the possibilities seem boundless, but they also come with potentially significant risks. Like the noble Lord, Lord Kakkar, I will speak to how the opportunities and risks of the development of AI pertain to healthcare.
Machine learning, and more recently deep learning—commonly referred to as AI—have already shown remarkable potential in various fields, and both harbour opportunities to transform healthcare in ways that were previously unimaginable. AI can be used to process vast amounts of medical data, including patient records, genomic information and imaging scans, and to assist doctors in more accurate and timely diagnosis and prognosis. The early detection of diseases and personalised treatment plans can dramatically improve people’s quality of life and help save countless lives. AI can be used to analyse the genetic make-up of patients and, in time, will better predict how individuals respond to specific treatments, leading to more targeted and effective therapies, fewer adverse reactions and improved overall treatment success rates.
AI-assisted automation can streamline administrative tasks, freeing up healthcare professionals to focus on direct patient care. That has the potential to improve productivity dramatically in healthcare, as well as patient satisfaction, at a time when waiting lists and workforce shortages are, rightly, giving rise to concerns about their impact on our well-being and the UK economy. AI-powered algorithms can significantly accelerate, and thereby de-risk, drug discovery and development, potentially leading to breakthrough medications for diseases that have so far remained incurable.
While the promises of AI in healthcare are alluring, we must acknowledge its limitations and the potential risks associated with its development. The use of vast amounts of data to train AI models is bound to raise concerns about data privacy and security. Unauthorised access or data breaches could have severe consequences for public trust in new uses of this potentially game-changing technology. The models that underpin AI are only as good as the datasets they are trained on. Bias in the data underpinning AI in healthcare could lead to discriminatory decisions and exacerbate healthcare inequalities. Complex algorithms can be challenging to interpret, leading to a lack of transparency in decision-making processes. This opacity is liable to raise questions about accountability and give rise to new ethical considerations. We must also ensure that we do not enter into trading arrangements that might prevent us from assessing the efficacy and risks of AI developed elsewhere for use in healthcare settings.
Crucially, where risks have the potential to be matters of life or death, we must resist the temptation to under-resource the pertinent regulators, and we should be mindful of hyperbole in our pursuit of innovation. To harness fully the potential of AI in healthcare while mitigating its risks, comprehensive and adaptive regulatory frameworks are imperative at both national and international levels. The UK Government, in collaboration with international organisations, should commit to developing common standards and guardrails, by making the most of the global summit on AI safety that they will host in the autumn and by contributing to the Hiroshima AI Process established by the G7. Any guardrails should be guided by the precautionary principle and prioritise patient safety, both now and in the future.
AI used in healthcare must undergo rigorous testing and validation to ensure its accuracy, safety and effectiveness. Independent bodies such as the MHRA can oversee this process if they are appropriately resourced, instilling confidence in both healthcare providers and patients. As the noble Lords, Lord Browne, Lord Bilimoria and Lord Holmes, and others said, the public should be involved in shaping AI regulations. While the Government’s AI task force is to be welcomed, it is imperative that civil society be engaged from the outset in the development of standards and guardrails applicable to AI in healthcare. The ongoing development of AI in healthcare harbours immense promise and potential. However, it is crucial that we approach this transformative technology with a careful understanding of its risks and a clear commitment to robust regulation and maintaining public trust. By fostering collaboration, we can usher in a new era of healthcare that is safer and more efficient, and that delivers improved patient outcomes for all.