King’s Speech (4th Day) Debate
Viscount Colville of Culross (Crossbench - Excepted Hereditary)
Lords Chamber
My Lords, as many other noble Lords have said, artificial intelligence will revolutionise our economy and our society during the next decade. It will radically improve our productivity, research capability and delivery of public services, to name but a few, so I am pleased that the Digital Information and Smart Data Bill will enable innovative uses of data to be safely developed and deployed.
I hope that this Bill will begin to address the wider risks AI poses to us all unless it is developed and released safely. This Government need to ensure that AI develops to support our economy and society, and that it does not take society in dangerous and unintended directions. At all stages of the training and deployment of AI, there are economic and social risks. There are dangers the whole way through the supply chain, from the initial data ingestion of the massive datasets needed to set up these foundation models to their training and deployment, which I hope will begin to be addressed by the Bill.
My concern is that there can be differences in the inputting and modification of AI models that humans do not consider significant, but which could have major and possibly adverse effects on the behaviour of AI systems. It is essential that formal verification techniques are applied throughout, so that the safety of these systems can be proven at every stage of the process.
However, the massive costs of training and developing these models, which can run into billions of pounds, have put huge pressure on the tech companies to monetise them, and to do so quickly. This has led to rapid development of systems but underinvestment in safety measures. Many of us were impressed when, at the Bletchley summit last year, the previous Government obtained voluntary guarantees from the big AI developers to open up their training data and allow the latest generative AI models to be reviewed by the AI Safety Institute, so that third-party experts could assess the safety of the models. However, since then the commitment has not been adhered to. I am told that three out of four of the major foundation model developers have failed to provide pre-release access to their latest frontier models for the AI Safety Institute.
The tech companies are now questioning whether they need to delay the release of their new models to await the outcome of the institute’s safety tests. In a hugely competitive commercial environment, it is not surprising that the companies want to deploy them as soon as possible. I welcome the Government’s commitment during the election campaign to introduce binding regulation on big developers to ensure the safe development of their models. I look forward to the Secretary of State standing by his promise to put on a statutory footing the release of the safety data from new frontier models.
However, these safety measures will take great expertise to enforce. The Government must give regulators the resources they need to ensure that they are effective. If the Government are to follow through with AI safety oversight by sectoral regulators, I look forward to the setting up of the new Regulatory Innovation Office, which will both oversee where powers overlap and pinpoint lacunae in the regulation. However, I would like to hear from the Minister the extent of the powers of this new office.
I hope that at next year’s French summit on AI the Government will be at the centre of the development of safety standards and will push for the closest international collaboration. There need to be joint evaluations of safety and international co-operation of the widest kind; otherwise, developers will simply go jurisdiction shopping. The Government therefore need not just to work closely with the US Artificial Intelligence Safety Institute and the new EU AI regulator but to ensure transparency. The best way to do this is to involve multi-stakeholder international organisations, such as the ISO and the UN-run ITU, in the process. It might be slower, but it will give vital coherence to international agreement on the development of AI safety.
I am glad to hear the Minister say that the Government will lead a drive to make this country the centre of the AI revolution. It is also good that DSIT will be expanded to bring in an incubator for AI, along with the strategy for digital infrastructure development. I hope that this will be combined with support for the creative industries, which generated £126 billion of revenue last year and grew by 6%, an amazing performance when set against the more sluggish growth of much of the rest of the economy. I hope that members of the creative industries will be on the Government’s new industrial strategy council and that the government-backed infrastructure bank will look not just at tangible assets but at the less tangible assets that need supporting and encouraging across the creative industries.
To bring together AI and the creative industries, the Government need to develop a comprehensive IP regime for datasets used to train AI models, as the noble Lord, Lord Holmes, just told us. There has been much in the press about the use of data without the creator’s consent, let alone remuneration. I hope that DSIT and DCMS will come together to generate an IP regime that will have data transparency, a consent regime and the remuneration of creators at its heart.
I hope that the gracious Speech will lead us into an exciting new digital era in which Britain is a leader in the safe, transparent development and rollout of a digital revolution across the world.