King’s Speech Debate
Viscount Colville of Culross (Crossbench - Excepted Hereditary)
Lords Chamber
My Lords, I was encouraged by the Government’s White Paper on AI published earlier this year, with its stated intention of extending AI regulation throughout the economy.
The gracious Speech mentioned the UK leading international discussion on developing AI safely, and of course much has been made of the Bletchley AI summit. While that was happening, I joined a lower-profile but equally interesting fringe AI summit, attended by hundreds of young AI developers, ethicists and policy thinkers from all over the world. They pointed out that AI is already deployed across multiple government systems and the private sector to increase productivity and effectiveness in the areas of medicine, science and employment, to mention a few.
Although the Government have had success in bringing together disparate players to look ahead at AI, the real threat is already here, as my noble friend Lady Kidron said. The National Cyber Security Centre’s annual review highlights these threats only too well. It says that
“large language models … will almost certainly … make the spread of disinformation easier; and that deepfake campaigns are likely to become more advanced”
by the time of the next general election.
Generative AI can already produce videos of people saying anything in a convincing imitation of their voice and features. At the moment, AI models that simulate somebody’s voice in audio are freely available; their video equivalent is available commercially but will be freely available within months. It will soon be possible to make videos of anybody saying anything and spread them across the internet. The use of this technology to make deepfake political statements threatens to cause havoc in the coming elections, leaving voters not knowing what to believe.
When I asked the Minister, on 24 October, what could be done to ban them, he told me that it was not possible because they are developed abroad. However, under the Online Safety Act the law now requires foreign players to abide by our requirements to prevent online harm to children and illegal harms to all. I echo other noble Lords in pointing out that there is no legislation in the gracious Speech to show that the Government are taking this very present threat at all seriously.
Any new AI law needs to ensure that this country is pro-innovation, in the hope of making us a global superpower. The White Paper laid out important principles for understanding how models should be developed; however, the successful development of AI in Britain will depend on effective regulation across every sector of the economy where the models are being trained, and before they are deployed.
The White Paper says that existing regulators could be given extra powers to deal with AI. That is fine for areas of the economy which already have strong regulation, such as medicine, science and air travel, but AI is already being deployed in sectors such as education and employment, where the regulators do not have the powers or resources to examine the ways in which it is being used in their fields. For instance, which employment regulator can look at possible bias in robo-firing by companies and in the management of employees by algorithm? Can the Minister say why no AI legislation is being brought forward in this new Parliament? What plans are there to enhance AI-regulating powers in sectors where such regulation is very weak?
I am concerned not just by the way that AI models are being trained but by the state of the data being used to train them. Public trust is crucial to the way this data is used, and the new legislation must help build that trust, so it will be very important to get the data protection Bill right. Generative AI needs to be trained on millions, if not billions, of pieces of data, which, unless scrutinised rigorously, can be biased or, worse, toxic, racist and sexist. The developers using data to train AI models must be alert to what can happen if bad data is used; it will have a terrible impact on racial, gender, socioeconomic and disabled minorities.
I am concerned that crucial public trust in technology will be damaged by some of the changes to data protection set out in the Bill. I fear that the new powers available to the Secretary of State to issue instructions and to set out strategic priorities for the regulator will weaken its independence. Likewise, the reduction in data protection officers and in impact statements for all data that is not high risk also threatens to damage public trust in the data on which AI models are trained.
We see this technology evolving so fast that the Government must increase their focus on the ethics and transparency of data use. The Government should encourage senior members of organisations to look at the whole area of data, from data assurance to its exploitation by AI. I know that the Government want to reduce the regulatory burden of data management for businesses, but I suggest that a reputation for good control and use of the data they hold will also make them more competitive.
So much of our concern in the digital space is with the extraordinary powers that the tech companies have accumulated. I am very pleased that the digital markets Bill is giving the Digital Markets Unit powers to set up market inquiries into anti-competitive practices that harm UK markets. Building on what was said by the noble Baroness, Lady Stowell, for me one of the most egregious, and one of the most urgent, issues is the market in journalistic content. Spending on advertising for regional newspapers in this country has declined from £2.6 billion in 1990 to £240 million at the end of last year. As a result, the country’s newspapers are closing and journalism is being restricted by the move of advertising to the big tech companies. That is the real problem—it is not, as the noble Lord, Lord Black, said, caused by competition from the BBC.
This is compounded by those companies aggregating news content generated by news creators and not paying them a fair price. So I am pleased to see the introduction of a means for tech companies to make proportionate payment for journalistic content. I hope that the conduct requirement to trade on fair terms will be sufficient. However, if the final-offer mechanism has to be used to force an offer from the tech companies, the lengthy enforcement period means that it could take many years before the CMA is able to deploy it. There need to be strict time limits on every step if we are to make the FOM a credible incentive to negotiate. Like the noble Baroness, Lady Stowell, I ask for CMA decisions to be appealable through the shorter judicial review process, rather than the longer merits standard asked for by the tech companies.
Finally, I am very pleased by the clauses to help digital subscribers, but I will make one plea. It is often very difficult to terminate a contract online, with the “unsubscribe” link hidden away in some digital corner. I suggest that we follow the example of Germany, which requires all digital contracts to include an easily accessible cancellation button.