National AI Strategy and UNESCO AI Ethics Framework Debate
Jim Shannon (Democratic Unionist Party, Strangford), debate with the Department for Science, Innovation & Technology
Commons Chamber
I am grateful, Mr Deputy Speaker, that this Adjournment debate on the regulation of artificial intelligence has been granted. I declare my interest as set out in the Register of Members’ Financial Interests.
Britain is at a turning point. Having left the European Union, irrespective of what people thought about that decision, we have decided to go it alone. This new chapter in the long history of our great nation is starting to unfold, and we have a number of possible destinations ahead. We stand here today as a country with great challenges and an identity crisis: what is modern Britain to become? Our economy is, at best, sluggish; at worst, it is in decline. Our public services are unaffordable, inefficient and not delivering the quality of service the public should expect. People see and feel those issues right across the country: in their pay packets, in the unfilled vacancies at work, and in their local schools, GP surgeries, dentists, hospitals and high streets. All of this is taking place in a quickly changing world in which Britain is losing influence and control, and for hostile actors who wish Britain—or the west more broadly—harm, those ruptures in the social contract present an opportunity to exploit.
Having left the European Union, I see two destinations ahead of us: we can either keep doing what we are doing, or modernise our country. If we take the route to continuity, in my view we will continue to decline. There will be fewer people in work, earning less than they should be and paying less tax as a consequence. There will be fewer businesses investing, meaning lower profits and, again, lower taxes. Income will decline for the Treasury, but with no desire to increase the national debt for day-to-day spending, that will force us to take some very difficult decisions. It will be a world in which Britain is shaped by the world, instead of our shaping it in our interests.
Alternatively, we can decide to take the route to modernity, where workers co-create technology solutions at work to help them be more productive, with higher pay as a consequence; where businesses invest in automation and innovation, driving profits and tax payments to the Treasury; where the Government take seriously the need for reform and modernisation of the public sector, using technology to individualise and improve public services while reducing the cost of those services; and where we equip workers and public servants with the skills and training to seize the opportunities of that new economy. It will be a modern, innovative Britain with a modern, highly effective public sector, providing leadership in the world by leveraging our strengths and our ability to convene and influence our partners.
I paint those two pictures—those two destinations: continuity or modernity—for a reason. The former, the route to continuity, fails to seize the opportunities that technological reforms present us with, but the latter, the route to modernity, is built on the foundations of that new technological revolution.
This debate this evening is about artificial intelligence. To be clear, that is computers and servers, not robots. Artificial intelligence means, according to Google,
“computers and machines that can reason, learn, and act in such a way that would normally require human intelligence or that involves data whose scale exceeds what humans can analyse.”
These AI machines can be categorised in four different ways. First, reactive machines have a limited application based on pre-programmed rules. These machines do not use memory or learn for themselves. IBM’s Deep Blue machine, which beat Garry Kasparov at chess in 1997, is an example. Secondly, limited memory machines use memory to learn over time by being trained using what is known as a neural network, which is a system of artificial neurons based on the human brain. These AI machines are the ones we are used to using today. Thirdly, theory of mind machines can emulate the human mind and take decisions, recognising and remembering emotions and reacting in social situations as a human would. Some argue that these machines do not yet exist, but others argue that AI such as ChatGPT, which can interact with a human in a humanlike way, shows that we are on the cusp of a theory of mind machine existing. Fourthly, self-aware machines are machines that are aware of their own existence and have capabilities equal to or better than those of a human. Thankfully, as far as I am aware, those machines do not exist today.
That all might be interesting for someone who is into tech, but why am I putting it on the public record today? I am doing so because there are a number of risks that we as a Parliament and the Government must better understand, anticipate and mitigate. These are the perils on our journey to continuity or modernity. Basic artificial intelligence, which helps us to find things on the internet or to book a restaurant, is not very interesting. The risk is low. More advanced artificial intelligence, which can perform the same tasks as a junior solicitor, a journalist or a student who is supposed to complete their homework or exam without the assistance of AI, presents a problem. We already see the problems faced by workers who have technology thrust upon them, instead of being consulted about its use. The consequences are real today and carry medium risks—they are disruptive.
Then we have the national security or human rights-level risks, such as live facial recognition technologies that inaccurately identify someone as a criminal, or a large language model that can help a terrorist understand how to build a bomb or create a novel cyber-security risk, or systems that can generate deepfake videos, photos or audio of politicians saying or doing things that are not true to interfere with elections or to create fake hostage recordings of someone’s children.
I commend the hon. Gentleman on bringing this debate forward. It is a very deep subject for the Adjournment debate, but it is one that I believe is important. Ethics must be accounted for to ensure that any industries using AI are kept safe. One issue that could become increasingly prominent is the risk of cyber-threats, which he referred to, and hacking, which not even humans can sometimes prevent. Does he agree that it is crucial that our Government and our Minister undertake discussions with UNESCO, for example, to ensure that any artificial intelligence that is used within UK industry is assessed, so as to deal with the unwanted harms as well as the vulnerabilities to attack to ensure that AI actors are qualified to deal with such exposure to cyber-attacks? In other words, the Government must be over this issue in its entirety.
The hon. Member is of course right. In the first part of his intervention, he alluded to the risk I have just been referring to, where machines can automatically create, for example, novel cyber-risks in a way that the humans who created those systems might not fully understand and that are accessible to a wider range of actors. That is a high risk that is either increasingly real today or already active and available to those who wish to do us harm.
The question, therefore, is what should we in Parliament do about it? Of course, we want Britain to continue to be one of the best places in the world to research and innovate, and to start up and scale up a tech business. We should also want to transform our public services and businesses using that technology, but we must—absolutely must—make sure that we create the conditions for this to be achieved in a safe, ethical and just way, and we must reassure ourselves that we have created those conditions before any of these high-risk outcomes take place, not in the aftermath of a tragedy or scandal.
That is why I have been so pleased to work with UNESCO, as the hon. Gentleman mentioned, and Assistant Director-General Gabriela Ramos over the past few years, on the UNESCO AI ethics framework. This framework, the first global standard on AI ethics, was adopted by all 193 member states of the United Nations in 2021, including the United Kingdom. Its grounding in human rights, together with its actionable policies, readiness assessment methodology and ethical impact assessments, provides the basis for the safe and ethical adoption of AI across countries. I therefore ask the Minister, in summing up, to update the House on how the Government are implementing their commitments from the 2021 signing of the AI ethics framework.
As crucial as the UNESCO AI ethics framework is, in my view the speed of innovation requires two more things from Government: first, enhanced intergovernmental co-ordination, and secondly, innovation in how we in this House pass laws to keep up with the speed of innovation. I will take each in turn.
First, on enhanced intergovernmental co-ordination, I wrote to the Government at the end of April calling on Ministers to play more of a convening role on the safe and secure testing of the most advanced AI, primarily with Canada, the United States and—in so far as it can be achieved—China, because those countries, alongside our own, are where the most cutting-edge companies are innovating in this space. I was therefore pleased to see in the Hiroshima communiqué from last week’s G7 a commitment to
“identify potential gaps and fragmentation in global technology governance”.
As a parliamentary lead at the OECD global parliamentary network on AI, I also welcome the request that the OECD and the Global Partnership on Artificial Intelligence establish the Hiroshima AI process, specifically in respect of generative AI, by the end of this year.
I question, however, whether these existing fora can build the physical or digital intergovernmental facilities required for the safe and secure testing of advanced AI that some have called for, and whether such processes will adequately supervise or have oversight of what is taking place in start-ups or within multinational technology companies. I therefore ask the Minister to address these issues and to provide further detail about the Hiroshima AI process and Britain’s contribution to the OECD and GPAI, which I understand has not been as good as it should have been in recent years.
I also welcome the engagement of the United Nations’ tech envoy on this issue and look forward to meeting him at the AI for Good summit in Geneva in a few weeks’ time. In advance of that, if the Minister is able to give it, I would welcome his assessment of how the British Government and our diplomats at the UN are engaging with the Office of the Secretary-General’s Envoy on Technology, and perhaps of how they wish to change that in the future.
Secondly, I want to address the domestic situation here in the UK following the recent publication of the UK’s AI strategy. I completely agree with the Government that we do not want to regulate to the extent where the UK is no longer a destination of choice for businesses to research and innovate, and to start up and scale up their business. An innovation-led approach is the right approach. I also agree that, where we do regulate, that regulation must be flexible and nimble to at least try to keep up with the pace of innovation. We only have to look at the Online Safety Bill to learn how slow we can be in this place at legislating, and to see that by the time we do, the world has already moved on.
Where I disagree is that, as I understand it, Ministers have decided that an innovation-led approach to regulation means that no new legislation is required. Instead, existing regulators, some with the capacity and expertise required, but most without, must publish guidance. That approach feels incomplete to me. The European Union has taken a risk-based approach to regulation, which is similar to the way I described high, medium and low-risk applications earlier. However, we have decided that no further legislative work is required while, as I pointed out on Second Reading of the Data Protection and Digital Information (No. 2) Bill, we are deregulating in other areas, with consequences for the application of consumer and privacy law as it relates to AI. Surely, we in this House can find a way to innovate in order to draft legislation, ensure effective oversight and build flexibility for regulatory enforcement in a better way than we currently do. The current approach is not fit for purpose, and I ask the Minister to confirm whether the agreement at Hiroshima last week changes that position.
Lastly, I have raised my concerns with the Department and the House before about the risk of deepfake videos, photos and audio to our democratic processes. It is a clear and obvious risk, not just in the UK but in the US and the European Union, which also have elections next year. We have all seen the fake picture of the Pope wearing a white puffer jacket, created by artificial intelligence. It was an image that I saw so quickly while scrolling on Twitter that I thought it was real until I stopped to think about it.
Automated political campaign videos, fake images of politicians being arrested, deepfake videos of politicians giving speeches that never happened, and fake audio recordings are already available. While they may not all be of perfect quality just yet, we know how the public respond to breaking news cycles on social media. Many of us look at the headlines or the fake images for a split second, register that something has happened, and most of the time assume it to be true. That could have wide-ranging implications for the integrity of our democratic processes. I am awaiting a letter from the Secretary of State, but I am grateful for the response to my written parliamentary question today. I invite the Minister to say more on that issue now, should he be able to do so.
I am conscious that I have covered a wide range of issues, but I hope that illustrates the many and varied questions associated with the regulation of artificial intelligence, from the mundane to the disruptive to the risk to national security. I welcome the work being done by the Chair of the Science, Innovation and Technology Committee on this issue, and I know that other Committees are also considering looking at some of these questions. These issues warrant active and deep consideration in this Parliament, and Britain can provide global leadership in that space. Only today, OpenAI, the creator of ChatGPT, called for a new intergovernmental organisation to have oversight of high-risk AI developments. Would it not be great if that organisation was based in Britain?
If we get this right, we can take the path to modernity and create a modern Britain that delivers for the British people, is equipped for the future, and helps shape the world in our interests. If we get it wrong, or if we pick the path to continuity, Britain will suffer further decline and become even less in control of its future. Mr Deputy Speaker, I pick the path to modernity.