Darren Jones debates involving the Department for Science, Innovation & Technology

Artificial Intelligence

Thursday 29th June 2023


Commons Chamber
Darren Jones (Bristol North West) (Lab)

Thank you, Mr Deputy Speaker. I am Chair of the Business and Trade Committee, but if there is an AI Committee I am certainly interested in serving on it. I declare my interest, as set out in the Register of Members’ Financial Interests, and I thank the hon. Member for Boston and Skegness (Matt Warman) and the Backbench Business Committee for organising and agreeing to this important debate.

I will make the case for the Government to be more involved in the technology revolution, and explain what will happen if we leave it purely to the market. It is a case for a technology revolution that works in the interests of the British people, not against our interests. In my debate on artificial intelligence a few weeks ago, I painted a picture of the type of country Britain can become if we shape the technology revolution in our interests. It is a country where workers are better paid, have better work and more time off. It is a country where public servants have more time to serve the public, with better access and outcomes from our public services, at reduced cost to the taxpayer. It is a country where the technological revolution is seen as an exciting opportunity for workers and businesses alike—an opportunity to learn new things, improve the quality of our work, and create an economy that is successful, sustainable, and strong.

I also warned the House about the risks of the technology revolution if we merely allow ourselves to be shaped by it. That is a country where technology is put upon people, instead of being developed with them, and where productivity gains result in economic growth and higher profits, but leave workers behind with reduced hours or no job at all. It is where our public services remain in the analogue age and continue to fail, with increased provision from the private sector only for those who can afford it. It is a world in which the pace of innovation races ahead of society, creatively destroying the livelihoods of many millions of people, and where other countries leap ahead of our own, as we struggle to seize the economic opportunities of the technology revolution for our own economy and, through exports, for others.

The good news is that we are only really at the start of that journey, and we can shape the technology revolution in our interests if we choose to do so. But that means acting now. It means remembering, for all our discussions about artificial intelligence and computers, that we serve the people. It means being honest about the big questions that we do not yet have answers to. It is on some of those big questions that I will focus my remarks. That is not because I have fully formed answers to all of them at this stage, but because I think it important to put those big questions on the public record in this Parliament.

The big questions that I wish to address are these: how do we maintain a thriving, innovative economy for the technology sector; how can we avoid the risk of a new age of inequality; how can we guarantee the availability of work for people across the country; and how can we balance the power that workers have, and their access to training and skills? Fundamental to all those issues is the role and capacity of the state to support people in the transition.

We will all agree that creating a thriving, innovative economy is a good idea, and we all want Britain to be the go-to destination for investment, research and innovation. We all want the British people, wherever they are from and from whatever background, to know that if they have an idea, they can turn it into a successful business and benefit from it. As the hon. Member for Boston and Skegness alluded to, that means getting the balance right between regulation and economic opportunity, and creating the services that will support people in that journey. Ultimately, it means protecting the United Kingdom’s status as a great place to invest, start, and scale up technology businesses.

Although we are in a relatively strong position today, we risk falling behind quickly if we do not pay attention. In that context, the risk of a new age of inequality is perhaps obvious. If the technology revolution is an extractive process, where big tech takes over the work currently done by humans and restricts the access to markets needed by new companies, power and wealth will be taken from workers and concentrated in the already powerful, wealthy and largely American big-tech companies. I say that not because I am anti-American or indeed anti-big tech, but because it is our job to have Britain’s interest at the front of our minds.

Will big tech pick up the tab for universal credit payments to workers who have been made redundant? Will it pay for our public services in a situation where fewer people are in work paying less tax? Of course not. So we must shape this process in the interests of the British people. That means creating inclusive economic opportunities so that everybody can benefit. For example, where technology improves productivity and profits, workers should benefit from that with better pay and fewer days at work. Where workers come up with innovative ideas on how to use artificial intelligence in their workplace, they should be supported to protect their intellectual property and start their own business.

The availability of work is a more difficult question, and it underpins the risk of a new age of inequality. For many workers, artificial intelligence will replace the mundane and the routine. It can result in human workers being left with more interesting and meaningful work to do themselves. But if the productivity gains are so significant, there is conceivably a world in which we need fewer human workers than we have today. That could result in a four-day week, or even fewer days than that, with work being available still for the majority of people. The technology revolution will clearly create new jobs—a comfort provided to us by the history of previous industrial revolutions. However, that raises two questions, which relate to my next point about the power of workers and their access to training and skills.

There are too many examples today of technology being put upon workers, not developed with them. That creates a workplace culture that is worried about surveillance, oppression, and the risk of being performance managed or even fired by an algorithm. That must change, not just because it is the right thing to do but because, I believe, it is in the interests of business managers and owners for workers to want to use these new technologies, as opposed to feeling oppressed by them. On training, if someone who is a worker today wants to get ahead of this revolution, where do they turn? Unless they work in a particularly good business, the likelihood is that they have no idea where to go to get access to such training or skill support. Most people cannot just give up their job or go part time to complete a higher education course, so how do we provide access to free, relevant training that workers are entitled to take part in at work? How does the state partner with business to co-create and deliver that in the interests of our country and the economy? The role of the Government in this debate is not about legislation and regulation; it is about the services we provide, the welfare state and the social contract.

That takes me to my next point: the role and capacity of the Government to help people with the technology transition. Do we really think that our public services today are geared towards helping people benefit from what will take place? Do we really believe our welfare system is fit for purpose in helping people who find themselves out of work? Artificial intelligence will not just change the work of low-paid workers, who might just be able to get by on universal credit; it will also affect workers on middle and even higher incomes, including journalists, lawyers, creative sector workers, retail staff, public sector managers and many more. Those workers will have mortgages or rents to pay, and universal credit payments will go nowhere near covering their bills. If a significant number of people in our country find themselves out of work, what will they do? How will the Government respond? The system as it is designed today is not fit for that future.

I raise those questions not because I have easy answers to them, but because those outcomes are likely. The severity of the problem will be dictated by what action we take now to mitigate those risks. In my view, the state and the Government must be prepared and must get themselves into a position to help people with the technology transition. There seems now to be political consensus about the opportunities of the technology revolution, and I welcome that, but the important unanswered question is: how? We cannot stop this technology revolution from happening. As I have said, we either shape it in our interests or face being shaped by it. We can sit by and watch the market develop, adapt and innovate, taking power and wealth away from workers and creating many of the problems I have explained today, leaving the Government and our public services to pick up the pieces, probably without sufficient resources to do so. Alternatively, we can decide today how this technology revolution will roll out across our country.

I was asked the other day whether I was worried that this technology-enabled future would create a world of despair for my children. My answer was that I am actually more worried about the effects of climate change. I say that because we knew about the causes and consequences of climate change in the 1970s, but we did nothing about it. We allowed companies to extract wealth and power and leave behind the damage for the public to pick up. We are now way behind where we need to be, and we are actively failing to turn it around, but with this technology revolution, we have an opportunity in front of us to show the public that a different, more hopeful future is possible for our country—a country filled with opportunity for better work, better pay and better public services. Let us not make the same mistakes as our predecessors in the 1970s, and let us not be trapped in the current debate of doom and despair for our country, even though there are many reasons to feel like that.

Let us seize this opportunity for modernisation and reform, remembering that it is about people and our country. We can put the technology revolution at the heart of our political agenda and our vision for a modern Britain with a strong, successful and sustainable economy. We can have a technology revolution that works in the interests of the British people and a Britain that is upgraded so that it works once again. However, to shape the technology revolution in our interests, that work must start now.

--- Later in debate ---
Dawn Butler (Brent Central) (Lab)

It is a pleasure to follow the Chair of the Science, Innovation and Technology Committee, of which I am a member. I congratulate the hon. Member for Boston and Skegness (Matt Warman) on securing the debate and the Backbench Business Committee on allowing it.

I used to be a computer programmer, so I like tech— I need to put that out there. In this debate on AI, people tend to fall into three categories. I compare those three types of people to some friends that I used to go out partying with. One would get completely wasted; one would just sit in the corner and not want to get involved; and I would be the driver, so I would stay sober but enjoy the party. In this debate I like to think that I am staying sober, enjoying the party and making sure that everyone, myself included, gets home safely.

Darren Jones

Could I invite my hon. Friend to inform the House which of us in this debate is the one who got wasted?

Dawn Butler

I do not think my friend would look kindly on me if I said their name. I always made sure they got home safely—that is the main thing.

My hon. Friend the Member for Streatham (Bell Ribeiro-Addy) told me the other day that she asked ChatGPT to write a letter from her to constituents on the dangers of AI. This is what ChatGPT churned out:

“Dear [Constituent],

Thank you for reaching out to me with your concerns about large language model AI. I share your worries about the potential risks that this technology poses and I appreciate the opportunity to address them.

As you may be aware, large language model AI is an advanced form of artificial intelligence that can generate human-like text by training on vast amounts of data. While this technology has shown tremendous potential in various fields, including language translation, chatbots, and content creation, it also presents significant ethical, social, and legal challenges.

One of the main concerns about large language model AI is its potential to perpetuate biases and discrimination. If the data used to train these models are biased, the AI will learn and replicate those biases in its output, leading to harmful outcomes. Moreover, these models have the ability to generate highly realistic but entirely fake text, which could be used to spread disinformation, manipulate public opinion, and even harm individuals.

Therefore, I believe that the regulation and governance of large language model AI are essential to ensure that it is used ethically and responsibly. The UK government has been actively engaging with the tech industry, academia, and civil society to understand the implications of AI and develop a regulatory framework that promotes innovation while safeguarding human rights and values.”

That, to me, is an illustration of the fact that even AI itself recognises that there are some issues with AI. It shows that we need clear regulation, which we do not quite have at the moment. There is still time for the Government’s White Paper to change that, and I hope that debates of this kind will enable change to happen.

Many Members have referred to the use of AI for medical advances, and quantum computers will certainly enable medicines and medical solutions to be found much more quickly. However, as I said when evidence was being given to the Science, Innovation and Technology Committee, even something as simple as body mass index, which is used in the medical world, is a flawed measurement. The use of BMI in the building of AI will integrate that bias into anything that the AI produces. Members may not be aware that the BMI scale was created not by a doctor but by an astronomer and mathematician in the 1800s. What he was trying to do was identify l’homme moyen—the average man—in statistical terms. The scale was never meant to be used in the medical world in the way that it is. People can be prevented from having certain medical procedures if their BMI is too high. The Committee was given no evidence that we would rule out, or mitigate, a flawed system such as BMI in the medical profession and the medical world. We should be worried about this, because in 10 or 20 years’ time it will be too late to explain that BMI was always discriminatory against women, Asian men and black people. It is important for us to get this right now.

I recognise the huge benefits that AI can have, but I want to stress the need to stay sober and recognise the huge risks as well. When we ask certain organisations where they get their data from, the response is very opaque: they do not tell us where they are getting their data from. I understand that some of them scrape mass data from sites such as Reddit, which is not really where people would go to become informed on many things.

If we do not take this seriously, we will be automating discrimination. It will become so easy just to accept what the system is telling us, and people who are already marginalised will become further marginalised. Many, if not most, AI-powered systems have been shown to contain bias, whether against people of colour, women, people with disabilities or those with other protected characteristics. For instance, in the case of passport applications, the system keeps on saying that a person’s eyes are closed when in fact they have a disability. We must ensure that we measure the impact on the public’s rights and freedoms alongside the advances in AI. We cannot become too carried away—or drunk—with all the benefits, without thinking about everything else.

At the beginning, I thought it reasonable for the Government to say, “We will just expand legislation that we already have,” but when the Committee was taking evidence, I realised that we need to go a great deal further—that we need something like a digital Bill of Rights so that people understand and know their rights, and so that those rights are protected. At the moment, that is not the case.

There was a really stark example when we heard some information in regard to musicians, music and our voices. Our voices are currently not protected, so with the advancements of deepfake, anybody in this House can have their voice attached to something using deepfake and we would have no legal recourse, because at the moment our voices are not protected. I believe that we need a digital Bill of Rights that would outlaw the most dangerous uses of AI, which should have no place in a real democracy.

The Government should commit to strengthening the rights of the public so that they know what is AI-generated or whether facial recognition—the digital imprint of their face—is being used in any way. We know, for instance, that the Met police have on file millions of people’s images—innocent people—that should not be there. Those images should be taken off the police database. If an innocent person’s face is on the database and, at some point, that is put on a watch list, the domino effect means that they could be accused of doing something they have not done.

The UK’s approach to AI currently diverges from that of our closest trading partners, and I find that quite strange. It is not a good thing, and it creates an apparent trade-off between progress and safety. I think we should always err on the side of safety and ethics. Progress will always happen; we cannot stop progress. Companies will always invest in AI. It is the future, so we do not have to worry about that—people will run away with that. What we have to do is ensure that we protect people’s safety, because otherwise, instead of being industry leaders in the UK, we will be known as the country that has shoddy or poor practices. Nobody really wants that.

There are countries that are outlawing how facial recognition is used, for instance, but we are not doing that in the UK, so we are increasingly looking like the outlier in this discussion and protection around AI. Government’s first job is to protect their citizens, so we should protect citizens now from the dangers of AI.

Harms are already arising from AI. The Government’s recently published White Paper takes the view that strong, clear protections are simply not needed. I think the Government are wrong on that. Strong, clear protections are most definitely needed—and needed now. Even if the Government just catch up with what is happening in Europe and the US, that would be more than we are doing at the moment. We need new, legally binding regulations.

The White Paper currently has plans to water down data rights and data protection. The Data Protection and Digital Information (No. 2) Bill paints an alarming picture. It will redefine what counts as personal data. All these things have been put in place piecemeal to ensure that personal data is protected. If we lower the protection in the definition of what is personal data, that will mean that any company can use our personal data for anything it wants and we will have very limited recourse to stop that. At the end of the day, our personal data is ultimately what powers many AI systems, and it will be left ripe for exploitation and abuse. The proposals are woefully inadequate.

The scale of the challenge is vast, but instead of reining in this technology, the Government’s approach is to let it off the leash, and that is problematic. When we received evidence from a representative from the Met police, she said that she has nothing to hide so what is the problem, for instance, in having the fingerprint, if you like, of her face everywhere that she goes? I am sure that we all have either curtains or blinds in our houses. If we are not doing anything illegal, why have curtains or blinds? Why not just let everyone look into our house? Most abuse happens in the home so, by the same argument, surely allowing everyone to look into each other’s houses would eliminate a lot of abuse.

In our country we have the right to privacy, and people should have that right. Our digital fingerprints should not be taken without our consent, as we have policing by consent. The Met’s use of live facial recognition and retrospective facial recognition is worrying. I had a meeting with Mark Rowley the other day and, to be honest, he did not really understand the implications, which is a worry.

Like many people, I could easily get carried away and get drunk with this AI debate, but I am the driver. I need to stay sober to make sure everyone gets home safely.

National AI Strategy and UNESCO AI Ethics Framework

Monday 22nd May 2023


Commons Chamber
Darren Jones (Bristol North West) (Lab)

I am grateful, Mr Deputy Speaker, that this Adjournment debate on the regulation of artificial intelligence has been granted. I declare my interest as set out in the Register of Members’ Financial Interests.

Britain is at a turning point. Having left the European Union, irrespective of what people thought about that decision, we have decided to go it alone. This new chapter in the long history of our great nation is starting to unfold, and we have a number of possible destinations ahead. We stand here today as a country with great challenges and an identity crisis: what is modern Britain to become? Our economy is, at best, sluggish; at worst, it is in decline. Our public services are unaffordable, inefficient and not delivering the quality of service the public should expect. People see and feel those issues right across the country: in their pay packets, in the unfilled vacancies at work, and in their local schools, GP surgeries, dentists, hospitals and high streets. All of this is taking place in a quickly changing world in which Britain is losing influence and control, and for hostile actors who wish Britain—or the west more broadly—harm, those ruptures in the social contract present an opportunity to exploit.

Having left the European Union, I see two destinations ahead of us: we can either keep doing what we are doing, or modernise our country. If we take the route to continuity, in my view we will continue to decline. There will be fewer people in work, earning less than they should be and paying less tax as a consequence. There will be fewer businesses investing, meaning lower profits and, again, lower taxes. Income will decline for the Treasury, but with no desire to increase the national debt for day-to-day spending, that will force us to take some very difficult decisions. It will be a world in which Britain is shaped by the world, instead of our shaping it in our interests.

Alternatively, we can decide to take the route to modernity, where workers co-create technology solutions at work to help them be more productive, with higher pay as a consequence; where businesses invest in automation and innovation, driving profits and tax payments to the Treasury; where the Government take seriously the need for reform and modernisation of the public sector, using technology to individualise and improve public services while reducing the cost of those services; and where we equip workers and public servants with the skills and training to seize the opportunities of that new economy. It will be a modern, innovative Britain with a modern, highly effective public sector, providing leadership in the world by leveraging our strengths and our ability to convene and influence our partners.

I paint those two pictures—those two destinations: continuity or modernity—for a reason. The former, the route to continuity, fails to seize the opportunities that technological reforms present us with, but the latter, the route to modernity, is built on the foundations of that new technological revolution.

This debate this evening is about artificial intelligence. To be clear, that is computers and servers, not robots. Artificial intelligence means, according to Google,

“computers and machines that can reason, learn, and act in such a way that would normally require human intelligence or that involves data whose scale exceeds what humans can analyse.”

These AI machines can be categorised in four different ways. First, reactive machines have a limited application based on pre-programmed rules. These machines do not use memory or learn themselves. IBM’s Deep Blue machine, which beat Garry Kasparov at chess in 1997, is an example. Secondly, limited memory machines use memory to learn over time by being trained using what is known as a neural network, which is a system of artificial neurons based on the human brain. These AI machines are the ones we are used to using today. Thirdly, theory of mind machines can emulate the human mind and take decisions, recognising and remembering emotions and reacting in social situations like a human would. Some argue that these machines do not yet exist, but others argue that AI such as ChatGPT, which can interact with a human in a humanlike way, shows that we are on the cusp of a theory of mind machine existing. Fourthly, self-aware machines are machines that are aware of their own existence and have the same or better capabilities than those of a human. Thankfully, as far as I am aware, those machines do not exist today.

That all might be interesting for someone who is into tech, but why am I putting it on the public record today? I am doing so because there are a number of risks that we as a Parliament and the Government must better understand, anticipate and mitigate. These are the perils on our journey to continuity or modernity. Basic artificial intelligence, which helps us to find things on the internet or to book a restaurant, is not very interesting. The risk is low. More advanced artificial intelligence, which can perform the same tasks as a junior solicitor, a journalist or a student who is supposed to complete their homework or exam without the assistance of AI, presents a problem. We already see the problems faced by workers who have technology thrust upon them, instead of being consulted about its use. The consequences are real today and carry medium risks—they are disruptive.

Then we have the national security or human rights-level risks, such as live facial recognition technologies that inaccurately identify someone as a criminal, or a large language model that can help a terrorist understand how to build a bomb or create a novel cyber-security risk, or systems that can generate deepfake videos, photos or audio of politicians saying or doing things that are not true to interfere with elections or to create fake hostage recordings of someone’s children.

Jim Shannon (Strangford) (DUP)

I commend the hon. Gentleman on bringing this debate forward. It is a very deep subject for the Adjournment debate, but it is one that I believe is important. Ethics must be accounted for to ensure that any industries using AI are kept safe. One issue that could become increasingly prominent is the risk of cyber-threats, which he referred to, and hacking, which not even humans can sometimes prevent. Does he agree that it is crucial that our Government and our Minister undertake discussions with UNESCO, for example, to ensure that any artificial intelligence that is used within UK industry is assessed, so as to deal with the unwanted harms as well as the vulnerabilities to attack to ensure that AI actors are qualified to deal with such exposure to cyber-attacks? In other words, the Government must be over this issue in its entirety.

Darren Jones

The hon. Member is of course right. In the first part of his intervention, he alluded to the risk I have just been referring to, where machines can automatically create, for example, novel cyber-risks in a way that the humans who created those systems might not fully understand and that are accessible to a wider range of actors. That is a high risk that is increasingly real today, and available to those who wish to do us harm.

The question, therefore, is what should we in Parliament do about it? Of course, we want Britain to continue to be one of the best places in the world to research and innovate, and to start up and scale up a tech business. We should also want to transform our public services and businesses using that technology, but we must—absolutely must—make sure that we create the conditions for this to be achieved in a safe, ethical and just way, and we must reassure ourselves that we have created those conditions before any of these high-risk outcomes take place, not in the aftermath of a tragedy or scandal.

That is why I have been so pleased to work with UNESCO, as the hon. Gentleman mentioned, and assistant director general Gabriela Ramos over the past few years, on the UNESCO AI ethics framework. This framework, the first global standard on AI ethics, was adopted by all 193 member states of the United Nations in 2021, including the United Kingdom. Its grounding in human rights, together with its actionable policies, readiness assessment methodology and ethical impact assessments, provides the basis for the safe and ethical adoption of AI across countries. I therefore ask the Minister, in summing up, to update the House on how the Government are implementing their commitments from the 2021 signing of the AI ethics framework.

As crucial as the UNESCO AI ethics framework is, in my view the speed of innovation requires two more things from Government: first, enhanced intergovernmental co-ordination, and secondly, innovation in how we in this House pass laws to keep up with the speed of innovation. I will take each in turn.

First, on enhanced intergovernmental co-ordination, I wrote to the Government at the end of April calling on Ministers to play more of a convening role on the safe and secure testing of the most advanced AI, primarily with Canada, the United States and—in so far as it can be achieved—China, because those countries, alongside our own, are where the most cutting-edge companies are innovating in this space. I was therefore pleased to see in the Hiroshima communiqué from last week’s G7 a commitment to

“identify potential gaps and fragmentation in global technology governance”.

As a parliamentary lead at the OECD global parliamentary network on AI, I also welcome the request that the OECD and the Global Partnership on Artificial Intelligence establish the Hiroshima AI process, specifically in respect of generative AI, by the end of this year.

I question, however, whether these existing fora can build the physical or digital intergovernmental facilities required for the safe and secure testing of advanced AI that some have called for, and whether such processes will adequately supervise or have oversight of what is taking place in start-ups or within multinational technology companies. I therefore ask the Minister to address these issues and to provide further detail about the Hiroshima AI process and Britain’s contribution to the OECD and GPAI, which I understand has not been as good as it should have been in recent years.

I also welcome the engagement of the United Nations’ tech envoy on this issue and look forward to meeting him at the AI for Good summit in Geneva in a few weeks’ time. In advance of that, if the Minister is able to give it, I would welcome his assessment of how the British Government and our diplomats at the UN are engaging with the Office of the Secretary-General’s Envoy on Technology, and perhaps of how they wish to change that in the future.

Secondly, I want to address the domestic situation here in the UK following the recent publication of the UK’s AI strategy. I completely agree with the Government that we do not want to regulate to the extent where the UK is no longer a destination of choice for businesses to research and innovate, and to start up and scale up their business. An innovation-led approach is the right approach. I also agree that, where we do regulate, that regulation must be flexible and nimble to at least try to keep up with the pace of innovation. We only have to look at the Online Safety Bill to learn how slow we can be in this place at legislating, and to see that by the time we do, the world has already moved on.

Where I disagree is that, as I understand it, Ministers have decided that an innovation-led approach to regulation means that no new legislation is required. Instead, existing regulators—some with the capacity and expertise required, but most without—must publish guidance. That approach feels incomplete to me. The European Union has taken a risk-based approach to regulation, which is similar to the way I described high, medium and low-risk applications earlier. We, however, have decided that no further legislative work is required while, as I pointed out on Second Reading of the Data Protection and Digital Information (No. 2) Bill, we are deregulating in other areas with consequences for the application of consumer and privacy law as it relates to AI. Surely, we in this House can find a way to innovate in order to draft legislation, ensure effective oversight and build flexibility for regulatory enforcement in a better way than we currently do. The current approach is not fit for purpose, and I ask the Minister to confirm whether the agreement at Hiroshima last week changes that position.

Lastly, I have raised my concerns with the Department and the House before about the risk of deepfake videos, photos and audio to our democratic processes. It is a clear and obvious risk, not just in the UK but in the US and the European Union, which also have elections next year. We have all seen the fake picture of the Pope wearing a white puffer jacket, created by artificial intelligence. It was an image that I saw so quickly whilst scrolling on Twitter that I thought it was real until I stopped to think about it.

Automated political campaign videos, fake images of politicians being arrested, deepfake videos of politicians giving speeches that never happened, and fake audio recordings are already available. While they may not all be of perfect quality just yet, we know how the public respond to breaking news cycles on social media. Many of us look at the headlines or the fake images for a split second, register that something has happened, and most of the time assume it to be true. That could have wide-ranging implications for the integrity of our democratic processes. I am awaiting a letter from the Secretary of State, but I am grateful for the response to my written parliamentary question today. I invite the Minister to say more on that issue now, should he be able to do so.

I am conscious that I have covered a wide range of issues, but I hope that illustrates the many and varied questions associated with the regulation of artificial intelligence, from the mundane to the disruptive to the risk to national security. I welcome the work being done by the Chair of the Science, Innovation and Technology Committee on this issue, and I know that other Committees are also considering looking at some of these questions. These issues warrant active and deep consideration in this Parliament, and Britain can provide global leadership in that space. Only today, OpenAI, the creator of ChatGPT, called for a new intergovernmental organisation to have oversight of high-risk AI developments. Would it not be great if that organisation was based in Britain?

If we get this right, we can take the path to modernity and create a modern Britain that delivers for the British people, is equipped for the future, and helps shape the world in our interests. If we get it wrong, or if we pick the path to continuity, Britain will suffer further decline and become even less in control of its future. Mr Deputy Speaker, I pick the path to modernity.

Oral Answers to Questions

Wednesday 3rd May 2023


Commons Chamber
Chloe Smith

I recognise the profound experience from which my hon. Friend speaks. We also recognise that many technologies can pose a risk when in the wrong hands. The UK is a global leader in AI, with a strategic advantage that places us at the forefront of these developments. Through UK leadership—at the OECD, the G7, the Council of Europe and more—we are promoting our vision for a global ecosystem that balances innovation and the use of AI, underpinned by our shared values of freedom, fairness and democracy. Our approach will be proportionate, pro-innovative and adaptable. Meanwhile, the integrated review refresh recognises the challenges that are posed by China.

Darren Jones (Bristol North West) (Lab)

With elections under way and a general election due next year, people are rightly concerned about the fake videos, images and audio being created by artificial intelligence. Can the Secretary of State confirm to the House what actions her Department is taking to protect the integrity of our democratic processes in that context?

Chloe Smith

I welcome the hon. Gentleman’s involvement, and I look forward to debating these issues with him and others across the House. I can understand his concerns and the anxiety that sits behind his question. We have a fully developed regime of electoral law that already accounts for election offences such as false statements by candidates, but in addition to the existing regulations we are setting out an approach on AI that will look to regulators in different sectors to apply the correct guidance. We will also add a central co-ordinating function that will be able to seek out risks and deal with them flexibly, appropriately and proportionately.

Data Protection and Digital Information (No. 2) Bill

Darren Jones (Bristol North West) (Lab)

I refer the House to my entry in the Register of Members’ Financial Interests.

The Bill has had a curious journey. It started life as the Data Protection and Digital Information Bill, in search of the exciting Brexit opportunities that we were promised, only to have died and then arisen as the Data Protection and Digital Information (No. 2) Bill. In the Bill’s rejuvenated—and, dare I say, less exciting—form, Ministers have rightly clawed back some of the most high-risk proposals of its previous format, recognising, of course, that our freedom from the European Union, at least in respect of data protection, is anything but. We may have left the European Union, but data continues to flow between the EU and the United Kingdom, and that means of course that we must keep the European Commission happy to maintain our adequacy decision. For the most part, the Bill does not therefore represent significant change from the existing GDPR framework. There are some changes to paperwork and the appointment of officers, but nothing radical.

With that settled—at least in my view—the question is this: what is the purpose of this Bill? The Government aim to reduce regulatory burdens on business. To give Ministers credit, according to the independent assessment of the Regulatory Policy Committee, they have adequately set out how that will happen—unlike for other Government Bills in recent weeks. I congratulate the Government on their so-called “co-design” with stakeholders, which other Departments could learn from in drafting legislation. But the challenge in reducing business regulation and co-designing legislation with stakeholders is knowing how much of an influence the largest, most wealthy voices have over the smallest, least influential voices.

In this Bill—and, I suspect, in the competition Bill as it relates to the digital markets unit, and, if rumours are correct, the media Bill—that means the difference between the voice of big tech and the voice of the people. If reports are correct, I share concerns about the current influence of big tech specifically on Downing Street and about the amount of interference by No. 10 in the drafting of legislation in the Department. [Interruption.] Ministers are shaking their heads; I am grateful for the clarification. I am sure that the reporters at Politico are watching.

Research is a good example of a concern in the Bill relating to the balance between big tech and the people. When I was on the pre-legislative committee of the Online Safety Bill—on which I enjoyed working with the hon. Member for Folkestone and Hythe (Damian Collins), who spoke before me—everybody recognised the need for independent academics to have access to data from the social media companies, for example, to help us understand the harms that can come from using social media. The Europeans have progressed that in the EU Digital Services Act, and even the Americans are starting to look at legislation in that area. But in the Bill, Ministers have not only failed to provide this access, but have opted instead to give companies the right to use our data to develop their own products. That means in practice that companies can now use the data they have on us to understand how to improve their products, primarily and presumably so that we use them more or—for companies that rely on advertising income—to increase our exposure to advertising, in order to create more profit for the company.

All that is, we are told, in the name of scientific research. That does not feel quite right to me. Why might Ministers have decided that that was necessary—a public policy priority—or that it is in any way in the interests of our constituents for companies to be able to do corporate research on product design without our explicit consent, instead of giving independent academics the right to do independent research about online harms, for example? The only conclusion I can come to is that it is because Ministers were, in the co-design process, asked by big tech to allow big tech to do that. I am not sure that consumers would have agreed, and that seems to be an example of big tech winning out in the Bill.

The second example relates to consumer rights and the ability of consumers to bring complaints and have them dealt with in a timely manner. Clause 7 allows for unreasonable delays by companies or data controllers, especially those that have the largest quantities of data on consumers. In practice, that once again benefits big tech, which holds the most data. The time that it can take to conclude a complaint under the Bill is remarkably long and will merely act as a disincentive to bringing a complaint in the first place.

It can take up to two months for a consumer or data subject to request access to the data that a company holds on them, then another two months for the company to confirm whether a complaint will be accepted. If a complaint is not accepted, there will then be up to another six months for the Information Commissioner to decide whether the complaint should be accepted, and if the Information Commissioner does decide that, the company then has one more month to provide the data, which was originally asked for nine months earlier. The consumer can then look at the data and put in a complaint to the company. If the company does not deal with the complaint, the earliest that the consumer can complain to the Information Commissioner is month 14, and the Information Commissioner will then have up to six months to resolve the complaint. All in all, that is up to 20 months of emails, forms, processes and decisions from multiple parties for an individual consumer to have a complaint considered and resolved.

That lengthy and complex complaints process also highlights the risks associated with the provisions in the Bill relating to automated decision making. Under current law, fully autonomous decision making is prohibited where it relates to a significant decision, but the Bill relaxes those requirements and ultimately puts the burden on a consumer to successfully bring a complaint against a company taking a decision about them in a wholly automated way. Will an individual consumer really do that when it could take up to 20 months? In the world we live in today, the likes of ChatGPT and other large language models will revolutionise customer service processes. The approach in the Bill seems to fail in regulating for the future and, unfortunately, deals with the past. I ask again: which stakeholder group asked the Government to draft the law in this complex and convoluted way? It certainly was not consumers.

In other regulated sectors and areas of law, such as consumer law, we allow representative bodies to bring what the Americans call “class actions” on behalf of groups of consumers whose rights have been infringed. That process is perfectly normal and exists in UK law today. Experience shows that representative bodies such as Citizens Advice and Which? do not bring class actions easily because it is too financially risky. They therefore bring an action only when there is a clear and significant breach. So why have Ministers not allowed for those powers to exist for breaches of data protection law in the same way that the European Union has, when we are very used to them existing in UK law? Again, that feels like another win for big tech and a loss for consumers. Reducing unnecessary compliance burdens on business is of course welcome, but the Government seem to have forgotten that data protection law is based on a foundation of protecting the consumer, not being helpful to business.

On a different subject, I highlight once again the ongoing creep of powers being taken from Parliament and given to the Executive. We have already heard about the powers for the Secretary of State to make amendments to the legislation without following a full parliamentary process. That keeps happening—not just in this Bill but in other Bills this Session, including the Online Safety Bill. My Committee, which has whole-of-Government scrutiny powers in relation to good regulation, has reprimanded the Department—albeit in its previous form—for the use of those Henry VIII powers. It is disappointing to see them in use again.

The Minister, in response to my hon. Friend the Member for Weaver Vale (Mike Amesbury), said that the Government had enhanced oversight of the Information Commissioner by giving themselves power to direct some of its legitimate interests or decisions, or the content of codes. I politely point out that the Information Commissioner regulates the Government’s use of our data. It seems odd to me that the Government alone are being given enhanced powers to scrutinise the Information Commissioner, and that Parliament has not been given additional oversight; that ought to be included.

The Government have yet to introduce any substantive legislation on biometrics. Biometric data is the most personal type of data, be it about our faces, our fingerprints, our voices or other characteristics that are personal to our bodies. The Bill does not even attempt to bring forward biometric-specific regulation. My private Member’s Bill in the 2019-21 Session—now the Forensic Science Regulator Act 2021—originally contained provisions for a biometrics strategy and associated regulations. At the then Minister’s insistence, I removed those provisions, having been told that the Government were drafting a more wide-ranging biometrics Bill, which we have not seen. That is especially important in the light of the Government’s artificial intelligence White Paper, as lots of AI is driven by biometric data. We have had some debate on the AI White Paper, but it warrants a whole debate, and I hope to secure a Westminster Hall debate on it soon. We need to fully understand the context of the AI White Paper as the Bill progresses through Committee and goes to the other place.

I am conscious that I have had an unusual amount of time, so I will finish by flagging two points, which I hope that the Parliamentary Under-Secretary of State for Science, Innovation and Technology will respond to in his summing-up. The first is the age-appropriate design code. I think that we all agree in this House that children should have more protection online than other users. The age-appropriate design code, which we all welcomed, is based on the foundation of GDPR. There are concerns that the changes in the Bill, including to the rights of the Secretary of State, could undermine the age-appropriate design code. I invite the Minister to reassure us, when he gets to the Dispatch Box, that the Government are absolutely committed to the current form of the age-appropriate design code, despite the changes in the Bill.

The last thing I invite the Minister to comment on is data portability. It will drive competition if companies are forced to allow us to download our data in a way that allows us to upload it to another provider. Say I wanted to move from Twitter to Mastodon; what if I could download my data from Twitter, and upload it to Mastodon? At the moment, none of the companies really allow that, although that was supposed to happen under GDPR. The result is that monopolies maintain their status and competitors struggle to get new customers. Why did the Government not bring forward provision for improved data portability in the Bill? To draw on a thread of my speech, I fear that it may be because that is not in the interests of big tech, though it is in the interests of consumers.

I doubt that I will be on the Bill Committee. I am sorry that I will not be there with colleagues who seem to have already announced that they will be on it, but I am sure that they will all consider the issues that I have raised.