Artificial Intelligence

Debate between Dawn Butler and Jo Gideon
Thursday 29th June 2023

Commons Chamber
Jo Gideon (Stoke-on-Trent Central) (Con)

It is a pleasure to follow the hon. Member for Brent Central (Dawn Butler). I join everyone in congratulating my hon. Friend the Member for Boston and Skegness (Matt Warman) on securing this important debate.

Everybody is talking about artificial intelligence, which is everywhere. An article in The Sentinel, Stoke’s local paper, recently caught my eye. Last week, the Home Secretary visited my constituency to open a Home Office facility in Hanley, a development providing more than 500 new jobs in Stoke-on-Trent. The article reflected on the visit and, amusingly, compared the Home Secretary’s responses to questions posed by the local media with the responses from an AI. Specifically, the Home Secretary was asked whether Stoke-on-Trent had taken more than its fair share of asylum seekers through the asylum dispersal scheme, and about the measures she is taking to ensure that asylum seekers are accommodated more evenly across the country. She replied:

“The new Home Office site is a vote of confidence in Stoke-on-Trent... They will be helping to bring down the asylum backlog and process applications more quickly.”

The same question was posed to ChatGPT, which was asked to respond as if it were the Home Secretary. The AI responded:

“I acknowledge the city has indeed taken on a significant number of asylum seekers. This kind of uneven distribution can place stress on local resources and create tension within communities. It is clear we need a more balanced approach that ensures all regions share responsibility and benefits associated with welcoming those in need.”

The AI also referred to reviewing the asylum dispersal scheme, strengthening collaboration with local authorities, infrastructure development and the importance of public awareness and engagement.

We all know what it is like to be on the receiving end of media questions, and a simple and straightforward answer is not always readily available. I suppose the AI’s response offers more detail but, unsurprisingly, it does not tell us anything new. It is, after all, limited by the information that is currently on the internet when formulating its answers. Thankfully, AI is not yet given to making things up; we must hope that it never is, but that is one of the big debates.

This raises the fundamental question on this topic: what is truth? We must develop a robust ethical framework for artificial intelligence. The UK should be commended for embracing an entrepreneurial and innovative approach to artificial intelligence. We know that over-regulation stifles creativity and all the good things that AI has to offer. However, AI has become consumer-focused and increasingly accessible to people without technical expertise, and our regulatory stance must reflect that shift. Although there should be a departure from national regulatory micromanagement, the Government have a role to play in protecting the public against potential online harms. That cannot be left to self-regulation by individual companies.

Let us also remember that artificial intelligence operates within a global space. We cannot regulate the companies that are developing this technology if they are based in another nation. This is a complicated space in which to navigate and create safeguards.

Balancing those concerns is increasingly complex and challenging, and conversations such as this must help us to recognise that regulation is not impossible and that it is incredibly important to get it right. For example, when the tax authorities in the Netherlands employed an AI tool to detect potential childcare benefit fraud, it made mistakes: innocent families faced financial ruin and thousands of children were placed in state custody on the strength of wrongful accusations. When the victims tried to challenge the decisions, they were told that officials could not access the algorithmic inputs, so they were unable to establish how the decisions had been made. That underlines the importance of checks and balances.

Dawn Butler

The hon. Lady is absolutely right on these concerns, especially as regards the Home Office. Big Brother Watch’s “Biometric Britain” report spoke about how much money the Home Office is paying to companies, but we do not know who they are. If we do not know who those companies are, we cannot know how they gather, develop and use their data. Does she think it is important that we know who is getting money for what?

Jo Gideon

The hon. Lady makes a good point. Clearly, that is a central part of this debate: transparency is essential. The Government’s current plans, set out in the AI White Paper, do not place any new obligations on public bodies to be transparent about their use of AI; to make sure that their AI tools meet accuracy and non-discrimination standards, as she rightly said; or to ensure that there are proper mechanisms in place for challenge or redress when AI decisions go wrong. The White Paper proposes a “test and learn” approach to regulation, but we must also be proactive. Technology is changing rapidly while policy lags behind, and once AI is beyond our control, implementing safeguards becomes impracticable. We cannot afford to wait and see how its use might cause harm and undermine trust in our institutions.

While still encouraging sensible innovation, we should also learn from international experiences. We must encourage transparency and put in place the proper protections to avoid damage. Let us consider the financial sector, where banks traditionally analyse credit ratings and histories when deciding who to lend money to. I have recently been working with groups such as Burnley Savings and Loans, which manually underwrites all loans and assesses the risk of each loan by studying the business models and repayment plans of its customers. Would it be right to use AI to make such decisions? If we enter a world where there is no scope for gut feeling, human empathy and intuition, do we risk impoverishing our society? We need to be careful and consider how we want to use AI, being ethical and thoughtful, and remaining in control, rather than rolling it out wherever possible. We must strike the right balance.

Research indicates that AI and automation are most useful when complemented by human roles. The media can be negative about AI’s impact, leading to a general fear that people will lose their jobs as a result of its growth. However, historically, new technology has also led to new careers that were not initially apparent. It has been suggested that the impact of AI on the workplace could rival that of the industrial revolution. So the Government must equip the workforce of the future through skills forecasting and promoting education in STEM—science, technology, engineering and maths.

Furthermore, we must remain competitive in AI on the global stage, ensuring agility and adaptability, in order to give future generations the best chances. In conjunction with the all-party group on youth affairs, the YMCA has conducted polling on how young people feel about the future and the potential impact of AI on their careers. The results, which will be announced next month, suggest that AI could not only lead to significant job displacement but also provide opportunities for those from non-traditional backgrounds. More information on skills and demand will help young people to identify their career choices and will support industries and businesses in preparing for the impact of AI.

I am pleased that the Department for Education has already launched a consultation on AI education, which is open until the end of August. Following that, we should work hard to ensure that schools and universities can quickly adapt to the challenges of AI. Cross-departmental discussion that brings together AI experts and educators is important, both to ensure that the UK is at the cutting edge of AI developments and to help younger generations adapt to them.

AI is hugely powerful and possesses immense potential. ChatGPT has recently caught everybody’s attention; it can create convincing stories, news articles and responses like the one I shared. But the underlying technology has been in use for years and, right now, we are not keeping up. We need to be quicker to adapt to change, to monitor closely and stay alert to potential dangers, and to step in when and where necessary, in order to ensure the safe and ethical development of AI for the future of our society and the welfare of future generations.