Ethics and Artificial Intelligence Debate
I entirely concur; one of the long-standing rules of computer programming is “garbage in, garbage out”. That holds true here. Again, that is why transparency about what goes in is so important. I hope that the Minister will tell us what regulations are being considered to ensure that AI systems are designed in a way that is transparent, so that somebody can be held accountable, and how AI bias can be counteracted.
Increased transparency is crucial, but it is also vital that we put safeguards in place to make sure that that does not come at the cost of people’s privacy or security. Many AI systems have access to large datasets, which may contain confidential personal information or even information that is a matter of national security. Take, for example, an algorithm that is used to analyse medical records: we would not want that data to be arbitrarily accessible to third parties. The Government must be mindful of privacy considerations when tackling transparency, and they must look at ways of strengthening capacity for informed consent when it comes to the use of people’s personal details in AI systems.
We must ensure that AI systems are fair and free from bias. Returning to recruitment, algorithms are trained using historical data to develop a template of characteristics to target. The problem is that historical data itself often reveals pre-existing biases. Just a quarter of FTSE 350 directors are women, and fewer than one in 10 are from an ethnic minority; the majority of leaders are white men. It is therefore easy to see how companies’ use of hiring algorithms trained on past data about the characteristics of their leaders might reinforce existing gender and race imbalances.
The software company Sage has developed a code of practice for ethical AI. Its first principle stresses the need for AI to reflect the diversity of the users it serves. Importantly, that means ensuring that teams responsible for building AI are diverse. We all know that the computer science industry is heavily male dominated, so the people who develop AI systems are mainly men. It is not hard to see how that might have an impact on the fairness of new technology. Members may remember that Apple launched a health app that enabled people to do everything from tracking their inhaler use to tracking how much molybdenum they were getting from their soy beans, but did not allow someone to track their menstrual cycle.
We also need to be clear about who stands to benefit from new AI technology and to think about distributional effects. We want to avoid a situation where power and wealth lie exclusively in the hands of those with access to and understanding of these new technologies.
I congratulate the hon. Lady on securing the debate. It is reassuring that Liberal Democrat and Conservative Members are present to debate this important issue, albeit slightly disappointing that ours are the only parties represented. Will she join me in welcoming the centre for data ethics and innovation, which was announced in the Budget at the end of last year? Does she agree that it is important that whatever measures we take are UK-wide, so that statistics, ethics and the way we use data are standardised—to a very high standard—across the United Kingdom?
The hon. Gentleman, who is a fellow representative from Scotland, pre-empts the next section of my speech.
We need to develop good standards across the whole United Kingdom, but this issue in many ways transcends national boundaries. We must develop international consensus about how to deal with it, and I hope the UK takes a leading role in that. Parliament has started to look at the issue in recent years: the Select Committee on Science and Technology has produced a couple of reports about it, and the new House of Lords Select Committee on Artificial Intelligence is already doing great work and collecting interesting evidence. The Government have perhaps been slow to engage properly with ethical questions, but I have strong hopes that that will change now that the Minister is in post.
I very much welcome the announcement in the Budget of a new centre for data ethics and innovation. That is a good start, albeit long overdue. I found that announcement while reading the Red Book during the Budget debate—it was on page 45—and I even welcomed it in my speech. I am not sure anyone else had noticed it. I would welcome a clear update from the Minister on the expected timeline for that centre to be up and running. Where does she expect it to be based? What about the recruitment of its chair and key members of staff? How does she see it playing a role in advising policy making and engaging with relevant stakeholders?
I am concerned that the major Government-commissioned report, “Growing the artificial intelligence industry in the UK”, which was published in October, entirely omitted ethical questions. It specifically said:
“Resolving ethical and societal questions is beyond the scope and the expertise of this industry-focused review, and could not in any case be resolved in our short time-frame.”
I say very strongly that ethical questions should not be an afterthought. They should not be an add-on or a “nice to have”. Ethical discourse should be properly embedded in policy thinking. It should be a fundamental part of growing the AI industry, and it must therefore be a key job of the centre for data ethics and innovation. The Government have an important role to play, but I hope that the centre will work closely with industry too, because the way that industry tackles this issue is vital.
Regulation is important, and there are probably some gaps in it that we need to fill and get right, but this issue cannot be solved by regulation alone. I am interested in the Minister’s thoughts about that. Every doctor who enters the medical profession must swear the Hippocratic oath. Perhaps a similar code or oath of professional ethics could be developed for people working in AI—let me float the idea that it could be called the Lovelace oath in memory of the mother of modern computing—to ensure that they recognise their responsibility to embed ethics in every decision they take. That needs to become part and parcel of the way industry works.
Before I conclude, let me touch briefly on an issue that is outside the Minister’s brief but is nevertheless important. I am deeply concerned about the potential for lethal autonomous weapons—weapons that can seek and attack targets without human intervention—to cause absolute devastation. The ability for an algorithm to decide who to kill, and the morality of that, should worry us all. I very much hope that the Minister will work closely with her colleagues in the Ministry of Defence. The UK needs to lead discussions with other countries to get international consensus on the production and regulation of such weapons—ideally a consensus that they should be stopped—and to ensure that ethics are considered throughout.
We want the UK to continue to be a world leader in artificial intelligence, but it is vital that we also lead the discussion and set international standards about its ethics, in conjunction with other countries. Technology does not respect international borders; this is a global issue. We should not underestimate the astonishing potential of AI—leading academics are already calling this the fourth industrial revolution—but we must not shirk from addressing the difficult questions. What we are doing is a step in the right direction, but it is not enough. We need to go further, faster. After all, technology is advancing at a speed we have not seen before. We cannot afford to sit back and watch. Ethics must be embedded in the way AI develops, and the United Kingdom should lead the way.
My hon. Friend touches on some important considerations. There has been a debate in healthcare on how much should be private and how much should be anonymised and shared for the general good, as he outlines. I agree that that discussion needs to involve citizens, business, policy makers and technology specialists.
We will introduce a digital charter, which will underpin the policies and actions needed to drive innovation and growth while making the UK the safest and fairest place to be online. A key pillar of the charter will be the centre for data ethics and innovation, which will look ahead to advise Government and regulators on the best means of stewarding ethical, safe and innovative uses of AI and all data, not just personal data. It will be for the chair of the centre to decide how they should engage with their stakeholders and build a wider discussion, as my hon. Friend suggested is necessary. We expect that they will want to engage with academia, industry, civil society and indeed the wider public to build the future frameworks in which AI technology can thrive and innovate safely.
We may find the solutions to many AI challenges in particular sectors by making sure that, with the right tools, application of the existing rules can keep up, rather than requiring completely new rules just for AI. We all need to identify and understand the ethical and governance challenges posed by uses of such a new data source and decision-making process, now and in the future. We must then determine how best to identify appropriate rules, establish new norms and evolve policy and regulations.
When it comes to AI take-up and adoption, we need senior decision makers in business and the public sector first to understand and then discuss the opportunities and implications of AI. We want to see high-skill, well-paid jobs created, but we also want the benefits of AI, as a group of new general-purpose technologies, to be felt across the whole economy and by citizens in their private lives. The Government are therefore working closely with industry towards that end. As I said earlier, we will establish a new AI council to act as a leadership body and, in partnership with Government, champion adoption across the whole economy. Further support will come from Tech Nation as it establishes a national network of hubs to support such growth.
A highly skilled and diverse workforce is critical to growing AI in the UK. We therefore support the tech talent charter initiative to gain commitment to greater workforce diversity. The hon. Lady explained well in her speech why diversity in the tech workforce is important to the ethical considerations we are debating. As we expand our base of world-class AI experts by investing in 200 new AI PhDs and AI fellowships through the Alan Turing Institute, we will still need to attract the best and brightest people from around the world, so we have doubled the number of exceptional talent visas to 2,000. I will take the point about the need for diversity when it comes to reviewing such applications. All of that will ensure that UK businesses have a workforce ready to shape the coming opportunities.
With regard to the transition, we will see significant adaptation in our labour markets, where our aim should be to provide lifelong learning opportunities to help people adapt to the changing pace of technology, which will bring new jobs and productivity gains. We must hope that those gains will increase employment. We know that some jobs may be displaced, and often for good reasons: dangerous, repetitive or tedious parts of work can now be carried out more quickly, accurately and safely by machines. None the less, human judgment and creativity will still be required to design and manage those machines.
On employment, may I impress on the Minister that in that disruption, the Government should be there to help some of those workers pushed out of employment to retrain and find a new place and role in the economy, keeping up with the pace of technology as it develops?
I heartily agree with my hon. Friend. He will be pleased to know that the Department for Business, Energy and Industrial Strategy—my former Department—is working closely with Matthew Taylor to consult on all of his recommendations. The Secretary of State has taken personal responsibility for improving the quality of work. Work should be good and rewarding.
A study from last year suggests that digital technologies, including AI, can create a net total of 80,000 new jobs annually for a country such as the UK. We want people to be able to capitalise on those opportunities, as my hon. Friend suggested. We already have a resilient and diverse labour market, which has adapted well to automation, creating more, higher-paying jobs at low risk of automation. However, as the workplace continues to change, people must be equipped to adapt to it easily. Many roles, rather than being directly replaced, will evolve to incorporate new technologies.