(6 years, 8 months ago)
Westminster Hall
Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.
Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.
This information is provided by Parallel Parliament and does not form part of the official record
They are all on my list. I am glad hon. Members think about them, because we in York have a fantastic history, but York is also the UNESCO city of media arts. It is part of the Creative Cities Network and hosts the Mediale festival. It leads our country in the digital creative sector and has created the first guild of media arts—the first guild for 700 years. It is also home to the digital signalling centre, which is at the heart of the next generation of rail.
The film industry is on our doorstep with Screen Yorkshire. The British film industry is the UK’s fastest growing sector, and Yorkshire leads the way. Our university is at the heart of that.
It is a pleasure to serve under your chairmanship, Ms McDonagh. I congratulate the hon. Member for St Albans (Mrs Main) on securing this important debate.
A couple of months ago I was in this Chamber debating ethics and artificial intelligence, and I suggested a code of ethics for people working in data, perhaps to be named the Lovelace code of ethics. I was delighted, two months later, to see that the Nuffield Foundation recently set up an Ada Lovelace Institute to look into data ethics. That is a think-tank with £5 million of investment, so I have new respect for the power and reach of Westminster Hall debates.
I was also delighted to see the House of Lords report on artificial intelligence on Monday. It is right for Parliament to discuss those new technological frontiers. In fact, they should be at the forefront of our debates. I want to touch briefly on data, accountability, skills and inequality. There is a huge issue about who owns our data. The new general data protection regulation is welcome in helping to give consumers control. When I was Consumer Affairs Minister, a fledgling project called “midata” was all about the principle that people’s data should be their own; if they wanted it from companies, they should be able to get access to it in a machine-readable format, so that it could be used for their benefit.
The world has obviously moved on somewhat in five years, and that was a fledgling effort, but the issue of data as currency will become more important in years to come. The Consumer Rights Act 2015 recognised that data could be treated as consideration: if someone had exchanged their data to get a product, they should still have some consumer rights and protections, for example if the product damaged their equipment. The business models that we are talking about in the tech sector require a greater level of consumer choice and transparency about the transaction that people make when they hand over data. The current model is one where people give their data away willy-nilly for free services, often with little control for the individual. In the future, initiatives such as private data accounts could be a mechanism giving people more control over their data. I am interested not just in whether the public sector can monetise large data sets, but in whether individuals might be in a position to have their own data monetised much more explicitly.
As for accountability, there have been all sorts of scandals, from fake news to online abuse, and the polarisation of debate coming from social media companies. Yet Facebook is only 13 years old, and Twitter, Snapchat and Instagram are all younger, so perhaps it is no surprise that innovation has outstripped regulation in that area. However, those platforms are changing much about society and need to be held to account. Many of those companies have huge monopoly power, and the network effect makes that almost automatic and inevitable for new platforms that are set up, but I do not think the Competition and Markets Authority has yet grappled sufficiently with the issues. The European Commission is perhaps one of the few organisations to have been able properly to stand up to those corporate giants, whether on tax, data issues or competition.
We need to do more about skills, in schools and through retraining. I agree with the hon. Member for Bristol North West (Darren Jones) about diversity in the technology workforce and that situation leading to bizarre decisions, because it is even less representative than most other sectors. I also agree about constraints on skilled workers coming to the UK. That is a problem that I fear will get worse after Brexit. We have just seen the cap for tier 2 visas for skilled workers from outside the European Economic Area and Switzerland reached for an unprecedented fourth month in a row. Until last December, that quota had been reached only once. There is concern about whether companies in the UK can get the skills they need. I declare an interest as a very minor shareholder of a data start-up, Clear Returns, on whose board I served while I was out of Parliament. I can attest, from that experience, to how difficult it is for tech companies to get access to the skills of data scientists and analysts that they need.
Finally—I am conscious of the time, Ms McDonagh—I want to speak about inequality. Inequality in technological skills needs to be addressed, as does inequality in access to broadband in different parts of the country. I am still astonished that a new development in my constituency, which was built in the last few years in Woodilee, does not have adequate broadband. That was entirely predictable, and I have written to Ministers about it. There is also a wider issue of the huge opportunities that technology provides for solving problems in society, and the real risk that that will entrench existing inequalities, particularly economic ones. If we do not do something about it, those with capital to invest in tech companies will be those who reap the rewards. Instead, we should be using automation to take drudgery out of jobs and strenuous heavy lifting out of the care sector, so that we leave more time for humanity and for those job areas to which we as individuals can contribute with creativity and higher skills.
We must also allow people to build more relationships outside work. Given the way that taxation works with the larger, global tech companies, and the way that the benefits will be accrued, I fear that we could risk driving serious increases in inequality, and that those who lose out by losing their jobs will not be compensated in appropriate ways. That risks division in wider society more generally.
I know that we have little time in this debate, so I will bring my remarks to a close, but I hope I have flagged up some key issues that the House will return to when discussing these matters, which I hope we will do more often in future.
I will now call the Front-Bench speakers. If they each speak for eight or nine minutes, that will allow Mrs Main some time to sum up the debate.
(6 years, 11 months ago)
I beg to move,
That this House has considered ethics and artificial intelligence.
It is a pleasure to serve under your chairmanship, Dame Cheryl. I welcome the Minister to her new role, following the reshuffle last week. She leaves what was also a wonderful role in Government—I can say that from personal experience—but I am sure that she will find the challenges of this portfolio interesting and engaging. No doubt she is already getting stuck in.
I would like to start with the story of Tay. Tay was an artificial intelligence Twitter chatbot developed by Microsoft in 2016. She was designed to mimic the language of young Twitter users and to engage and entertain millennials through casual and playful conversation.
“The more you chat with Tay the smarter she gets”,
the company boasted. In reality, Tay was soon corrupted by the Twitter community. Tay began to unleash a torrent of sexist profanity. One user asked,
“Do you support genocide?”,
to which Tay gaily replied, “I do indeed.” Another asked,
“is Ricky Gervais an atheist?”
The reply was,
“ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”.
Those are some of the tamer tweets. Less than 24 hours after her launch, Microsoft closed her account. Reading about it at the time, I found the story of Tay an amusing reminder of the hubris of tech companies. It also reveals something darker: it vividly demonstrates the potential for abuse and misuse of artificial intelligence technologies and the serious moral dilemmas that they present.
I say at the outset that I believe artificial intelligence can be a force for good, if harnessed correctly. It has the potential to change lives, to empower and to drive innovation. In healthcare, the use of AI is already revolutionising the way health professionals diagnose and treat disease. In transport, the rise of autonomous vehicles could drastically reduce the number of road deaths and provide incredible new opportunities for millions of disabled people. In our everyday lives, new AI technologies are streamlining menial tasks, giving us more time in the day for meaningful work, for leisure or for our family and friends. We are on the cusp of something quite extraordinary and we should not aim deliberately to suppress the growth of new AI, but there are pressing moral questions to be answered before we jump head first into AI excitement. It is vital that we address those urgent ethical challenges presented by new technology.
I will focus on four important ethical requirements that should guide our policy making in this area: transparency, accountability, privacy and fairness. I stress that the story of Tay is not an anomaly; it is one example of a growing number of deeply disturbing instances that offer a window into the many and varied ethical challenges posed by advances in AI. How should we react when we hear that an algorithm used by a Florida county court to predict the likelihood of criminals reoffending, and therefore to influence sentencing decisions, was almost twice as likely to wrongly flag black defendants as future criminals?
I congratulate the hon. Lady on this debate; it is a fascinating area and I am grateful to be able to speak. On her last point, I understand that in parts of the United States where that technology is used, there are instances where the judges go one step further and rely on those decisions as reasons to do things. The decision is made on incorrect information in the first instance, and then judges say that because a machine has made that decision, it must be even better than manual intervention.
The hon. Gentleman is quite right to raise that concern, because that goes to the heart of the issue, particularly when risk data is presented as incontrovertible fact and is relied on for the decision. It is absolutely essential that those decisions can be interrogated and understood, and that any bias is identified. That is why ethics must be at the heart of this whole issue, even before systems are developed in the first place.
In addition to the likely reoffending data, there is a female sex robot designed with a “frigid” setting, which is programmed to resist sexual advances. We have heard about a beauty contest judged by robots that did not like the contestants with darker skin. A report by PwC suggests that up to three in 10 jobs in this country could be automated by the early 2030s. We have read about children watching a video on YouTube of Peppa Pig being tortured at the dentist, which had been suggested by the website’s autoplay algorithm. In every one of those cases, we have a right to be concerned. AI systems are making decisions that we find shocking and unethical. Many of us will feel a lack of trust and a loss of control.
On machine learning, a report last year by the Royal Society highlighted a range of concerns among members of the public. Some were worried about the potential for direct harm, from accidents in autonomous vehicles to the misdiagnosis of disease in healthcare. Others were more concerned about potential job losses or the perceived loss of humanity that could result from wider use of machine learning. The importance of public engagement and dialogue was acknowledged by the Minister’s Department in its 2016 report. I would welcome an update from her on the kind of public engagement work she thinks is important with regard to AI.
I will turn to the related considerations of transparency and accountability. When we talk about transparency in the context of AI, what we really mean is that we want to understand how AI systems think and to understand their decision-making processes. We want to avoid situations of “black-boxing”, where we cannot understand, access or explain the decisions that technology makes. In practice, that transparency means several things: it might involve creating logging mechanisms that give us a step-by-step account of the processes involved in the decision making; or it could mean providing greater visibility of data access. I would be interested to hear the Minister’s thoughts on the relative merits of those practices. Either way, transparency is particularly important for those instances when we want to challenge decisions made by AI systems. Transparency informs accountability. If we can see how decisions are made, it is easier for us to understand what has happened and who is responsible when things go wrong.
Increasingly, major companies such as Deutsche Bank and Citigroup are turning to machine learning algorithms to streamline and refine their recruitment processes. Let us suppose that we suspect that an algorithm is biased towards candidates of a particular race and gender. If the decision-making process of the algorithm is opaque, it is hard to even work out whether employment law is being broken—an issue I know will be close to the Minister’s heart. Transparency is crucial when it comes to the accountability of new AI. We must ensure that when things go wrong, people can be held accountable, rather than shrugging and responding that the computer says “don’t know”.
I will try not to intervene too much, but the point about transparency in the process and the decision making relates to the data that is used as an input. It is often the case in these instances that machine learning is simply about correlations and patterns in a wide scheme of data. If that data is not right in the first instance, subjective and inaccurate decisions are created.
I entirely concur; one of the long-standing rules of computer programming is “garbage in, garbage out”. That holds true here. Again, that is why transparency about what goes in is so important. I hope that the Minister will tell us what regulations are being considered to ensure that AI systems are designed in a way that is transparent, so that somebody can be held accountable, and how AI bias can be counteracted.
Increased transparency is crucial, but it is also vital that we put safeguards in place to make sure that that does not come at the cost of people’s privacy or security. Many AI systems have access to large datasets, which may contain confidential personal information or even information that is a matter of national security. Take, for example, an algorithm that is used to analyse medical records: we would not want that data to be accessible arbitrarily by third parties. The Government must be mindful of privacy considerations when tackling transparency, and they must look at ways of strengthening capacity for informed consent when it comes to the use of people’s personal details in AI systems.
We must ensure that AI systems are fair and free from bias. Returning to recruitment, algorithms are trained using historical data to develop a template of characteristics to target. The problem is that historical data itself often reveals pre-existing biases. Just a quarter of FTSE 350 directors are women, and fewer than one in 10 are from an ethnic minority; the majority of leaders are white men. It is therefore easy to see how companies’ use of hiring algorithms trained on past data about the characteristics of their leaders might reinforce existing gender and race imbalances.
The software company Sage has developed a code of practice for ethical AI. Its first principle stresses the need for AI to reflect the diversity of the users it serves. Importantly, that means ensuring that teams responsible for building AI are diverse. We all know that the computer science industry is heavily male dominated, so the people who develop AI systems are mainly men. It is not hard to see how that might have an impact on the fairness of new technology. Members may remember that Apple launched a health app that enabled people to do everything from tracking their inhaler use to tracking how much molybdenum they were getting from their soy beans, but did not allow someone to track their menstrual cycle.
We also need to be clear about who stands to benefit from new AI technology and to think about distributional effects. We want to avoid a situation where power and wealth lie exclusively in the hands of those with access to and understanding of these new technologies.
I congratulate the hon. Lady on securing the debate. It is reassuring that Liberal Democrat and Conservative Members are present to debate this important issue, albeit slightly disappointing that ours are the only parties represented. Will she join me in welcoming the centre for data ethics and innovation, which was announced in the Budget at the end of last year? Does she agree that it is important that whatever measures we take are UK-wide, so that statistics, ethics and the way we use data are standardised—to a very high standard—across the United Kingdom?
The hon. Gentleman, who is a fellow representative from Scotland, pre-empts the next section of my speech.
We need to develop good standards across the whole United Kingdom, but this issue in many ways transcends national boundaries. We must develop international consensus about how to deal with it, and I hope the UK takes a leading role in that. Parliament has started to look at the issue in recent years: the Select Committee on Science and Technology has produced a couple of reports about it, and the new House of Lords Select Committee on Artificial Intelligence is already doing great work and collecting interesting evidence. The Government have perhaps been slow to engage properly with ethical questions, but I have strong hopes that that will change now that the Minister is in post.
I very much welcome the announcement in the Budget of a new centre for data ethics and innovation. That is a good start, albeit long overdue. I found that announcement while reading the Red Book during the Budget debate—it was on page 45—and I even welcomed it in my speech. I am not sure anyone else had noticed it. I would welcome a clear update from the Minister on the expected timeline for that centre to be up and running. Where does she expect it to be based? What about the recruitment of its chair and key members of staff? How does she see it playing a role in advising policy making and engaging with relevant stakeholders?
I am concerned that the major Government-commissioned report, “Growing the artificial intelligence industry in the UK”, which was published in October, entirely omitted ethical questions. It specifically said:
“Resolving ethical and societal questions is beyond the scope and the expertise of this industry-focused review, and could not in any case be resolved in our short time-frame.”
I say very strongly that ethical questions should not be an afterthought. They should not be an add-on or a “nice to have”. Ethical discourse should be properly embedded in policy thinking. It should be a fundamental part of growing the AI industry, and it must therefore be a key job of the centre for data ethics and innovation. The Government have an important role to play, but I hope that the centre will work closely with industry too, because the way that industry tackles this issue is vital.
Regulation is important, and there are probably some gaps in it that we need to fill and get right, but this issue cannot be solved by regulation alone. I am interested in the Minister’s thoughts about that. Every doctor who enters the medical profession must swear the Hippocratic oath. Perhaps a similar code or oath of professional ethics could be developed for people working in AI—let me float the idea that it could be called the Lovelace oath in memory of the mother of modern computing—to ensure that they recognise their responsibility to embed ethics in every decision they take. That needs to become part and parcel of the way industry works.
Before I conclude, let me touch briefly on an issue that is outside the Minister’s brief but is nevertheless important. I am deeply concerned about the potential for lethal autonomous weapons—weapons that can seek and attack targets without human intervention—to cause absolute devastation. The ability for an algorithm to decide who to kill, and the morality of that, should worry us all. I very much hope that the Minister will work closely with her colleagues in the Ministry of Defence. The UK needs to lead discussions with other countries to get international consensus on the production and regulation of such weapons—ideally a consensus that they should be stopped—and to ensure that ethics are considered throughout.
We want the UK to continue to be a world leader in artificial intelligence, but it is vital that we also lead the discussion and set international standards about its ethics, in conjunction with other countries. Technology does not respect international borders; this is a global issue. We should not underestimate the astonishing potential of AI—leading academics are already calling this the fourth industrial revolution—but we must not shirk from addressing the difficult questions. What we are doing is a step in the right direction, but it is not enough. We need to go further, faster. After all, technology is advancing at a speed we have not seen before. We cannot afford to sit back and watch. Ethics must be embedded in the way AI develops, and the United Kingdom should lead the way.
I heartily agree with my hon. Friend. He will be pleased to know that the Department for Business, Energy and Industrial Strategy—my former Department—is working closely with Matthew Taylor to consult on all of his recommendations. The Secretary of State has taken personal responsibility for improving the quality of work. Work should be good and rewarding.
A study from last year suggests that digital technologies including AI can create a net total of 80,000 new jobs annually for a country such as the UK. We want people to be able to capitalise on those opportunities, as my hon. Friend suggested. We already have a resilient and diverse labour market, which has adapted well to automation, creating more, higher paying jobs at low risk of automation. However, as the workplace continues to change, people must be equipped to adapt to it easily. Many roles, rather than being directly replaced, will evolve to incorporate new technologies.
The Minister has mentioned the centre for data ethics. Can she update us on when it is likely to be up and running, what the timetable is for recruiting the chair and so on? It would be helpful to know when we can expect that.
We want to proceed at pace, because it is an important part of our programme of dealing with the ethics of this issue. We plan to consult on the plans for a permanent centre in the next few months, and I will welcome the hon. Lady’s input.
Undeniably, substantial changes lie ahead. Therefore, in terms of enabling people to reskill and take advantage of the changes and opportunities in the workplace, a national retraining scheme will help people. We also have plans to upskill 8,000 computer science teachers and work with industry to set up a new national centre for computing education, with a brief to encourage more girls to take advantage of the new technologies in their learning.
Substantial changes lie ahead and, as we push these new technologies, we will also strive to keep people and businesses sufficiently skilled, adaptable and assured. The measures are in place, and I have taken heart from the hon. Lady’s speech about the importance of these ethical considerations. I assure her that they will be uppermost in our minds as we develop policy.
Question put and agreed to.
(7 years ago)
Commons Chamber
That is a very good point, which I will come back to. The Minister now has advance notice that he needs to be prepared to answer that question, because it is clearly a source of concern.
There is no soft power in Putin’s eyes and, as far as he is concerned, the use of social media to interfere in foreign states is a vital, weaponised tool. The covert interference I referred to is supplemented by more overt attempts to create a media counter-narrative. I am now talking about RT. The RT chief editor, Margarita Simonyan, is on the record comparing RT to the Ministry of Defence, saying in 2008:
“We were fighting the information war against the whole of the Western world”.
She referred to “the information weapon”, which is used in “critical moments”, and said that RT’s task in peacetime is to build an audience, so they can fight the information war better next time. Not surprisingly, therefore, Chatham House and the Henry Jackson Society see RT as a tool of destabilisation from the Kremlin.
Members will know that RT was found in breach by Ofcom in September 2015 for stories about Assad and chemical weapons. However, as I understand it, Ofcom has not always enforced sanctions as and when appropriate. According to the Library, Sputnik has never been found in breach by Ofcom. Ofcom imposed 84 sanctions against 57 broadcasters in the 10 years up to March 2017—RT was not the subject of a sanction during that time—and found broadcasters in breach of the broadcasting code more than 2,500 times.
I am certainly not advocating shutting down RT, and I do not think anyone else is. I just want to ensure that it abides by the broadcasting rules and that appropriate action is taken by Ofcom every time it does not. Is the Minister happy with Ofcom’s actions? Does it consistently pursue RT for breaches in the way he would like? As an aside, I would like Ofcom to be much more active in pursuing a number of other TV channels that are broadcast here, in particular when threats are made to the Ahmadi Muslim community on some of those channels.
No British parliamentarian should be taking money from RT. In fact, I would go one step further and say that, frankly, no British parliamentarian should appear on RT. The only exception to that rule might be if they have complete control and are completely unedited—if they can go on the channel and say what they want, knowing that it will not be chopped, edited and cut by RT. Apart from that, no one here or in the House of Lords should ever appear on that channel. The only time that RT ever contacts me is when I have said something critical about the Government. Well, I am happy to say critical things about the Government on the BBC, but RT is trying to create an agenda that is about attacking the Government at every turn, and I will not facilitate that process.
The next issue is the question of whether the Russians are infiltrating or leaking content from political party systems. Well, we know what they did regarding the Democrats. Incidentally, they also hacked the Republicans, but they only released the information on the Democrats. We also know that they attempted to infiltrate Macron’s team by setting up a number of websites with pseudo-official titles that would email Macron’s members of staff, trying to get them to click on links and provide back-door access to their systems. As I understand it, Macron managed to defeat that, mainly by inserting some fake news into the content that the Russians were trying to access so that the story was demolished because of the inconsistencies within it.
As Members will know, Monsieur Macron had a more aggressive and muscular stance towards Russia than any other party in that French presidential election, and I believe that that is why he was targeted in a way in which the others were not. As I understand it, the other French political parties were targeted, but the Russians were clearly interested in releasing information that related to Macron in particular. Mr Putin has said that these hackers may not be associated with the Government and that they may be “patriotic” hackers. Well, they may be patriotic hackers as far as he is concerned, but one has to suspect that they have the Government’s endorsement, because I am sure that the Russian Government could clamp down on these so-called patriotic hackers if they wanted to do so.
I am trying to make my questions very clear because I know that the officials in the Box can then provide a written answer for the Minister to read out and get on the record straightaway, so I have another easy question for him. Will he consider making UK political parties part of the critical national infrastructure, and what are the implications of taking such a step?
To be able to ascertain the level of threat, we have to assess it accurately, otherwise I risk coming across as a conspiracy theorist. I know that I do already in relation to Brexit, but I do not want to become the person known for conspiracy theories in this place. The difficulty we have is that we do not really know the extent of the activity because, frankly, no one has investigated it properly yet. It is only when that has been done that we will know. I regret that it took so long for the Intelligence and Security Committee to be reconstituted, but I welcome the fact that it has stated that Russia will be a topic that it will focus on. Does the Minister think that the Committee should give priority to the subject? Would he also want the ISC to work effectively with the Electoral Commission so that it can go to places that the Electoral Commission cannot? An ISC inquiry would help us to establish accurately the level of threat.
To pick up on an earlier intervention, we know that Facebook was asked by the Electoral Commission to look at examples of paid ads from Russia, but it was not asked to look at the use of bots or trolls, so the picture we are going to get will, at best, be very incomplete. The response the commission has had—that the Russians apparently spent £7.50 on advertising—does not quite sound right to me.
I congratulate my right hon. Friend on securing the debate. We are not talking just about a few Twitter or Facebook accounts with no picture avatar and 10 followers. The David Jones account had more than 100,000 followers and was listed as one of the most influential Twitter accounts during the last general election. It purports to be from Southampton, yet it tweets exclusively in office hours in a Russian time zone. Surely the social media companies have a greater role to play in identifying fake accounts—which are pretending to be something they are not—for the integrity of the debate we should all enjoy online.