Artificial Intelligence (Select Committee Report)

Monday 19th November 2018

Lords Chamber
Moved by
Lord Clement-Jones

That this House takes note of the Report from the Select Committee on Artificial Intelligence AI in the UK: ready, willing and able? (HL Paper 100).

Lord Clement-Jones (LD)

My Lords, it was a pleasure and a privilege to chair the Select Committee on Artificial Intelligence. I thank members of the committee who engaged so closely with our subject matter over an intensive nine-month period and achieved such a high degree of unanimity. There were not just the formal sessions but a number of visits and workshops and even a neural network training session, ending with a fair few lively meetings deciding among ourselves what to make of it all.

Despite the limited life of the committee, we have not stopped talking about AI and its implications since, some of us in far-flung corners of the world. I regret that the noble Viscount, Lord Ridley, and the noble Lord, Lord Puttnam, having made such a major contribution to our work, are abroad for this debate.

I place on record my huge thanks to our team of clerks and advisers, without whom this report, which has been recognised as leading-edge nationally and internationally, could not have been written: our clerk, Luke Hussey; Dr Ben Taylor, our policy analyst; Hannah Murdoch, our committee assistant; and Dr Mateja Jamnik, our specialist adviser.

Our conclusions came after nine months of inquiry, consideration of some 225 written submissions of evidence and 22 sessions of fascinating oral testimony. I thank all our witnesses, who gave a great deal of time and commitment to the inquiry. I also thank the Minister who, with the right honourable Matt Hancock, gave extensive oral evidence. Since then, of course, Mr Hancock has been promoted twice. There is clearly a connection.

The context for our report was very much a media background of lurid forecasts of doom and destruction on the one hand and some rather blind optimism on the other. In our conclusions we were certainly not of the school of Elon Musk. On the other hand, we were not of the blind optimist camp. We are fully aware of the risks that the widespread use of AI could raise, but our evidence led us to believe that these risks are avoidable or can be mitigated to reduce their impact.

In considering this, we need to recognise that understanding the implications of AI here and now is important. AI is already with us in our smartphones and in our homes. Our task was,

“to consider the economic, ethical and social implications of advances in artificial intelligence”.

Our 74 recommendations were intended to be practical and to build upon much of the excellent work already being done in the UK, and revolved around a number of threads which run through the report.

The first is that the UK is an excellent place to develop AI and that people are willing to use the technology in their businesses and personal lives. There is no silver bullet, but we identified a range of sensible steps that will keep the UK on the front foot. They include making data more accessible to smaller businesses and asking the Government to establish a growth fund for SMEs through the British Business Bank, so that they can scale up their businesses domestically without having to seek investment from overseas or sell prematurely to a tech major. We said that the Government need to draw up a national policy framework, in lockstep with the industrial strategy, to ensure the co-ordination and successful delivery of AI policy in the UK.

A second thread relates to diversity and inclusion in education and skills, digital understanding, job opportunities, the design of AI and algorithms and the datasets used. In particular, the prejudices of the past must not be unwittingly built into automated systems. We said that the Government should incentivise the development of new approaches to the auditing of datasets used in AI and encourage greater diversity in the training and recruitment of AI specialists.

A third thread relates to equipping people for the future. AI will accelerate the digital disruption in the jobs market. Many jobs or tasks will be enhanced by AI, many will disappear and many new, as yet unknown, jobs will be created. AI will have significant implications for the ways in which society lives and works. Whatever the scale of the disruption, a significant government investment in skills and training is imperative if this disruption is to be navigated successfully and to the benefit of the working population and national productivity growth. Retraining will become a lifelong necessity and initiatives, such as the Government’s national retraining scheme, must become a vital part of our economy. We said that this will need to be developed in partnership with industry, and lessons must be learned from the apprenticeships scheme.

At earlier stages of education, children need to be adequately prepared for working with, and using, AI. For a proportion, this will mean a thorough education in AI-related subjects, requiring adequate resourcing of the computing curriculum and support for teachers. For all children, the basic knowledge and understanding necessary to navigate an AI-driven world will be essential. In particular, we recommended that the ethical design and use of technology becomes an integral part of the curriculum. I should add that our evidence strongly suggested that the skills requirements of the future will be as much creative as scientific.

A fourth thread is that individuals need to have greater personal control over their data and the way in which it is used. We need to get the balance right between maximising the insights that data can provide to improve services and ensuring that privacy is protected. This means using established concepts such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability, Hubs of All Things and data trusts.

AI has the potential to be truly disruptive to business and to the delivery of public services. For example, AI could completely transform our healthcare, both administratively and clinically, if NHS data is labelled, harnessed and curated in the right way. However, it must be done in a way that builds public confidence. Transparency in AI is needed. We recommended that industry, through the new AI council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.

Of particular importance to the committee was the need to avoid data monopolies, particularly by the tech majors. Large companies that have control over vast quantities of data must be prevented from becoming overly powerful within the AI landscape. In our report we called upon the Government, with the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by big technology companies operating in the UK. It is vital that SMEs have access to datasets so that they are free to develop AI.

The fifth and unifying thread is that an ethical approach is fundamental to making the development and use of AI a success for the UK. A great deal of lip service is being paid to the ethical development of AI, but we said that the time had come for action and suggested five principles that could form the basis of a cross-sector AI code. They should be agreed and shared widely and work for everyone. Without this, an agreed ethical approach will never be given a chance to get off the ground. We did not suggest any new regulatory body for AI, taking the view that ensuring that ethical behaviour takes place should be the role of existing regulators, whether the FCA, the CMA, the ICO or Ofcom. We believe also that in the private sector there is a strong potential role for ethics advisory boards.

AI is not without its risks, as I have emphasised, and the adoption of the principles proposed by the committee will help to mitigate these. An ethical approach will ensure that the public trust this technology and see the benefits of using it. It will also prepare them to challenge its misuse. All this adds up to a package that we believed would ensure that the UK could remain competitive in this space while retaining public trust. In our report we asked whether the UK was ready, willing and able to take advantage of AI.

The big question is therefore whether the Government have accepted all our recommendations. I must tell your Lordships that it is a mixed scorecard. On the plus side, there is acceptance of the need to retain and develop public trust through an ethical approach, both nationally and internationally. A new chair has been appointed to the Centre for Data Ethics and Innovation and a consultation has started on its role and objectives, including the exploration of governance arrangements for data trusts and access to public datasets, and the centre is now starting two studies on bias and microtargeting. Support for data portability is now being established. There is recognition by the CMA of competition issues around data monopoly. There is recognition of a need for,

“multiple perspectives and insights ... during the development, deployment and operation of algorithms”—

that is, recognition of the need for diversity in the AI workforce. And there is commitment to a national retraining scheme.

On the other side, the recent AI sector deal is a good start, but only a start towards a national policy framework. Greater ambition is needed. Will the new government Office for AI deliver this in co-ordination with the new AI council? I welcome Tabitha Goldstaub’s appointment as its chair, but when will it be up and running? Will the Centre for Data Ethics and Innovation have the resources it needs, and will it deliver a national ethical framework?

There was only qualified acceptance by the Department of Health of the need for transparency, particularly in healthcare applications. Given the recent DeepMind announcement that its Streams project is to be subsumed by Google and, moreover, that it is winding up its independent review panel, what implications does this have for the health service, especially in the light of previous issues over NHS data sharing?

The Department for Education was defensive on apprenticeships and skills shortages and appears to have limited understanding of the need for creative and critical thinking skills as well as computer skills.

The MoD in its response sought to rely on a definition of lethal autonomous weapons, distinguishing between automated and autonomous weapons, which no other country shares. This is deeply worrying, especially as it appears that we are developing autonomous drone weaponry. I would welcome comment by the Minister on all those points.

Some omens from the Government are good; others are less so. We accepted that AI policy is in its infancy in the UK and that the Government have made a good start in policy-making. Our report was intended to be helpful in developing that policy to ensure that it is comprehensive and co-ordinated between all its different component parts.

By the same token, I hope that the Government will accept the need for greater ambition and undertake to improve where their response has been inadequate. I beg to move.

--- Later in debate ---
Lord Clement-Jones

My Lords, every Select Committee hopes for a debate as good as this one. The noble Lord, Lord Stevenson, pointed out the exceptional number of non-committee members who have taken part. That is a sign of the quality of today’s debate and the points made. Noble Lords showed expertise in so many different sectors: healthcare, defence, film, industry, financial services and the future. Not all noble Lords have recently published books on the future, but the contribution from the noble Lord, Lord Rees, was much appreciated.

Nearly all speakers emphasised the need for momentum in developing not only AI but the ethical frameworks that we need. Quite frankly, we are still in the foothills. The issue will become of greater importance as we combine AI with other technologies such as the internet of things and blockchain. We need to be absolutely clear that our policy must be active. We must also have the means of scrutiny. I hope that the House will come back to this, perhaps in one of the other Select Committees, rather than an ad hoc one. As things move on so quickly in this area, we need to keep abreast of developments. The mantra that I repeat to myself, pretty much daily, is that AI should be our servant, not our master. I am convinced that design, whether of ethics, accountability or intelligibility, is absolutely crucial. That is the way forward and I hope that, by having that design, we can maintain public trust. We are in a race against time and we have to make sure we are taking the right steps to retain that trust.

I thank all noble Lords for this debate. This is only the first chapter; there is a long road to come.

Motion agreed.