Artificial Intelligence (Select Committee Report)

Viscount Hanworth Excerpts
Monday 19th November 2018

Lords Chamber
Viscount Hanworth (Lab)

My Lords, artificial intelligence is a concept that is not amenable to a precise definition, although many have been attempted. In a narrow sense, AI denotes the ability of machines to mimic the cognitive skills of human beings, including learning and problem-solving. In a broad sense, it denotes any decision-making that is mediated by a computer. The popular concept of AI has been greatly influenced by the test proposed by Alan Turing in 1950. Turing asserted that if an individual, working for an extended period at a keyboard, could not reliably determine whether their respondent was a human or a machine, then a machine that passed this test could be said to exhibit artificial intelligence.

This notion of artificial intelligence places a high requirement on the machine. It also engenders the fear and anxiety that, with the advent of AI, people will be manipulated, increasingly, by impersonal and malign forces devoid of human empathy and understanding. The right reverend Prelate the Bishop of Oxford, among others, alluded to such anxieties. A different and more carefree definition of artificial intelligence has been advanced by Larry Tesler, who observed that AI connotes anything that has yet to be achieved by computers. What has already been achieved, such as speech recognition or optical character recognition, is liable to be regarded merely as computer technology.

Doubts about the definition are reflected in the introduction to the excellent report from the Select Committee on Artificial Intelligence, which includes a word cloud illustrating the many attempted definitions of artificial intelligence. The report also contains a brief history of the progress of AI, in which mention is made of the aspersion against James Lighthill that he was responsible for arresting its development in the UK through an adverse report delivered to the Science Research Council in 1973. Lighthill merely asserted that AI was not a coherent academic discipline and that, as such, it did not warrant specific funding. It should also be said that some of the concepts at the forefront of modern endeavours, such as artificial neural networks and Bayesian learning, have been around for a very long time.

Notwithstanding these doubts about a definition, the committee has produced a well-focused report. Faced with the rapidly increasing application of computers in diverse spheres of decision-making, it highlights the hazards of their misapplication and advocates a wide range of measures that should be taken to counteract the dangers. To a lesser extent, it identifies steps that can be taken to maximise the benefits arising from the application of computers in decision-making.

Some of the hazards that the report has identified are well known. Among these is the criminal use of computers, commonly involving fraud and impersonation. These are too well known for me to dwell upon them at length; indeed, Members of Parliament are regularly alerted to such hazards. The threats to our democratic process from fake news and from personalised campaign messages conveyed by digital media have also achieved prominence recently. The novelty of these threats lies in the power and prevalence they have achieved as a consequence of the hugely increased processing power of computers. The hazards that I wish to highlight are of a different kind. They stem, to a large extent, from a lack of numeracy on the part of many of our decision-makers, who may not have had any scientific education.

The first of these hazards is a tendency to spurious quantification, which might be described as an attempt to measure the unmeasurable. To many, it must seem that a hallmark of modern management is decision-making based on aggregate statistics and on the models of human and social interaction that can be derived from them. The educational sector at all levels has suffered from the ills of spurious quantification, which is most crudely represented by educational league tables. It is proposed that the multifarious activities of an educational establishment can be summarised in a single index purporting to represent the quality of its provision, and that this index can be used to determine its ranking in a long list of similar establishments. Aggregate measures of quality or performance are compounded by applying differential weights to incommensurable quantities and by adding them together. Chalk is mixed with cheese in arbitrary proportions to produce an indigestible amalgam.
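
To make the arithmetic objection concrete, the following is a minimal sketch in which two hypothetical institutions are scored on two incommensurable indicators. The institutions, indicators, scores and weights are entirely invented for illustration; the point is only that the resulting ranking is determined by the arbitrary choice of weights.

```python
# A minimal sketch with wholly hypothetical figures: two institutions are
# scored on two incommensurable indicators (say, teaching and research),
# which a league table combines into a single index via arbitrary weights.

scores = {
    "Institution A": {"teaching": 80, "research": 40},
    "Institution B": {"teaching": 60, "research": 70},
}

def composite_index(indicators, weights):
    """The 'single index': a weighted sum of incommensurable quantities."""
    return sum(weights[k] * v for k, v in indicators.items())

for weights in ({"teaching": 0.7, "research": 0.3},
                {"teaching": 0.3, "research": 0.7}):
    ranking = sorted(scores,
                     key=lambda name: composite_index(scores[name], weights),
                     reverse=True)
    print(weights, "->", ranking)

# Output: with weights (0.7, 0.3), A ranks first (index 68.0 vs 63.0);
# with weights (0.3, 0.7), B ranks first (67.0 vs 52.0). The ranking is
# an artefact of the weights, not of any underlying quality.
```

Since no choice of weights is privileged, the apparent precision of the single index conceals an arbitrary value judgment.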

For civil servants and administrators, the advantage of such summary measures lies in their ability to simplify the decision-making process, which often concerns financial and other kinds of support that must be given to the institutions. The statistics allow individual institutions to be removed from view and allow remote and uninformed decisions to be taken without any attendant qualms. I sometimes wonder whether the decision-makers would satisfy what I describe as the inverse Turing test—can they be clearly distinguished from robots? The onus of gathering the information that gives rise to the spurious quantification, or of producing accompanying self-justifications, falls upon the institutions in question. The demands can become so great as to impede their proper functioning.

For a long time, the primary and secondary tiers of our educational system have been subject to decisions arising out of their rankings. More recently, our universities have been subjected to the same methodology. I have a clear view of the consequences, which I consider to be disastrous. The emphasis on statistical analyses has, of course, been fostered by the availability of computers. The lack of quantitative skills on the part of those who handle the information, and their inability properly to interrogate it, constitute a major hazard. The problem has been highlighted by the noble Earl, Lord Erroll.

Had I time to describe them fully, I would dwell at length on some of the fiascos that have arisen from the Government’s attempts to adopt computerised information processing. One of the most prominent examples concerns the initial attempt by the NHS, some years ago, to create an integrated system of patient record-keeping. A large and unrecoverable sum of money was given to an American software company, which created nothing of any use. The episode illustrated one of the hazards of outsourcing. It was supposed that it would be far more efficient for the organisation to use the services of experts in matters of computing than to rely upon its own expertise. However, if there are no resident experts within an organisation, it is usually incapable of assessing its own needs or of envisaging a means of satisfying them. In that case, it is liable to be vulnerable to confusion and exploitation. The noble Lord, Lord Kakkar, spoke eloquently on that issue.

To those of us serving on a Lords Finance Bill Sub-Committee, it seems clear that HM Revenue and Customs is in the process of creating a similar fiasco with its Making Tax Digital programme. It seems to me that, far from being new and unprecedented, the principal hazards of artificial intelligence are both familiar and mundane. They will be overcome only when we face up to the need to devote far more resources to enhancing the mathematical, quantitative and computer skills of our nation. The issue is a perennial one: are we to be the masters of our technology or its slaves?