Three Lord Giddens debates involving the Department for Digital, Culture, Media & Sport

Algorithms: Public Sector Decision-making

Lord Giddens Excerpts
Wednesday 12th February 2020


Lords Chamber
Lord Giddens (Lab)

My Lords, as we have six minutes, let me also congratulate the noble Lord, Lord Clement-Jones, on having introduced this debate so ably and say what an excellent and, if I might say so, affable chairman he was of the AI committee.

AI and machine learning are on the front line of our lives wherever we look. The Centre for Disease Control in Zhejiang Province in China is deploying AI to analyse the genetic composition of the coronavirus. It has shortened a process that used to take many days to 30 minutes. Yet we—human beings—do not know exactly how that outcome was achieved. The same is true of AlphaGo Zero, which famously trained itself to beat the world champion at Go, with no direct human input whatever. That bears on what the noble Baroness, Lady Rock, said. Demis Hassabis, who created the system, said that AlphaGo Zero was so powerful because it was

“no longer constrained by the limits of human knowledge.”

That is a pretty awesome statement.

How, therefore, do we achieve accountability, as the Commons report on algorithms puts it, for systems whose reasoning is opaque to us but that are now massively entwined in our lives? This is a huge dilemma of our times, which goes a long way beyond correcting a few faulty or biased algorithms.

I welcome the Government’s document on AI and the public sector, which recognises the impact of deep learning and the huge issues it raises. California led the world into the digital revolution and looks to be doing the same with regulatory responses. One proposal is for the setting up of public data banks—data utilities—which would set standards for public data and, interestingly, integrate private data accumulated by the digital corporations with public data and create incentives for private companies to transfer private data to public uses. There is an interesting parallel experiment going on in Toronto, with Google’s direct involvement. How far are the Government tracking and seeking to learn from such innovations in different parts of the world? This is a global, ongoing revolution.

Will the Government pay active and detailed attention to the regulation of facial recognition technology and, again, look to what is happening elsewhere? The EU, for example—with which I believe we used to have some connection—is looking with some urgency at ways of imposing clear limits on such technology to protect the privacy of citizens. There are a variety of cases in this area about which the Information Commissioner, Elizabeth Denham, has expressed deep concern.

On a more parochial level, noble Lords will probably know about the furore around the use of facial recognition at the King’s Cross development. The cameras installed by the developer at the site incorporated facial recognition technology. Although limited in nature, it had apparently been in use for some while.

The surveillance camera code of practice states:

“There must be as much transparency in the use of a surveillance camera system as possible”.


That is not the world’s most earth-shattering statement, but it is important. The code continues by saying that clear justification must be offered. What procedures are in place across the country for that? I suspect that they are pretty minimal, but this is an awesome new technology. If you look across the world, you can see that authoritarian states have an enormous amount of day-to-day data on everybody. We do not want that situation reproduced here.

The new Centre for Data Ethics and Innovation appears to have a pivotal role in the Government’s thinking. However, there seems to be rather little detail about it so far. What is the timetable? How long will the consultation period last? Will it have regulatory powers? That is pretty important. After all, the digital world moves at a massively fast pace. How will we keep up?

Quite a range of bodies are now concerned with the impact of the digital revolution. I congratulate the Government on that, because it is an achievement. The Turing Institute seems well out in front in terms of coherence and international reputation. What is the Minister’s view of its achievements so far and how do the Government see it meshing with this diversity of other bodies that—quite rightly—have been established?

Artificial Intelligence

Lord Giddens Excerpts
Thursday 26th April 2018


Lords Chamber
Lord Ashton of Hyde

The development of weapons generally is a very dangerous thing. We consider that the existing provisions of international humanitarian law are sufficient to regulate the use of weapons systems which might be developed in the future, as they have been flexible enough in the past to cope with the invention of new means of warfare, such as submarines and aeroplanes. However, we are obliged to determine whether new weapons or means of warfare comply with international law. We will continue to engage with the UN on this point. We bear it in mind; we understand the implications of it, and we will remain within international law as it stands.

Lord Giddens (Lab)

My Lords, my noble friend stole my thunder a bit. In the way in which AI is described here, it sounds very benign. It is indeed important to innovation in the future, but it is stuffed with risks and dangers wherever you look, from labour markets to weaponry and all sorts of other areas. It is a huge mix of advantages and massive problems. I would like at least some comment on how the Government will deal with them.

The Statement repeated the idea that AI will inevitably increase productivity. I know where the statistics come from. I am deeply sceptical about them. The advance of the digital revolution so far has been associated with declining, rather than increasing, productivity. We have to be careful not to see some magic in all this which may not be there, which would then bring us back to the problems and dangers.

Lord Ashton of Hyde

The Statement said that AI had the potential to bring about a massive increase in productivity. In some areas, it will, as case studies show. For example, KLM doubled the number of text-based customer inquiries it handled during the past year while increasing the number of agents by 6%, so it is possible. I understand that there will be disruption in jobs because there will probably be an increase in the number of high-value jobs. It will have implications. Overall, we think that it has the potential to raise productivity if it is handled properly, and by quite a lot. However, we accept that it has problems. We have to encourage such things as lifetime learning to enable people to transfer their skills so that they can contribute in a more modern way.

We accept that there are problems and dangers. That is one reason why we will have the Centre for Data Ethics and Innovation: so that we can bring in independent people to advise the Government on where regulation will be necessary and how regulations and laws should be developed. We are addressing that. The AI council will also inform government, because it will not just be government mandating from the centre; it will be a place where academia, the sector, industry and government can come together to drive the changes in the future.

Digital Understanding

Lord Giddens Excerpts
Thursday 7th September 2017


Lords Chamber
Lord Giddens (Lab)

My Lords, I join other noble Lords in congratulating the noble Baroness, Lady Lane-Fox, on her excellent opening speech and her extraordinary career so far. Apropos of what the noble Lord, Lord Sugar, said, I note that quite a few noble Lords were looking at their devices while he was speaking, and he has so far looked at his device three times since he finished speaking.

Do we need improved digital understanding at all levels of our society? You bet we do. I completely buy the distinction made by the noble Baroness between digital skills and digital understanding, and digital understanding is absolutely central to the next few years in our society and in the world at large. The digital revolution is a huge wave of change breaking across the world, transforming not only our largest institutions but also intimate aspects of our personal lives. The digital revolution is not the internet; the digital revolution is not robotics; the digital revolution is not awesome algorithmic or supercomputing power. It is all three of these, producing a pace of change unknown previously. The pace of change today far outstrips that of the industrial revolution, and it is far more immediately global. It is a whole new world, into which we are being plunged at almost the speed of light. As other noble Lords have said, it is a vast mixture of opportunities and threats. The opportunities are very large. Consider, for example, the overlap between supercomputing power and genetics. Genetics is simply information, and as supercomputers deal in the awesome power of information, there will be fantastic advances in medicine, but the threats are just as large, and they are everywhere.

I have three quick points. First, the huge digital corporations must be held to account in relation to democratic processes and concerns, and this must happen quickly. Our lives have been invaded. Data are kept, in enormous amounts, on all of us. We cannot simply accept this as it stands. Secondly, as citizens, we cannot just sit back and accept a situation where human beings are programmed out of key technologies. Smart machines can be designed either to replace us or to enhance and extend our capabilities. When it comes to the distinction between AI and what has been called IA—intelligence augmentation—we should push for the second of these. This is a very serious issue. Thirdly, direct human contact should be preserved and sometimes reintroduced. “Back to the future” is a good way of handling advanced technologies. Let us reintroduce human contact wherever we can where at the moment we have robotic automated voices. Let us contain and humanise the robots.