AI in the UK (Liaison Committee Report)

Lord Bishop of Oxford Excerpts
Wednesday 25th May 2022

Grand Committee
The Lord Bishop of Oxford

My Lords, it is a pleasure to follow the noble Lord, Lord Evans, and thank him in this context for his report, which I found extremely helpful when it was published and subsequently. It has been a privilege to engage with the questions around AI over the last five years: through the original AI Select Committee, so ably chaired by the noble Lord, Lord Clement-Jones, through the Liaison Committee, and as a founding board member of the Centre for Data Ethics and Innovation for three years. I thank the noble Lord for his masterly introduction today and other noble Lords for their contributions.

There has been a great deal of investment, thought and reflection regarding the ethics of artificial intelligence over the last five years in government, the National Health Service, the CDEI and elsewhere—in universities, with several new centres emerging, including in the universities of Oxford and Oxford Brookes, and by the Church and faith communities. Special mention should be made of the Rome Call for AI Ethics, signed by Pope Francis, Microsoft, IBM and others at the Vatican in February 2020, and its six principles of transparency, inclusion, accountability, impartiality, reliability and security. The most reverend Primate the Archbishop of Canterbury has led the formation of a new Anglican Communion Science Commission, drawing together senior scientists and Church leaders across the globe to explore, among other things, the impact of new technologies.

Despite all this endeavour, there is no room for complacency in this part of the AI landscape. The technology is developing rapidly and its use is, for the most part, ahead of public understanding. AI creates enormous imbalances of power with inherent risks, and the moral and ethical dilemmas are complex. We do not need to invent new ethics, but we do need to develop and apply our common ethical frameworks to rapidly developing technologies and new contexts. The original AI report suggested five overarching principles for an AI code. It seems appropriate in the Moses Room to say that there were originally 10 commandments, but they were wisely whittled down by the committee. They are not perfect, in hindsight, but they are worth revisiting five years on as a frame for our debate.

The first is that artificial intelligence should be developed for the common good and benefit of humanity; as the noble Lord, Lord Holmes, eloquently said, the debate often slips straight into the harms and ignores the good. This principle is not self-evident and needs to be restated. AI brings enormous benefits in medicine, research, productivity and many other areas. The role of government must be to ensure that these benefits are to the common good—for the many, not the few. Government, not big tech, must lead. There must be a fair distribution of the wealth that is generated, a fair sharing of power through good governance and fair access to information. This simply will not happen without national and international regulation and investment.

The second principle is that artificial intelligence should operate on principles of intelligibility and fairness. This is much easier to say than to put into practice. AI is now being deployed, or could be, in deeply sensitive areas of our lives: decisions about probation, sentencing, employment, personal loans, social care—including of children—predictive policing, the outcomes of examinations and the distribution of resources. The algorithms deployed in the private and public sphere need to be tested against the criteria of bias and transparency. The governance needs to be robust. I am sure that an individualised, contextualised approach in each field is the right way forward, but government has a key co-ordinating role. As the noble Lord, Lord Clement-Jones, said, we do not yet have that robust co-ordinating body.

Thirdly, artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities. As a society, we remain careless of our data. Professor Shoshana Zuboff has exposed the risks of surveillance capitalism and Frances Haugen, formerly of Meta, has exposed the way personal data is open to exploitation by big tech. Evidence was presented to the online safety scrutiny committee of the effects on children and adolescents of 24/7 exposure to social media. The Online Safety Bill is a very welcome and major step forward, but new regulation and continual vigilance will remain essential.

Fourthly, all citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence. It seems to me that of these five areas, the Government have been weakest here. A much greater investment is needed by the Department for Education and across government to educate society on the nature and deployment of AI, and on its benefits and risks. Parents need help to support children growing up in a digital world. Workers need to know their rights in the digital economy, and fresh legislation will be needed to promote good work. There needs to be even better access to new skills and training. We need to strive as a society for even greater inclusion. How do the Government propose to offer fresh leadership in this area?

Finally, the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence, as others have said. This final point highlights a major piece of unfinished business in both reports: engagement with the challenging and difficult questions of lethal autonomous weapons systems. The technology and capability to deploy AI in warfare is developing all the time. The time has come for a United Nations treaty to limit the deployment of killer robots of all kinds. This Government and Parliament, as the noble Lord, Lord Browne, eloquently said, urgently need to engage with this area and, I hope, take a leading role in the governance of research and development.

AI can bring, and has brought, many benefits, as well as many risks. There is great openness and willingness on the part of many working in the field to engage with the humanities, philosophers and the faith communities. There is a common understanding that the knowledge brought to us by science needs to be deployed with wisdom and humility for the common good. AI will continue to raise sharp questions of what it means to be human, and of how to build a society and a world where all can flourish. As many have pointed out, even the very best examples of AI as yet come nowhere near the complexity and wonder of the human mind and person. We have been given immense power to create but we are ourselves, in the words of the psalmist, fearfully and wonderfully created.