Debates between Lord Clement-Jones and Lord Deben during the 2019-2024 Parliament

Trade (Mobile Roaming) Regulations 2023

Debate between Lord Clement-Jones and Lord Deben
Tuesday 31st January 2023

Grand Committee
Lord Deben (Con)

I wanted to ask my noble friend: what advantage does the mobile telephone user get from us having left the European Union? Is this not a rather pathetic doing of a deal with a few countries, when everybody in Britain suffers from having left the European Union and being charged extra? This deal is just with a couple of countries—even Liechtenstein is left out.

Lord Clement-Jones (LD)

My Lords, that was a suitable start to my own small intervention. I will not trouble the Minister for too long but I want to strike a note of genuine regret, rather along the lines of what the noble Lord, Lord Deben, said.

It is a very small crumb of comfort to be faced with this order when previously, right across the EU, there were no roaming charges for consumers. As we saw, last July the EU extended the exemption from roaming charges for another 10 years—an extensive period. I suspect we are all now much more aware of what we have lost as a result of leaving the EU, exactly as the noble Lord mentioned.

There is a small consolation offered in this free trade agreement. I do not know whether any negotiations will ever be afoot again with the EU about taking advantage of its single market and the resulting absence of roaming charges. Maybe the Minister could say whether any kind of initiative is available.

I have only a couple of questions about these new regulations. The Minister talked about the technicalities of wholesale, retail and so on. Obviously, the retail charges—if any—follow from any wholesale charges. How are these charges to be set? What is the basis for them? The exemption is limited to Norway and Iceland; even Liechtenstein did not feel moved enough to join up to this great roaming exemption. Why has Liechtenstein excluded itself from this splendid initiative?

Of course, we support these regulations. I welcome particularly that there is a review. I am greatly in favour of government reviewing its own regulations, and the mechanism in Regulation 13 is very useful, but what does the Minister envisage? Do we do this after a couple of years, after five years, this time next year or never? What is the plan? It is useful at least to have in the department’s diary something that says, “Review these Norway and Iceland regulations”, when somebody has the spare time to do it. I hope that consumers will take great benefit from these regulations.

Algorithms: Public Sector Decision-making

Debate between Lord Clement-Jones and Lord Deben
Wednesday 12th February 2020

Lords Chamber
Lord Clement-Jones (LD)

My Lords, first, a big thank you to all noble Lords who are taking part in the debate this evening.

Over the past few years we have seen a substantial increase in the adoption of algorithmic decision-making—ADM—and prediction across central and local government. An investigation by the Guardian last year showed that some 140 of 408 councils in the UK are using privately developed algorithmic “risk assessment” tools, particularly to determine eligibility for benefits and to calculate entitlements. Data Justice Lab research in late 2018 showed that 53 out of 96 local authorities and about a quarter of police authorities are now using algorithms for prediction, risk assessment and assistance in decision-making. In particular, we have the Harm Assessment Risk Tool—HART—system used by Durham police to predict reoffending, which Big Brother Watch has shown to have serious flaws in the way its use of profiling data introduces bias, discrimination and dubious predictions.

Central government use is more opaque, but HMRC, the Ministry of Justice and the DWP are the highest spenders on digital, data and algorithmic services. A key example of ADM use in central government is the DWP’s much-criticised universal credit system, which was designed to be digital by default from the beginning. The Child Poverty Action Group, in its study Computer Says “No!”, shows that those accessing their online account are not given an adequate explanation of how their entitlement is calculated.

The UN special rapporteur on extreme poverty and human rights, Philip Alston, looked at our universal credit system a year ago and said in a statement afterwards:

“Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency.”

These issues have been highlighted by Liberty and Big Brother Watch in particular.

Even where ADM is not the sole basis for a decision, the impact of an automated decision-making system across an entire population can be immense in terms of potential discrimination, breach of privacy, access to justice and other rights. Last March, the Committee on Standards in Public Life decided to carry out a review of AI in the public sector to understand its implications for the Nolan principles and to examine whether government policy is up to the task of upholding standards as AI is rolled out across our public services. The committee chair, the noble Lord, Lord Evans of Weardale, said on publishing the report this week:

“Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector. Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”

It found that despite the GDPR, the data ethics framework, the OECD principles and the guidelines for using artificial intelligence in the public sector, the Nolan principles of openness, accountability and objectivity are not embedded in AI governance in the public sector, and should be.

The committee’s report presents a number of recommendations to mitigate these risks, including greater transparency by public bodies in the use of algorithms, new guidance to ensure that algorithmic decision-making abides by equalities law, the creation of a single coherent regulatory framework to govern this area, the formation of a body to advise existing regulators on relevant issues, and proper routes of redress for citizens who feel decisions are unfair.

It was clear from the evidence taken by our own AI Select Committee that Article 22 of the GDPR, which deals with automated individual decision-making, including profiling, does not provide sufficient protection for those subject to ADM. It contains a right-to-explanation provision for when an individual has been subject to fully automated decision-making, but few highly significant decisions are fully automated. Often ADM is used as decision support; for example, in detecting child abuse. The law should also cover systems where AI is only part of the final decision.

The May 2018 Science and Technology Select Committee report, Algorithms in Decision-Making, made extensive recommendations. It urged the adoption of a legally enforceable right to explanation that would allow citizens to find out how machine learning programs reach decisions that affect them and potentially challenge the results. It also called for algorithms to be added to a ministerial brief and for departments to publicly declare where and how they use them. Subsequently, a report by the Law Society published last June about the use of AI in the criminal justice system expressed concern and recommended measures for oversight, registration and mitigation of risks in the justice system.

Last year, Ministers commissioned the AI adoption review, which was designed to assess the ways that artificial intelligence could be deployed across Whitehall and the wider public sector. Yet the Government are now blocking the full publication of the report and have provided only a heavily redacted version. How, if at all, does the Government’s adoption strategy fit with the publication last June by the Government Digital Service and the Office for Artificial Intelligence of guidance for using artificial intelligence in the public sector, and then in October further guidance on AI procurement derived from work by the World Economic Forum?

We need much greater transparency about current deployment, plans for adoption and compliance mechanisms. In its report last year entitled Decision-making in the Age of the Algorithm, NESTA set out a comprehensive set of principles to inform human/machine interaction for public sector use of algorithmic decision-making which go well beyond the government guidelines. Is it not high time that a Minister was appointed, as was also recommended by the Commons Science and Technology Select Committee, with responsibility for making sure that the Nolan standards are observed for algorithm use in local authorities and the public sector and that those standards are set in terms of design, mandatory bias testing and audit, together with a register for algorithmic systems in use—

Lord Deben (Con)

Could the noble Lord extend what he has just asked for by saying that the Minister should also cover those areas where algorithms defeat government policy and the laws of Parliament? I point by way of example to how dating agencies make sure that Hindus of different castes are never brought together. The algorithms make sure that that does not happen. That is wholly contrary to the rules and regulations we have and it is rather important.

Lord Clement-Jones

My Lords, I take entirely the noble Lord’s point, but there is a big distinction between what the Government can do about the use of algorithms in the public sector and how the private sector should be regulated. I think that he is calling for regulation in that respect.

All the aspects that I have mentioned are particularly important for algorithms used by the police and the criminal justice system in decision-making processes. The Centre for Data Ethics and Innovation should have an important advisory role in all of this. If we do not act, the Legal Education Foundation advises, we will find ourselves in the same position as the Netherlands, where a court recently ruled that an algorithmic risk assessment tool called SyRI, used to detect welfare fraud, breached Article 8 of the European Convention on Human Rights.

There is a problem with double standards here. Government behaviour is in stark contrast to the approach of the ICO’s draft guidance, Explaining Decisions Made with AI, which may meet the point just made by the noble Lord. Last March, when I asked an Oral Question on this subject, the noble Lord, Lord Ashton of Hyde, ended by saying:

“Work is going on, but I take the noble Lord’s point that it has to be looked at fairly urgently”.—[Official Report, 14/3/19; col. 1132.]

Where is that urgency? What are we waiting for? Who has to make a decision to act? Where does the accountability lie for getting this right?