Public Authorities (Fraud, Error and Recovery) Bill Debate
Viscount Younger of Leckie (Conservative - Excepted Hereditary)
Debate with the Department for Work and Pensions
Grand Committee
The noble Baroness might expect one of us to intervene. I understand where she is coming from in terms of reports, because these amendments are basically focusing on the laying of reports. However, outside the Room I have asked in the past about the current level of fraud. The noble Baroness alluded to it, but perhaps she could confirm that at the moment, the estimated level of public sector fraud stands at £55 billion. I know that I have asked for this before, but it would be very helpful to have a breakdown of how much public sector fraud there is when it comes to the DWP aspects of the Bill. I think I am asking about the same issues, but it would be extremely helpful to know where we stand right now as a base, in terms of the level and quantity of fraud, and any breakdowns.
My Lords, I am more than happy to write to the noble Viscount.
My Lords, as part of the unusual alliance, I think that now is a good time to reflect on where we are in the Bill. We are now talking about powers targeted at recipients of universal credit, employment and support allowance, and pension credit. Relevant accounts that can be flagged to the Government include any account
“into which a specified relevant benefit is paid”.
Approximately 9.4 million people are in receipt of a benefit currently specified by the Bill—one in eight people in the UK. This in itself risks creating a two-tier society, in which certain groups are subjected to intrusive financial monitoring by the state while others are not.
I was very pleased to see these two amendments because I worry when I consider that, last year, two-thirds of claims flagged by a DWP algorithm as potentially high-risk were, in fact, legitimate. We are now talking about the use of algorithms in relation to the group of people I talked about, so I am happy to support the noble Lord, Lord Vaux, and the noble Baroness, Lady Bennett of Manor Castle, on Amendments 75A and 79A.
The key thing here is to stress something that has already been discussed at great length throughout our debates on the Bill, which is what we consider “reasonable grounds”. The noble Lord, Lord Vaux, has raised reasonability throughout. Generally, but not consistently, the investigator powers in the Bill are exercisable only when there are reasonable grounds for suspicion that, for example, fraud has been committed. Reasonable grounds are a safeguard to protect individuals from baseless state interference and fishing expeditions. They uphold the rule of law by preventing arbitrary state power but “reasonable” requires clarification once we go into the context of the role of technology, which is at the heart of the Bill; that is one of the reasons why I have put my name to these amendments and will raise other amendments in relation to algorithms later on in Committee.
These amendments are safeguards to ensure accountability; to ensure that we are clear about the basis on which algorithms are used; and to ensure that we do not allow them to become the basis of lazy caricatures and stereotypes. Examples have been given by other speakers on this group, but I anticipate that the Government might well cite the Equality Act as a guard against such discrimination. However, it is important to note that, although the Equality Act does lots of very good things, it will not necessarily help us here, because not all prejudice is reducible to protected characteristics. In fact, attitudes to people on benefits in general, and to sections of the white working class, do not fit into the Equality Act, so it is important that we do not just rely on another piece of legislation here.
Also, if AI algorithms, into which a potentially discriminatory nature can be built—as has already been explained—were to make mistakes and discriminate against any group that is covered by the Equality Act, we would be clogging up the Equality Act with lots of legal challenges based on this Bill. I think that using the “reasonable” test for algorithms, and ensuring that there is a commitment to no discrimination on the face of the Bill, is a very valuable way of countering that.
My Lords, as the noble Lord, Lord Vaux, said, we are moving towards the DWP elements of the Bill, although I suggest that these particular amendments are more of a hybrid between the Cabinet Office and the DWP. As I think the noble Baroness, Lady Fox, indicated, the DWP elements in scope are universal credit, the ESA and pension credit.
My Lords, it does not look as though we are ending on an easy group for me. Amendments 75A and 79A, tabled by the noble Lord, Lord Vaux of Harrowden, and the noble Baroness, Lady Bennett of Manor Castle, cover the same ground in Parts 1 and 2. The amendments would add a definition of what cannot constitute “reasonable grounds” in the legislation, setting out certain factors that will not constitute reasonable grounds for suspicion.
Although I understand the intention behind the amendments, I want to assure your Lordships that stereotypes and generalisations would not be considered reasonable grounds for starting an investigation or issuing an information notice. Under the information powers, an information notice may be sent only when an authorised officer has reasonable grounds to suspect that a relevant offence has been committed. An authorised officer must genuinely suspect that the fraud has been carried out by the individual, and that belief will be based on an objective assessment of facts, information and/or intelligence. “Reasonable grounds” are a standard test used by other organisations, including the police, and it is clear that they cannot be based on a hunch or the types of personal factors listed in the amendments.
The DWP has well-established safeguards to ensure that this test is applied properly in practice, with authorised officers documenting all reasoning for their decisions, including the basis for their suspicion, and through the Bill the PSFA will implement comparable safeguards. Management checks provide further internal assurance, and both the PSFA and the DWP intend to appoint His Majesty’s Inspectorate of Constabulary and Fire & Rescue Services to independently inspect the use of these powers.
Finally, DWP guidance for authorised officers is also included in the new draft code of practice, which has been made available to noble Peers as a working draft prior to consultation. The PSFA will draft guidance on the lawful use of its information powers, which will cover this issue.
I will review the specific points made, especially regarding automated processes, and will probably end up writing to noble Lords on the questions I do not cover, but I will give a flavour of the Government’s thinking. Do the PSFA or the DWP use automated processes that enable generalisations and stereotypes when gathering information about individuals? No, we do not. The DWP does not use automated processes to decide whether an information notice will be issued, and the PSFA will not do so when the power is granted. An information notice may only ever be issued by an authorised officer, who must carefully consider whether it is necessary and proportionate to do so and document their reasons.
Regarding artificial intelligence in fraud and error, given what is being debated in the Chamber, I feel that we have two AI conversations going on. The DWP has a responsibility to ensure that fraud is minimised so that the right payments are made to the right people at the right time. Fraud controls are vital to reduce waste and protect taxpayers’ money. Advanced analytics, including machine learning, will play a critical role in tackling fraud, error and debt.
There is currently one fraud and error machine-learning model in full deployment on universal credit advances, and others are at various stages of testing and development, designed to prevent fraud in the highest areas of loss. We have been careful to implement a supervised machine-learning approach and to incorporate human intervention to consider the case and make further inquiries if necessary. Our use of advanced analytics does not replace human judgment. The Bill does not introduce automated decision-making.
To improve our approach and assure Parliament and the public of our processes, we intend to develop fairness and analysis assessments, which can be published through the annual report and accounts process. We will ensure that the fairness analysis assessment sets out the rationale for why we judge the models to be reasonable and proportionate, but without divulging the detail of our fraud and error controls, which would put the department’s security at risk.
The noble Viscount will know better than me that two proofs of concept were completed by the last Government on this issue. So there is proof of concept on EVM, but we are clear, especially from the PSFA side, that we will continue with a test and learn approach to this, and will report back with any other developments. As I said, DWP decisions on fraud and error will be made by a human. I will review his other questions to see whether I need to write to him. I hope that that gives a level of reassurance to noble Lords, and that the amendment can be withdrawn.
I appreciate the answers that the Minister has given. I also appreciate that there are more answers to come, but could she add to the answer in writing the timing for the remaining proofs of concept: when are they going to be completed? I see that as being germane to the rolling out of this process.
My Lords, I will add that to the list of things to write to noble Lords about, if that is okay.