Biometric Recognition Technologies in Schools Debate

Baroness Falkner of Margravine (Crossbench - Life peer)
Thursday 4th November 2021

Lords Chamber
Baroness Falkner of Margravine (CB)

My Lords, although I chair the Equality and Human Rights Commission, I emphasise that I am speaking in a personal capacity today. Not only that—I am speaking as a new entrant to this area, so I am particularly grateful to the noble Lord, Lord Clement-Jones, for securing this debate and spelling out the risks so clearly to a lay person such as me.

The one point where I will interject the EHRC into this discussion is to tell the House that in our new strategic plan, which commences in 2022 and runs until 2025, we have decided that one of our workstreams should focus on AI and associated technologies. We took this decision earlier this year, for several reasons. The regulatory space is very fragmented and inadequate, in our view. While developments in technology are transforming people’s lives for the better, the impacts are not yet well understood and, where they are, we are starting to see the harmful impact that some technologies have on individuals’ equality and human rights.

As the regulator of the public sector equality duty, as well as human rights law, the EHRC is taking an active interest in the discriminatory and potentially biased outcomes that some of these technologies have for the legal protections afforded to people, particularly on the basis of race and sex. We are seeing increasing numbers of cases involving race and technology in which it is alleged that facial recognition technology (FRT) has failed, not least in the Uber cases supported by the EHRC, in which two drivers are taking the company to court on the basis that they lost their jobs because the technology failed to recognise them when they used it as a form of ID to sign on to work. We also know that the technology is less accurate for women, and that the inaccuracy compounds when being female is combined with having darker skin. The danger of discrimination against these groups is very much on our radar.

On today’s topic, I share many of the concerns already voiced. I therefore join others in welcoming the belated climbdown from Facebook, which is deleting 1 billion facial recognition templates and shutting down the features that automatically recognise people in photos. Like the noble Lord, Lord Vaizey, I wondered what led it to time this announcement so carefully, given that the noble Lord, Lord Clement-Jones, had secured this debate. I fear it was Mammon rather than good intention that brought it to this point.

Of course, the fact that Facebook is doing this is not sufficient. It will keep to itself the power to use the technology when it sees fit, for verifying identities or unlocking hacked phones, it tells us. Troublingly, according to the Financial Times, the algorithm behind the technology, DeepFace, which has been trained on the data of 1 billion scans, will remain extant, to be deployed elsewhere in future products, most likely in the metaverse. This reminds me of Covid and the whack-a-mole strategy: Covid kept popping up in different variants at different times and places. Watch this space with DeepFace.

I note too the broader question of why we have arrived at a situation in which it is left to private companies to decide when their technology is, or is perceived to be, too harmful, and to limit its use autonomously. Where in the regulatory space will it be decided whether DeepFace’s algorithm can and should use the data still held?

On the exploitation of children, we have suspected for years that the social media firms do not have the safety of children uppermost in their minds, and this has been palpably brought home by Frances Haugen’s testimony in the past few weeks. What is worrying about the decision in Scotland to allow the use of FRT in nine schools is that it was to be deployed merely as a post-Covid efficiency measure. I do not think I am alone in this House in thinking that we will spend years undoing measures introduced during Covid that are allowed to remain on the statute book, until we find that they are being used in a manner wholly disproportionate in terms of equality and human rights. In plain English, if schools really wanted children to mingle more safely while waiting for meals, they would have been better advised to improve vaccine take-up among their children as a post-Covid measure. I welcome the Information Commissioner’s intervention in this matter. There appear to be different approaches to solving the problem that may well be more proportionate than holding the biometric data of children, who will almost certainly not be aware of the privacy implications of their consent.

I will end with a few words on the broader importance of being vigilant about emerging technologies. For the very first time, we are in a position in which decisions that affect all aspects of our lives are being taken in the absence of an accountable and identifiable human in the frame. Our legal systems around the world still rest on the assumption that we can identify a decision-maker and hold them accountable. They are not designed to hold machines accountable, especially where the originator of the learning, so to speak, is well removed from its usage. We are increasingly entering a world in which finding the human behind a decision is impossible for ordinary people seeking redress.

I end by asking the Minister whether she agrees that what is needed is to strengthen existing protections for this AI-driven world: protections that offer clear legal remedies to people who are wronged, that go beyond data privacy, and that allow us to know as a matter of right who holds what data on us, how it is being used and, importantly, how much of it is being transferred to others, at what profit, without our knowledge. Will the Government put in place legal protections that make it clear when an algorithm is being used to take decisions about us and what data lies behind those decisions? Most importantly, will senior managers be made accountable for flawed decisions by their systems and organisations, with clear remedies available for those on the losing end of those decisions?

I fear that the Government will respond with platitudes about their new determination to regulate in this space. I think we are past the point of determination and now need to find evidence of a readiness to confront this challenge.