Data Protection and Digital Information (No. 2) Bill (Second sitting) Debate

Rebecca Long Bailey (Salford and Eccles) (Lab)

Q In relation to medical research, concerns have been raised that the Bill might risk a divergence from current EU adequacy and that that might have quite a significant detrimental impact on collaboration, which often happens across the EU on medical research. Are you concerned about that, and what should the Government do to mitigate it?

Jonathan Sellors: I think that it is absolutely right to be concerned about whether there will be issues with adequacy, but my evaluation, and all the analysis that I have read from third parties, particularly some third-party lawyers, suggests that the Bill does not, or should not, have any impact on the adequacy decision at all—broadly because it takes the sensible approach of starting from the existing GDPR and making incremental clarifications of what certain things actually mean. There are various provisions of the GDPR—for example, on genetic data and pseudonymisation—that consist of just one sentence. It is quite a complicated topic, so having clarification is thoroughly useful, and I do not think that should have any impact on the adequacy side of it. I think it is a very important point.

Tom Schumacher: I agree that it is a critical point. I also feel that the real value here is in clarifying what is already permitted under the European GDPR, but doing it in a way that preserves adequacy, streamlines the process and makes it easier for all stakeholders to reach a quick and accurate decision. I think that adequacy will be critical; I just do not think that the language of the text as it stands affects the Bill's ability to be found adequate.

Chi Onwurah (Newcastle upon Tyne Central) (Lab)

Q I know that you are very supportive of the Bill, but I wonder whether you see risks to patients and service users from facilitating a greater sharing of health and care data. Could you each answer that question?

Jonathan Sellors: I think that data sharing, of one sort or another, absolutely underpins medical research. You need to be able to do it internationally as well; it is not purely a UK-centric activity. The key is in making sure that the data that you are using is properly de-identified, so that research can be conducted on patients, participants and resources in a way that does not then link back to their health data and other data.

Chi Onwurah

Q So it has to be de-identified. We will return to that. But you do not see any other risks?

Jonathan Sellors: Let me put it this way: poor-quality research, undertaken in an unfortunate way, is always going to be a problem, but good-quality research, which has proper ethical approval and which is done on data that is suitably managed and collated, is an essential thing to be able to do.

Chi Onwurah

Q I agree with you. Sorry, I did not quite hear what you said—approval by whom?

Jonathan Sellors: Approval by the relevant ethics committee.

Chi Onwurah

Q Right. Is it a requirement of the Bill that the research should have the approval of the relevant ethics committee?

Jonathan Sellors: I do not think that it is a requirement of this Bill, but it is a requirement of pretty much all research that takes place in the UK.

Chi Onwurah

Q But not all research, surely, because the definition of research is something that can

“reasonably be described as scientific”

research. You would see concerns, then, if data was to be shared for research that was carried out outside of ethics committee approvals. I do not want to put words into your mouth, but I am just trying to understand.

Jonathan Sellors: Sure. I think it depends on the nature of the data that you are trying to evaluate. In other words, if you are looking at aggregated or summary datasets, I do not think there is any particular issue, but when you are looking at individual-level data, that has to be suitably de-identified in order for research to be safely conducted.

Chi Onwurah

Q On the point of de-identifying or pseudonymisation, do you recognise that there have been examples of pseudonymised data that has been re-identified, and that, particularly given the rise of huge datasets, artificial intelligence and so on, there is a risk of un-de-identifying pseudonymised data?

Jonathan Sellors: There is always a risk, but I think the way it is expressed in the Bill is actually quite measured. In other words, it takes a reasonable approach to what steps can constitute re-identification. There are certain police-related examples whereby samples are found at crime scenes. The individuals can certainly be identified if they are on the police database, but if they are not on a reference database, it is extremely difficult to re-identify them, other than with millions of pounds' worth of police work. For all practical purposes, such a sample is actually de-identified. Saying something is completely de-identified is quite difficult.

Chi Onwurah

Q Yes, I certainly agree with that—it is almost impossible—but I do think it is possible to re-identify data without spending millions of pounds, especially when it is correlated with other large datasets. Would you recognise that?

Jonathan Sellors: I definitely recognise that. That is one of our principal concerns, but usually the identifiers are the relatively simple ones. In other words, you can re-identify me quite easily from my seven-digit postcode, my age and my gender. Obviously, when we release data, we make sure not to do that. Releasing quite a big bit of my genetic sequence does not make me re-identifiable.

Chi Onwurah

Currently.

Jonathan Sellors: Currently—I accept that.
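
The exchange above describes what is usually called a linkage attack: a pseudonymised dataset is joined to another dataset on shared quasi-identifiers such as postcode, age and gender, re-attaching identities even though no direct identifier is present. A minimal sketch, with records and field names invented purely for illustration:

```python
import pandas as pd

# Pseudonymised research extract: direct identifiers removed, but
# quasi-identifiers (postcode, age, gender) retained.
research = pd.DataFrame({
    "pseudonym": ["p01", "p02", "p03"],
    "postcode":  ["M5 4WT", "NE1 7RU", "SW1A 0AA"],
    "age":       [44, 57, 38],
    "gender":    ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "hypertension"],
})

# A second dataset (an electoral roll, a leaked list, any large public
# dataset) carrying names alongside the same quasi-identifiers.
public = pd.DataFrame({
    "name":     ["A. Smith", "B. Jones"],
    "postcode": ["M5 4WT", "NE1 7RU"],
    "age":      [44, 57],
    "gender":   ["F", "M"],
})

# Joining on the quasi-identifiers re-attaches names to the
# "de-identified" health records.
reidentified = research.merge(public, on=["postcode", "age", "gender"])
print(reidentified[["name", "diagnosis"]])
```

This is why the witness treats postcode plus age plus gender as identifying, while a fragment of genetic sequence with no reference database to match against is, for practical purposes, not.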

Tom Schumacher: I would say a couple of things. It is important to note that the Bill preserves the full array of safeguards in the GDPR around data minimisation, access controls and making sure that you have de-identified the data as much as possible for the purpose you are going to use it for. The concern for our company is that, without some elements of real-world data, we are not going to be able to eliminate the bias that we see in the system. We are not going to be able to personalise medicine, and we are not going to be able to get our products approved, because our regulating bodies are now mandating that the technology we use is tested across the different attributes that are relevant for that technology.

As an example, there are very few pieces of data that we need for our digital surgery business, but we might need gender, weight and age. The Bill will allow customisation to say, “Okay, what are you going to do to make sure that only two or three data scientists see that data? How are you going to house it in a secure, separate environment? How are you going to make sure that you have security controls around that?” I think the Bill allows that flexibility to try to create personalised medicine, but I do not believe that it opens up a new area of risk of re-identification, provided that the GDPR safeguards remain.
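
The safeguards listed above (data minimisation and narrow access control) amount in practice to stripping a record down to the fields a stated purpose needs and gating who can read even that. A hedged sketch of the idea; the field names and the analyst allow-list are invented, and a real deployment would sit behind proper identity and access management:

```python
import pandas as pd

# A full patient record as it might arrive from a clinical system.
record = pd.DataFrame({
    "patient_id": [101, 102],
    "name":       ["A. Smith", "B. Jones"],
    "postcode":   ["M5 4WT", "NE1 7RU"],
    "gender":     ["F", "M"],
    "weight_kg":  [68.0, 82.5],
    "age":        [44, 57],
})

# Data minimisation: the digital-surgery use case needs only
# gender, weight and age, so every other column is dropped.
MINIMISED_FIELDS = ["gender", "weight_kg", "age"]

# Access control: only named data scientists may see even the
# minimised extract (a simple allow-list standing in for real IAM).
AUTHORISED = {"analyst_a", "analyst_b"}

def minimised_view(df: pd.DataFrame, requester: str) -> pd.DataFrame:
    """Return only the fields the stated purpose needs, and only
    to a requester on the allow-list."""
    if requester not in AUTHORISED:
        raise PermissionError(f"{requester} is not cleared for this data")
    return df[MINIMISED_FIELDS].copy()

print(minimised_view(record, "analyst_a"))
```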

Chi Onwurah

Q Let me ask a follow-up question. I recognise that your intent in research is ethical—there are ethics committees involved. Given the definition of scientific research as anything that can reasonably be described as scientific, what is to stop data being shared for the purposes of, for example, justifying anti-covid vaccination conspiracy theories? Do you recognise that there are purposes that could be described as research but which many people would not want their data to be used for?

Tom Schumacher: In isolation, that would be a risk, but in the full context of the interrelationship between the data owner and controller and the manufacturer, there would be a process by which you would define the legitimate use to which the data will be put, and that would be documented and recorded on your system. I do not believe that using data for political purposes would constitute research in the way that it is contemplated in this Bill. Certainly the UK ICO is well regarded for providing useful interpretive guidance. I think that that office would be able to issue appropriate guardrails to limit those sorts of abuses.

Jonathan Sellors: A scientific hypothesis might not be one that you like, but it is much better to have it out there in the public domain, where the data that underpins the research can be evaluated by everybody else to show whether it is sound and whether the research is being conducted appropriately.

Chi Onwurah

Q Yes, but people might not want their data to be used for that. They would have no control over it in this case.

Jonathan Sellors: There has to be some element of scientific flexibility, and scientists themselves have to be able to make a decision about what they wish to investigate. The main thing to ensure is that the work is transparent—in other words, that somebody else can see what they have done and the way in which they have done it—so that if it does come up with a conclusion that is fundamentally flawed, that conclusion can be properly challenged.

The Chair

If there are no further questions, may I thank you both very much indeed for your time this afternoon and for giving us your evidence? It is hugely appreciated. We now move on to the sixth panel.

Examination of Witnesses

Harry Weber-Brown and Phillip Mind gave evidence.

--- Later in debate ---
Damian Collins

I think the Estonian digital ID model works in a very similar way.

Chi Onwurah

Q You have both spoken very passionately, if I may say so, about the importance of citizens being in control of their data, particularly with open banking. We all take very seriously our financial data and the importance of trust and empowerment in these services. Can you say how the Bill will improve trust and control for citizens, or how it should do so?

Harry Weber-Brown: Part 2 of the Bill sets out the trust framework, which was being developed by the then Department for Digital, Culture, Media and Sport and which now comes under the Department for Science, Innovation and Technology. It will give certainty to the marketplace that any firm that wishes to store data—what is commonly known as an identity provider—will have to go through a certification regime. It will have to be certified against a register, which means that as a consumer I will know that I can trust that organisation because it will be following the trust framework and the policies that sit within it. That is critical.

Similarly, if we are setting up schemes with smart data we will need to make sure that the consumer is protected. That will come through in secondary legislation and the devil will be in the detail of the policies underpinning it, in a similar way to open banking and the pensions dashboard.

Further to the previous session, the other thing I would say is that we are talking on behalf of financial services, but parts 2 and 3 of the Bill also refer to other sectors: they apply equally to health, education and so on. If as a consumer I want to take more control of my data, I will want to be able to use it across multiple services and get a much more holistic view not just of my finances, but of my health information and so on.

One area that is particularly developing at the moment is the concept of self-sovereign identity, which enables me as a consumer to control my identity and take the identity provider out of the equation. I do not want to get too technical, but it involves storing my information on a blockchain and sharing my data credentials only when I need to do so—obviously it follows data minimisation. There are evolving schemes that we need to ensure the Bill caters for.
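
Self-sovereign identity designs vary, but the core mechanics described here, holding your own issuer-signed credentials and disclosing only the claims a verifier needs, can be sketched as a toy model. Everything below is illustrative: real schemes use verifiable credentials and decentralised identifiers, often anchored to a ledger, and rely on selective-disclosure signatures (BBS+, SD-JWT and the like) rather than the whole-payload HMAC used here:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # stands in for the issuer's signing key

def issue_credential(claims: dict) -> dict:
    """The issuer signs a set of claims and hands them to the holder."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

# The holder stores the credential themselves: no identity provider
# sits in the middle of each later transaction.
credential = issue_credential({"name": "A. Smith", "over_18": True})

def present(credential: dict, requested: list[str]) -> dict:
    """Selective disclosure: reveal only the claims the verifier asked for.
    (Verifying a *partial* disclosure against a whole-payload signature is
    the part real schemes solve with selective-disclosure signatures.)"""
    return {k: credential["claims"][k] for k in requested}

# Opening an age-restricted account needs only the over-18 claim,
# not the holder's name -- data minimisation in action.
print(present(credential, ["over_18"]))
```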

Chi Onwurah

Q Thank you very much for those points.

You mentioned data verification services. Briefly, can you help the Committee to understand who would be providing those services and who would be paying for them? You gave the example of tethering my property or other ownership. Who would be paying in that case? Would I be paying for the rest of my life to keep that data where it is? How do you see it working?

Phillip Mind: Who will provide the services? There is already a growing list of verified providers. There is a current market in one-off digital identity services, and I think many of those providers would step into the reusable digital identity market.

What is the commercial model? That is a really good question, and frankly at this point I do not have an answer. That will evolve, but within the frameworks that are set up—trust schemes, in the jargon—there will be those who provide digital identity services and those organisations that consume them, which could be retailers, financial services providers or banks. It is likely that the relying parties, the consumers, would pay the providers.

Harry Weber-Brown: But not the individual consumers. If you wanted to open a bank account, and the bank was relying on identity measures provided by fintech, the bank would pay the fintech to undertake those services.

The Chair

We have time for a very quick question from Rupa Huq, with very quick answers.

--- Later in debate ---
Damian Collins

Q I want to follow on from the Minister’s questions. Looking at other legislation that is going through Parliament, particularly the anti-fraud provisions in the Online Safety Bill, one of the important areas is the extent to which regulators should expect companies to have good upstream solutions in place to combat fraud. Rather than chasing every example that they come across, they need things that block it in the first place. Do you see the provisions in this Bill as being helpful? Would you expect regulators to act on that and to direct companies to use systems that are known to be safe?

Keith Rosser: Absolutely. I will give a quick example relating to the Online Safety Bill and hiring, which I am talking about. If you look at people getting work online by applying through job boards or platforms, that is an uncertified, unregulated space. Ofcom recently did research, ahead of the Online Safety Bill, that found that 30% of UK adults have experienced employment scams when applying for work online, which has a major impact on access to and participation in the labour market, for many reasons.

Turning the question the other way around, we can also use that example to show that where we do have uncertified spaces, the risks are huge, and we are seeing the evidence of that. Specifically, yes, I would expect the governance body or the certification regime, or both, to really put a requirement on DVSs to do all the things you said—to have better upstream processes and better technology.

Also, I think there is a big missing space, given that we have been live with this in hiring for eight months, to provide better information to the public. At the moment, if I am a member of the public applying for a job and I need to use my digital identity, there is no information for me to look at, unless the employer—the end user—is providing me with something up front. Many do not, so I go through this process without any information about what I am doing. It is a real missed opportunity so far, but now we can right that to make sure that DVSs are providing at least basic information to the public about what to do, what not to do, what questions to ask and where to get help.

Chi Onwurah

Q Thank you very much for your evidence so far; it is informative about the use of digital ID in recruitment. You said earlier that digital ID helps to separate hiring from geography, which implied that it does not reference the location or home address of the person being identified. What, then, does the digital ID actually identify? Part of the reason behind that question is this: is it simply providing identification, or could it also be used as part of the triage process? Could that be done algorithmically, with some of the dangers that we see in algorithmic, automated decision making?

Keith Rosser: Those are several really good questions. I will use an example about location from the other perspective, first of all. At the moment, Home Office policy has not caught up with digital identity, and we are addressing that. There is a real opportunity to right that. It means that one in five work seekers right now cannot use digital identity to get a job, because they do not have an in-date British or Irish passport. If you have a visa or an in-date British or Irish passport, that is fine, but if you are among the one in five people in the country who do not have an in-date passport, you cannot. Those people have to visit the premises of the employer face to face to show their documents, or post their original documents across the UK.

This has really created a second-class work seeker. There are real dangers here, such as an employer deciding to choose person one because they can be hired a week faster than person two. There is a real issue with this location problem. Digital identity could sever that tie to location and allow people more opportunities to work remotely across the UK.

There were really good questions about other information. The Bill has a provision for other data sharing. Again, there is the potential and the opportunity here to make UK hiring the fastest globally by linking other datasets such as HMRC payroll data. Rather than looking at a CV and wondering whether the person really worked in those places, the HMRC data could just confirm that they were employed by those companies.
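
In data terms, the payroll-linking idea described above reduces to a set-membership check of the employers claimed on a CV against verified payroll records. A hypothetical sketch; the records and their shape are invented, and no real HMRC interface is implied:

```python
# Employers the candidate claims on their CV.
claimed_employers = {"Acme Ltd", "Globex plc", "Initech UK"}

# What a (hypothetical) payroll-data lookup for this person might
# return: employers who actually ran PAYE payroll for them, with dates.
payroll_records = {
    "Acme Ltd":   ("2015-03", "2018-06"),
    "Globex plc": ("2018-07", "2023-01"),
}

confirmed = claimed_employers & payroll_records.keys()
unverified = claimed_employers - payroll_records.keys()

print("Confirmed by payroll data:", sorted(confirmed))
print("Not found, needs manual checking:", sorted(unverified))
```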

There is a real opportunity to speed up verification but, as I want to acknowledge and as you have referred to, there is certainly also a risk. Part of our mission is to make UK hiring fairer, not just faster and safer. I want to caution against moving to a degree of AI, algorithm-based hiring in which someone is never actually in front of a human, whether by Teams video or in person, and a robot is basically assessing their suitability for a job. Those risks exist, and we would have them anyway without this Bill. It is really important, as we go forward, that we build in provisions somewhere to ensure that hiring remains, in some respects, a human-on-human activity, not a completely AI-based process.

The Chair

Mr Rosser, thank you very much indeed for your evidence this afternoon. We are grateful for your time, sir.

Examination of Witnesses

Helen Hitching and Aimee Reed gave evidence.

--- Later in debate ---
Mark Eastwood

Q So it would be an advantage for the Government to look into including that.

Aimee Reed: It certainly would. It is not that we cannot do that now; I just think the guidance could be clearer. It would put it into sharper relief if we could release that burden from policing to the CPS and the CPS felt confident that that was within the rules.

Helen Hitching: The agency agrees with that—there would be the same impact.

Chi Onwurah

Q I think you implied that there was data that you would like to have access to but currently do not have access to. Can you elaborate on what data you do not have access to in terms of data sharing and the barriers? What would be helpful for investigations?

Aimee Reed: It is not so much about specific datasets; it is about synchronisation and the speed with which you can exchange data that enables you to make better decisions. Because the Data Protection Act is split into three parts, and law enforcement quite rightly has a section all of its own, you cannot utilise data analytics across each of the parts. Does that make sense? If we wanted to do something with Driver and Vehicle Licensing Agency data and automatic number plate recognition data, we could not join together those two large datasets to enable mass analysis, because there would be privacy rights considerations. If we want to search datasets from other parts of that Act, we have to do that in quite a convoluted administrative way that perhaps we can share within law enforcement. It is more about the speed of exchange.

Chi Onwurah

Q Is it about the speed of exchange with other Government agencies or with local government agencies?

Aimee Reed: It is more with our local partners. I am sure that our partners would say they are equally frustrated by the speed at which they can get data from the police in large datasets to enable them to make better decisions in their local authorities. That is just how that Act was constructed, and it will remain so. The recent ICO guidance on sharing has made that simpler, but this realm of the Bill will not make that synchronisation available to us.

Chi Onwurah

Q Do you think it should be available to you? Are there reasons why it is not available to you?

Aimee Reed: It is about getting right the balance between what we do with people’s personal data and how the public would perceive the use of that data. If we just had a huge pot where we put everybody’s data, there would be real concerns about that. I am not suggesting for a second that the police want a huge pot of everybody’s data, but that is where you have to get the balance right between knowing what you have and sharing it for the right purpose and for the reason you collected it in the first place.

Chi Onwurah

Q Just to follow up on the questions about the different types of regulation, do you feel that the balance has been struck appropriately when it comes to biometric data, particularly for facial recognition, for example?

Helen Hitching: Sorry—could you repeat that?

Chi Onwurah

Has the balance between sharing and the regulation of biometric data, particularly facial recognition data, been struck in the right way?

Helen Hitching: I do not think facial recognition data is captured.

Aimee Reed: On facial recognition, given that we have deployed it—very high profile—I think that the balance is right. We have learned a lot from the South Wales judgment and from our own technical deployments. The Bill will also highlight how other biometric data should be managed, creating parity and an environment where biometric data that we do not yet have access to or use of is future-proofed in the legislation. That is really welcome.

Rebecca Long Bailey

Q Helen, you mentioned that you are broadly supportive of the abolition of the Biometrics Commissioner and the Surveillance Camera Commissioner, and that that abolition will not reduce the existing level of oversight. Now seems to be the time to request additional resources if you do not feel that the new commissioners would be adequately resourced, so do you have confidence that the Investigatory Powers Commissioner has sufficient resources and expertise to take on the functions it will inherit? Similarly, does the Information Commissioner have sufficient resources and expertise to oversee regulation in this area?

Helen Hitching: It is difficult for the agency to comment on another organisation’s resources and capabilities. That question should probably be posed directly to them. The Information Commissioner’s Office already deploys resources on issues related to law enforcement data processing, including the publication of guidance. From a biometrics perspective, the casework is moving to the IPC, so from a resourcing perspective I think it would have adequate casework provision and expertise.

Aimee Reed: I echo the comments about expertise, particularly regarding the Investigatory Powers Commissioner. I think that the expertise exists but, like Helen, I presume that whether it has enough resources to cope with the casework is a demand assessment that it will carry out in response to the Bill.

--- Later in debate ---
Damian Collins

Q The issue is not different rates of pay per task, but the amount of paid work that someone might get within a period.

Mary Towers: Yes. Drivers are a good example. People drive a certain distance to pick people up or deliver items. Even when the driving time is exactly the same, people may be paid different rates, because the algorithm will have worked out how long certain groups of people are likely to wait before they accept a gig, for example. I emphasise that, in our view, those sorts of issues are not restricted to the gig economy; they spread way beyond it, into what one might consider to be the far more traditional professions. That is where our red lines are. They relate to transparency, explainability, non-discrimination and, critically, worker and union involvement at each stage of the AI value chain, including in the development of that type of app—you mentioned development. Unless the worker voice is heard at development stage, the likelihood is that worker concerns, needs and interests will not be met by the technology. It is a vital principle to us that there be involvement of workers and unions at each stage of the AI value chain—in development, application and use.

Chi Onwurah

Q Welcome to both of you. Apologies for my misuse of my own technology earlier.

The Minister talked about the need for growth, which has been sadly lacking in our economy for the last 13 years. Obviously, technology can make huge improvements to productivity for those in the workforce. Mr Pakes, as someone whose members are involved in technology, scientific and IT organisations, I wonder whether you would agree with this, which comes from my experience in the diffusion of technology. Is it possible to get the best from technology in an organisation or company without the people who will be using it, or the people on whom it will be used, being an active part of that diffusion of technology, and understanding and participating in its use?

Andrew Pakes: Absolutely. That has always been how productivity has improved or, in effect, changed the shop floor. If you are asking, “What problems are you using technology to solve?”, that may well be a question better asked by the people delivering the product or service than by the vendor selling the software, whether that is old or new technology. I encourage the Committee to look at the strong evidence among our competitors that rate higher than the UK in terms of productivity and innovation, where higher levels of automation in the economy are matched by higher levels of worker participation. Unions are the most common form, but it can often be works councils or, in small businesses, co-design and collaboration. That is the social partnership model: the doers, who identify and solve problems, are the people who make the technology work.

We have good examples. We represent members in the nuclear sector who are involved in fusion, small modular reactors or other technology, where the employer-union relationship is critical to the UK’s intellectual property and the drive to make those successful industries. In the motor industry and other places where the UK has been successful, we can see that that sense of social partnership has been there. We have examples around using AI or the monitoring of conversations or voices. Again, I mentioned GPS tracking, but in safety-critical environments, where our members want to be kept safe, they know that technology can help them. Having that conversation between the workforce and the employer can come up with a solution that is not only good for our members, because they stay safe and understand what the safety regime is, but good for the employer, because days are not lost through illness or accidents. For me, that sense of using legislation like this to underpin good work conversations in the data setting is what the mission of this Bill should be about.

Chi Onwurah

Q In terms of data sharing, should there be provisions in the Bill to ensure that workers can give free and informed consent to the sharing of their data, or will the asymmetry of the relationship in the employment contract make that challenging?

Andrew Pakes: We think there should be a higher bar, because of the contractual nature. Whether it is self-employed workers contracting for a piece of work or an employment relationship, there is, in our view, a fundamental difference between my individual choice to go online and enter my data into a shop, because I want to be kept apprised of when the latest product is coming out—it is my free choice to do that—and my being able to consent, in an employment relationship, to how my data is used. As Mary said, the foundation stone has to be transparency of information in the first place. Beyond that, there should be negotiation to understand how that data is used.

The critical point for us is that most companies in the UK are not of a size where they will be developing their own AI products—very few will be; we can probably name a couple of them. Most companies using automated decisions or AI will be purchasing that from a global marketplace. We hope many of them will be within certain settings, but we know that the leaders in this tend to be the Chinese market and the US market, where they have different standards and a range of other things. Ensuring that we have UK legislation that protects that level of consent and that redresses that power balance between workers and employers is a critical foundation to ensuring that we get this right at an enterprise level.

Chi Onwurah

Q Have you identified any provisions to achieve that in the Bill as it stands?

Andrew Pakes: We would like to see more. We are worried that the current legislation, because of the changes to things such as DPIAs, drops that level of protection, which means that the UK could end up trading on a lower standard than other countries.

Mary Towers: We are also concerned about the change to the test for international data transfers, which might make the requirements less restrictive. There is a change from adequacy to a more risk-based assessment process in terms of international data transfers. Again, we have very similar concerns to Andrew about the use of technologies rooted in international companies and the inevitable international transfers of data, and workers essentially losing control over and knowledge of what is happening with their data beyond the workplace.

In addition, I would also like to make a point about the importance of transparency of source code, and the importance of ensuring that international trade deals do not restrict that transparency, meaning that workers cannot access information about source code once data and AI-powered tools are rooted in other countries.

Mark Eastwood

Q I would like to declare, again, that I am a member of Prospect, and therefore I have a bit of skin in the game on this one. You mentioned GPS and surveillance technology. Very quickly, could you give me an idea of the current scale of that? Are the majority of employers going down this route? If this Bill is pushed through, could you give me an idea of how usage could increase or decrease, depending on how you see the outcome of the Bill?

Mary Towers: I will give my statistics very quickly. Our polling revealed that approximately 60% of workers perceived that some form of monitoring was taking place in their workplace. The CEO of IBM told Bloomberg last week that 30% of non-customer facing roles, including HR functions, could be replaced by AI and automation in the next five years.

A recent report from the European Commission’s Joint Research Centre—the “Science for Policy” report on the platformatisation of work—found that 20% of German people and 35% of Spanish people are subject to algorithmic management systems at the moment. Although that is obviously not UK-based, it gives you a very recent insight on the extent of algorithmic management across Europe.

Andrew Pakes: And that matches our data. Around a third of our members say that they are subject to some form of digital monitoring or tracking. That has grown, particularly with the rise of hybrid and flexible working, which we are in favour of. This is a problem we wish to solve, rather than something to stop, in terms of getting it right.

Over the past two years, we have increasingly seen people being performance managed or disciplined based on data collected from them, whether that is from checking in and out of buildings, their use of emails, or not being in the right place based on tracking software. None of the balances we want should restrict the legitimate right of managers to manage, but there needs to be a balance within that. We know that using this software incorrectly can micromanage people in a way that is bad for their wellbeing.

The big international example, which I will give very quickly, is a product like Microsoft's Office 365—a global product that employers will buy. My work computer has Office 365 on it. Employers get it on day one. The trouble with these big products is that, over time, they add new products and services. There was an example where Microsoft brought in a productivity score, which could tell managers how productive and busy their teams were. They rowed back on that, but we know that with these big, global software products—this is the point of DPIAs—it is not just a matter of consultation on day one.

The importance of DPIAs is that they stipulate that there must be regular reviews, because we know that the power of this technology transforms quickly. The danger is that we make life miserable for people who are good, productive workers and cause more problems for employers. It would be better for all of us to solve it through good legislation than to arm up the lawyers and solve it through the courts.

--- Later in debate ---
Damian Collins

Q Effectively, you would not need to consider whether the use of that technology in that case was disproportionate to the risk.

Alex Lawrence-Archer: Yes.

Chi Onwurah

Q We heard from some witnesses today that greater ease of access to data will increase competition for those such as Google and Meta that have large amounts of data as it is. What do you think the impact of this Bill will be for big tech?

Alex Lawrence-Archer: I think the Bill is quite big tech-friendly, and the way that it deals with research is well illustrative of that. One of the objectives of the Bill is obviously to boost the use of personal data for academic research, which is a really laudable objective. However, the main change—in fact the only change I can think of off the top of my head—that it makes is to broaden the definition of academic research. That helps people who already have lots of personal data they might do research with; it does not help you if you do not have personal data. That is one of the major barriers for academics at the moment: they cannot get access to the data they need.

The Bill does nothing to incentivise or compel data controllers such as online platforms to actually share data and get it moving around the system for the purposes of academic research. This is in stark contrast to the approach being taken elsewhere. It is an issue the EU is starting to grapple with in a particular domain of research with article 40 of the Digital Services Act. There is a sense that we are falling behind a little bit on that key barrier to academic research with personal data.

Chi Onwurah

Q We also heard that existing cookie management and subject access requests and so on represent a real burden, particularly for smaller companies. Do you recognise that? Do you know why there is less support in technology to help small businesses deal with, if you like, the data management challenges? How is that to be traded off against the privacy rights of individuals?

Alex Lawrence-Archer: I certainly recognise that the requirements of GDPR place compliance burdens on businesses of all sizes. I am sceptical that the right balance is being struck between ameliorating those burdens and the costs and challenges that ordinary people will face—in terms of knowing how they are being profiled and tracked by companies, and resolving things when they have gone wrong. I am sceptical as well that there will be major benefits for the many businesses that will continue to need to do business in Europe; for that reason, they will need either to maintain dual compliance or simply to continue to comply with the EU GDPR. You can see this benefiting the largest companies, which can start to segment their users. We have already seen that with Meta, which moved its users on to US controllership, for example. I would see it as more beneficial to those large companies, which can navigate that, than to, say, SMEs.

The Chair

Mr Lawrence-Archer, thank you very much for your time this afternoon.

That brings us to the end of our 11th panel. As an impartial participant in these proceedings—we have had over four and a half hours of evidence from 23 witnesses—I would say it has been an absolute masterclass in all the most topical issues in data protection and digital information. Members might not realise it, but that is what we have had today.

Ordered, That further consideration be now adjourned. —(Steve Double.)