Lords Chamber
My Lords, I thank my noble friend Lady Kidron and the noble Viscount, Lord Camrose, for adding their signatures to my Amendment 14. I withdrew this amendment in Committee, but I am now asking the Minister to consider once again the definition of “scientific research” in the Bill. If he cannot satisfy me in his speech this evening, I will seek the opinion of the House.
I have been worried about the safeguards for defining scientific research since the Bill was published. This amendment would require that such research be in “the public interest”, which I am sure most noble Lords will agree is a laudable aim and an important safeguard. The amendment should be seen in the context of the Government’s recent announcements on turning this country into an AI superpower. I am very much a supporter of this endeavour, but across the country there are many people who are worried about the need for safeguards for their data. They fear that its safety is threatened by the explosion of AI and its inexorable development by the big tech companies. This amendment would go some way towards building public trust in the AI revolution.
The vision of Donald Trump surrounded at his inauguration yesterday by tech billionaires, most of whom have until recently been Democrats, puts the fear of God into me. I fear their companies are coming for our data. We have some of the best data in the world, and it needs to be safeguarded. The AI companies are spending billions of dollars developing their foundation models, and they are beholden to their shareholders to minimise the cost of developing these models.
Clause 67 gives a huge fillip to the scientific research community. It exempts research falling within the Bill’s definition of scientific research from the requirement to gain fresh consent from data subjects before reusing millions of data points.
It costs the tech companies time and money to obtain renewed consent from data holders before reusing their data. This is an issue we will discuss further when we debate amendments on scraping data from creatives without copyright licensing. It is clear from our debates in Committee that many noble Lords fear that AI companies will do what they can to avoid either obtaining consent for, or licensing, the data they scrape. Defining their research as scientific will allow them to escape these constraints. I could not be a greater supporter of the wonderful scientific research that is carried out in this country, but I want the Bill to ensure that it really is scientific research and not AI development camouflaged as scientific research.
The line between product development and scientific research is often blurred. Many developers present efforts to increase model capabilities or efficiency, or indeed the study of their risks, as scientific research. A balance has to be struck between allowing this country to become an AI superpower and exploiting its data subjects. I contend that this amendment would go far to allay public fears about the use and abuse of their data to further the profits and goals of huge AI companies, most of which are based in the United States.
Noble Lords have only to look at the outrage last year at Meta’s use of Instagram users’ data without their consent to train the datasets for its new Llama AI model to understand the levels of concern. There were complaints to regulators, and the ICO posted that Meta
“responded to our request to pause and review plans to use Facebook and Instagram user data to train generative AI”.
However, so far, there has been no official change to Meta’s privacy policy that would legally bind it to stop processing data without consent for the development of its AI technologies, and the ICO has not issued a binding order to stop Meta’s plans to scrape users’ data to train its AI systems. Meanwhile, Meta has resumed reusing subjects’ data without their consent.
I thank the Minister for meeting me and talking through Amendment 14. I understand his concern that a public interest threshold in the definition of scientific research would create a heavy burden on researchers, but I think that is a risk worth taking in the name of safety. Some noble Lords are concerned about the difficulty of defining “public interest”. However, the ICO has very clear guidelines about what the public interest consists of. It states that
“you should broadly interpret public interest in the research context to include any clear and positive public benefit likely to arise from that research”.
It continues:
“The public interest covers a wide range of values and principles about the public good, or what is in society’s best interests. In making the case that your research is in the public interest, it is not enough to point to your own private interests”.
The guidance even includes further examples of research in the public interest, such as
“the advancement of academic knowledge in a given field … the preservation of art, culture and knowledge for the enrichment of society … or … the provision of more efficient or more effective products and services for the public”.
This guidance is already being applied in the Bill to sensitive data and public health data. I contend that if these carefully thought-through guidelines are good enough for health data, they should be good enough for all scientific data.
This view is supported in the EU, where
“the special data protection regime for scientific research is understood to apply where … the research is carried out with the aim of growing society’s collective knowledge and wellbeing, as opposed to serving primarily one or several private interests.”
The Minister will tell the House that the data exempted to be used for scientific research is well protected—that it has both the lawfulness test, as set out in the UK GDPR, and a reasonableness test. I am concerned that the reasonableness test in this Bill references
“processing for the purposes of any research that can reasonably be described as scientific, whether publicly or privately funded and whether carried out as a commercial or non-commercial activity”.
Normally, a reasonableness test requires an expert in the context of that research to decide whether it is reasonable to consider it scientific. In this Bill, however, “reasonable” means only that an ordinary person in the street could decide that the research might reasonably be described as scientific. This must broaden the threshold of the definition.
It seems “reasonable” in the current climate to ask the Government to include a public interest test before giving the AI companies extensive scope to reuse our data, without renewed consent, on the pretext that the work is for scientific research. In the light of possible deregulation of the sector by the new regime in America, it is incumbent on this country to ensure that our scientific research is dynamic but safe. If the Government can give this reassurance, they will increase trust in Britain’s AI revolution for millions of people in this country. I beg to move.
My Lords, I support my noble friend Lord Colville. He has made an excellent argument, and I ask noble Lords on the Government Benches to think about it very carefully. If it is good enough for health data, it is good enough for the rest of science. In the interest of time, I will give an example of one of the issues, rather than repeat the excellent argument made by my noble friend.
In Committee, I asked the Government three times whether the cover of scientific research could be used, for example, to market-test ways to hack human responses to dopamine in order to keep children online. In the Minister’s letter, written during Committee, she could not say that the A/B testing of millions of children to make services more sticky—that is, more addictive—would not be considered scientific, but rather that the regulator, the ICO, could decide on a case-by-case basis. That is not good enough.
There is no greater argument for my noble friend Lord Colville’s amendment than the fact that the Government are unable to say whether hacking children’s attention for commercial gain is scientific or not. We will come to children and child protection in the Bill in the next group, but it is alarming that the Government feel able to put in writing that this is an open question. That is not what Labour believed in opposition, and it is beyond disappointing that, now in government, Labour has forgotten what it then believed. I will be following my noble friend through the Lobby.
Grand Committee
If “indispensable” and purely “benefit” are the same, why was the change made on Report in the Commons?
I was really interested in the introduction of the word “unknown”. The noble Lord, Lord Lansley, set out all the different stages and interactions. Does it not incentivise the companies to hold information back until this very last stage, at which point the whole need-for-speed issue comes into play?
Lords Chamber
My Lords, I had the great privilege of serving as a member of this House’s Fraud Act 2006 and Digital Fraud Committee under the excellent chairing of the noble Baroness, Lady Morgan. She has already told us of the ghastly effects that fraud has on individuals, and indeed its adverse effects on businesses. We heard really dramatic statistics, such as Action Fraud telling us that 80% of fraud is cyber-enabled.
Many of us here will have been victims of fraud—I have been a victim—or know people who have been. I was therefore very pleased when the Government introduced the fraudulent advertising provisions into the Bill, which will go some way to reducing the prevalence of online fraud. It seems to me that fraudulent advertising requires special attention, which is what these amendments would provide.
We heard in our inquiry about the problems that category 1 companies had in taking down fraudulent advertisements quickly. Philip Milton, the public policy manager at Meta, told us that it takes between 24 and 48 hours to review possibly harmful content after it has been flagged to the company. He recognised that, due to the deceptive nature of fraudulent advertising, Meta’s systems do not always recognise that advertising is fraudulent and, therefore, take-down rates would be variable. That is one of the most sophisticated tech platforms—if it has difficulties, just imagine the difficulty that other companies have in both recognising and taking down fraudulent advertising.
Again and again, the Bill recognises the difficulties that platforms have in systematising the protections it provides. Fraud has an ever-changing nature and is massively increasing—particularly so for fraudulent advertising. It is absolutely essential that the highest possible transparency requirements are placed upon the tech companies to report their response to fraudulent advertising. Both Ofcom and users need to be assured not only that the companies have the most effective reporting systems but, just as importantly, that they provide the transparency needed to check how well they are performing.
To do this, the obligations on platforms must go beyond the transparency reporting requirements in the Bill. These amendments would ensure that they include obligations to provide information on the incidence of fraudulent advertising, in line with other types of priority illegal content. These increased obligations are part of checking the effectiveness of the Bill once it is implemented.
The noble Baroness, Lady Stowell, told us on the fifth day of Committee, when talking about the risk-assessment amendments she had tabled:
“They are about ensuring transparency to give all users confidence”.—[Official Report, 9/5/23; col. 1755.]
Across the Bill, noble Lords have repeatedly stated that there needs to be a range of ways to judge how effectively the protections provided are working. I suggest to noble Lords that these amendments are important attempts to help make the Bill more accountable and provide the data to future-proof the harms it is trying to deal with. As we said in the committee report:
“Without sufficient futureproofing, technology will most likely continue to create new opportunities for fraudsters to target victims”.
I ask the Minister to at least look at some of these amendments favourably.
My Lords, I shall say very briefly in support of these amendments that in 2017, the 5Rights Foundation, of which I am the chair, published the Digital Childhood report, which in a way was the thing that put the organisation on the map. The report looked at the evolving capacity of children through childhood, what technology they were using, what happened to them and what the impact was. We are about to release the report again, in an updated version, and one of the things that is most striking is the introduction of fraud into children’s lives. At the point at which they are evolving into autonomous people, when they want to buy presents for their friends and parents on their own, they are experiencing what the noble Baroness, Lady Morgan, expressed as embarrassment, loss of trust and a sense of deserting confidence—I think that is probably the phrase. So I just want to put on the record that this is a problem for children also.
Lords Chamber
I support Amendment 44. I am pleased that, as part of the new triple shield, the Government have introduced Clause 12 on “User empowerment duties”, which allow users to protect themselves, not just from abusive posts from other users but from whole areas of content. In the Communications and Digital Committee’s inquiry, we had plenty of evidence from organisations representing minorities and people with special characteristics who are unable adequately to protect themselves from the hate they receive online. I am glad that subsections (10) to (12) recognise specific content and users with special characteristics who are targets of abuse and need to be able to protect themselves, but subsection (3) requires only that these features should be
“designed to effectively … reduce the likelihood of the user encountering content”
they want to avoid. I am concerned that “effectively” will be interpreted subjectively by platforms in scope and that each will interpret it differently.
At the moment, it will not be possible for Ofcom to assess how thoroughly the platforms have been providing these protective user empowerment tools. If the features are to work, there must be an overview of how effective they are and how well they are working. When the former Secretary of State, Michelle Donelan, was asked about this, she said that there was nothing in this clause to pin an assessment on. It seems to me that the lists in Clause 12 create plenty of criteria on which to hang an assessment.
The new duties in Clause 12 provide for control tools for users against very specific content that is abusive or incites hatred on the basis of race, ethnicity, religion, disability, sex, gender reassignment or sexual orientation. However, this list is not exhaustive. There will inevitably be areas of content for which users have not been given blocking tools, including pornography, violent material and other material that is subject to control in the offline world.
Not only will the present list of such tools need to be assessed for its thoroughness in allowing users to protect themselves from specific harms, but surely the types of harm from which they need protection will change over time. Ofcom will need regularly to assess where these harms are and make sure that service providers regularly update their content-blocking tools. Without such an assessment, it will be hard for Ofcom and civil society to identify emerging concerns with the tools.
The amendment would provide a transparency obligation, which would demand that service providers inform users of the risks present on the platform. Surely this is crucial when users are deciding what to protect themselves from.
The assessment should also look for unintended restrictions on freedom of expression created by the new tools. If the tools are overprotective, they could create a bubble and limit users’ access to information that they might find useful. For example, a user might want to block material about eating disorders, but the algorithm might interpret that as a reason to limit their access to content on healthy lifestyles or nutrition. We are also told that the algorithms do not understand irony and humour. When the filters are used to stop content that is abusive or incites hatred on the basis of users’ particular characteristics, they might also remove artistic, humorous or satirical content.
Repeatedly, we are told that the internet creates echo chambers, where users read only like-minded opinions. These bubbles can create an atmosphere in which freedom of expression is severely limited and democracy suffers. A freedom of expression element to the assessment would also, in these circumstances, be critical. We are told that the tech platforms often do not know what their algorithms do and that, not surprisingly, the algorithms often evolve beyond their original intentions. The tools demanded by Clause 12 therefore need to be carefully assessed, both to ensure that they keep up to date with the trends of abuse on the internet and to catch any unintended consequences they might create in curbing freedom of expression.
Throughout the Bill, there is a balancing act between freedom of expression and protection from abuse. The user empowerment tools are potentially very powerful, and neither the service providers, the regulators nor the Government know what their effects will be. It is incumbent upon the Government to introduce an assessment to check regularly how the user empowerment duties are working; otherwise, how can they be updated, and how can Ofcom discover what content is being unintentionally controlled? I urge the Minister, in the name of common sense, to ensure that these powerful tools unleashed by the Bill are not misused and do not become outdated in a fast-changing digital world.
My Lords, I thank the noble Lord, Lord Moylan, for his words—I thought I was experiencing time travel there—and am sympathetic to many of the issues that he has raised, although I think that some of the other amendments in the group tackle those issues in a slightly different way.
I support Amendments 44 and 158 in the name of the right reverend Prelate the Bishop of Oxford. Requiring a post-rollout assessment to ensure that the triple shield acts as we are told it will seems to be a classic part of any regulatory regime that is fit for purpose: it needs to assess whether the system is indeed working. The triple shield is an entirely new concept, and none of the burgeoning regulatory systems around the world is taking this approach, so I hope that both the Government and Ofcom welcome this very targeted and important addition to the Bill.
I will also say a few words about Amendments 154 and 218. It seems to me that, in moving away from legal but harmful—which as a member of the pre-legislative committee I supported, under certain conditionality that has not been met, but none the less I did support it—not enough time and thought have been given to the implications of that. I do not understand, and would be grateful to the Minister if he could help me understand, how Ofcom is to determine whether a company has met its own terms and conditions—and by any means, not only by the means of a risk assessment.
I want to make a point that the noble Baroness, Lady Healy, made the other day—but I want to make it again. Taking legal but harmful out and having no assessment of whether a company has met its general safety duties leaves the child safety duties as an island. They used to be something that was added on to a general system of safety; now they are the first and only port of call. Again, because of the way that legal but harmful fell out of the Bill, I am not sure whether we have totally understood how the child risk assessments sit without a generally cleaned up or risk-assessed digital environment.
Finally, I will speak in support of Amendment 160, which would have Ofcom say what “adequate and appropriate” terms are. To a large degree, that is my approach to the problem that the noble Lord, Lord Moylan, spoke about: let Parliament and the regulator determine what we want to see—as was said on the data protection system, that is how it is—and let us have minimum standards that we can rightly expect, based on UK law, as the noble Lord suggested.
I am not against the triple shield per se, but it radically replaced an entire regime of assessment, enforcement and review. I think that some of the provisions in this group really demand the Government’s attention, in order to make sure that there are no gaping holes in the regime.