Grand Committee
I urge the noble Baroness to stay for the debate on the next group of amendments, in which we will talk about parliamentary accountability. I think she will find that the committee I am proposing is perhaps not quite as modest as she has just described it.
Lords Chamber
I ask the Committee to have a level of imagination here because I have been asked to read the speech of the noble Viscount, Lord Colville—
I do not know who advised the noble Baroness—and forgive me for getting up and getting all former Leader on her—but this is a practice that we seem to have adopted in the last couple of years and that I find very odd. It is perfectly proper for the noble Baroness to deploy the noble Viscount’s arguments, but to read his speech is completely in contravention of our guidance.
I beg the pardon of the Committee. I asked about it and was misinformed; I will do as the noble Baroness says.
The noble Viscount, Lord Colville, is unable to be with us. He put his name to Amendments 273, 275, 277 and 280. His concern is that the Bill sets the threshold for illegality too low and that in spite of the direction provided by Clause 170, the standards for determining illegality are too vague.
I will make a couple of points on that thought. Clause 170(6) directs that a provider must have
“reasonable grounds to infer that all elements necessary for the commission of the offence, including mental elements, are present or satisfied”,
but that does not mean that the platform has to be certain that the content is illegal before it takes it down. This is concerning when you take it in combination with what or who will make judgments on illegality.
If a human moderator makes the decision, it will depend on the resources and time available to them as to how much information they gather in order to make that judgment. Unlike in a court case, when a wide range of information and context can be gathered, when it comes to decisions about content online, these resources are very rarely available to human moderators, who have a vast amount of content to get through.
If an automated system makes the judgment, it is very well established that algorithms are not good at context—the Communications and Digital Committee took evidence on this repeatedly when I was on it. AI simply uses the information available in the content itself to make a decision, which can lead to significant missteps. Clause 170(3) provides the requirement for the decision-makers to judge whether there is a defence for the content. In the context of algorithms, it is very unclear how they will come to such a judgment from the content itself.
I understand that these are probing amendments, but I think the concern is that the vagueness of the definition will lead to too much content being taken down. This concern was supported by Parliament’s Joint Committee on Human Rights, which wrote to the former Culture Secretary, Nadine Dorries, on that matter. I apologise again.