Grand Committee
I shall speak very briefly, because the previous three speakers have covered the ground extremely well and made some extremely powerful arguments.
The noble Baroness, Lady Kidron, put her finger on it. The default position of departments such as the DfE, if they recognise there is a problem, is to issue guidance. Schools are drowning in guidance. If you talk to any headmaster or headmistress, or to the staff in charge of technology who are trying to keep on top of it, they are drowning in guidance. They are basically flying blind when being asked to take some quite major decisions, whether about purchasing, about the safeguards around usage, or about measuring the effectiveness of some of the educational technology skills that are being acquired.
There is a significant difference between guidance and a clear and concrete code. We were talking the other day, on another group, about the need to have guardrails, boundaries and clarity. We need clarity for schools and for the educational technology companies themselves to know precisely what they can and cannot do. We come back again to the issue of the necessity of measuring outcomes, not just processes and inputs, because they are constantly changing. It is very important for the companies themselves to have clear guardrails.
The research to which the noble Baroness, Lady Kidron, referred, which is being done by a variety of organisations, found problems in the areas that we are talking about in this country, the United States, Iceland, Denmark, Sweden, the Netherlands, Germany and France—and that is just scratching the surface. Things are moving very quickly and AI is accelerating that even more. With a code you are drawing a line in the sand and declaring very clearly what you expect and do not expect, what is permissible and not permissible. Guidance is simply not sufficient.
My Lords, I make a brief intervention. I am not against these amendments—they are very useful in the context of the Bill. However, I am reflecting on the fact that, when we drafted GDPR, we took a six-year process and failed in the course of doing so to really accommodate AI, which keeps popping up every so often in this Bill. Every part of every amendment seems to have a new subsection referring to automated decisions or to AI generally.
Obviously, we are moving on to have legislation in due course on AI and I am sure that a number of pieces of legislation, including no doubt this one, will be able to be used as part of our overall package when we deal with the regulation of AI. However, although it is true that the UK GDPR gives, in theory, a higher standard of protection for children, it is important to consider that, in the context of AI, the protections that we need will have to be much greater—we know that. But if there is going to be a code of practice for children and educational areas, we need also to consider vulnerable and disabled people and other categories of people who are equally entitled to have some help, particularly with regard to the AI elements. That is going to be very difficult. Most adults whom I know understand less about AI than children approaching the age of 18, who are much more knowledgeable. Those children are also more aware of the restrictions that will have to be put in place than are adults, who appear to be completely at sea, not even understanding what AI is about.
I make a precautionary point. While we have AI dotted all the way through this Bill, we should be very careful that, when we specify a particular element—in this case, for children—we remain aware of the need to have protection in place for other groups, particularly in the context of this Bill and, indeed, future legislation.
Grand Committee
My Lords, I was not going to rise at all for the moment because there are other amendments coming later that are of interest. I declare my rather unusual interest: I was one of the architects of the GDPR in Brussels.
I rise to support Amendment 211A in the name of my noble friend Lord Holmes because here we are referring to AI. I know that other remarks have now been passed on this matter, which we will come to later, but it seems to me—this has come straight into my mind—that, when the preparation of the data legislation and the GDPR was being undertaken, we really did fail at that stage to accommodate the vast and important areas that AI brings to the party, as it were. We will fail again, I suspect, if we are not careful, in this piece of legislation. AI is with us now and moving at an enormous pace—faster than any legislator can ever manage to keep up with in order to control it and to make sure that there are sufficient protections in place against both the misuse of this technology and the ways it may develop. So I support this amendment, particularly in relation to the trading or use of likenesses and the algorithmic effects that come about.
We will deal with that matter later, but I hope that the Minister will touch on this, particularly having heard the remarks of my noble friend Lord Holmes—and, indeed, the remarks of my noble friend Lady Harding a moment ago—because AI is missing. It was missing in the GDPR to a large extent. It is in the European Union’s new approach and its regulations on AI, but the EU has already shown that it has enormous difficulties in trying to offer, at one stage, control as well as redress and the proper involvement of human beings and individual citizens.
My Lords, I rise briefly to support my noble friend Lady Kidron on Amendment 137. The final comments from the noble and learned Lord, Lord Thomas, in our debate on the previous group were very apposite. We are dealing with a rapidly evolving and complex landscape, which AI is driving at warp speed. Given the panoply of different responsibilities and the level of detail that the different regulators are being asked to cover, it seems absolutely fundamental that there is, on the face of what they have to do with children, absolute clarity: a code of practice, a code of conduct, a description of the types of outcomes that will be acceptable and a description of the types of outcomes that will be not only unacceptable but illegal. The clearer that is in the Bill, the more it will do to future-proof the direction in which regulators will have to travel. If we are clear about what the outcomes need to be for the welfare, well-being and mental health of children, that will give us some guidelines to work within as the world evolves so quickly.