Automated and Electric Vehicles Bill (Third sitting) Debate
Oliver Letwin (Independent - West Dorset)
Debate with the Department for Transport
Public Bill Committees

Before I launch into the subject, Sir Edward, may I seek your guidance on a question of procedure? I want to make some points that I wish to bring to the Minister’s attention. They relate to the amendments, but more precisely to the clause. Shall I make those points in the stand part debate or now?
No—if they relate to the amendments, make the points now. If the right hon. Gentleman speaks out of order I will call him to order.
Thank you, Sir Edward.
As the Minister knows, two specific issues in the Bill concern me and led me to seek to be part of the Committee. One relates to the question of the strict liability of insurers when the vehicle is operating automatically, which of course relates to the software and its safety—the subject of this group of amendments. I have suggested to the Minister two possible approaches to resolving that problem, which was exposed in our evidence sessions. One of those relates to clause 1(1) and would probably require a somewhat different amendment from those that have been tabled, albeit broadly of the same kind. Let me first explain the problem and then try to suggest the solution.
We established clearly from the insurance industry representatives we questioned that, as the Bill is currently drafted, strict liability will attach to the car rather than to an individual, which is an entirely new phenomenon in insurance law. Let us suppose that there is not a fundamental legal problem with strict liability attaching to the insurer of a car. I make that assumption, although I do not necessarily think that it is a safe one; that may be explored further in the other place by lawyers with much deeper acquaintance with insurance law than I claim to have.
Supposing that that is a feasible arrangement, we then face the question: at what point should that strict liability kick in? That would not be a material question if the machine was never driven by a human being but was driven only by the machine itself. As the hon. Member for Eltham pointed out, that was raised during the evidence session by the rather enterprising group that will create service operations on London’s streets out of what are, in effect, level 5 vehicles, way ahead of the schedule that other witnesses suggested would apply. Such vehicles clearly will never have a human being driving them; they will be automated objects that human beings will get into. As it is currently drafted, the Bill will therefore create a strict liability for the insurers. On the happy assumption that that will work legally, insurers will insure those vehicles, will discover whether that is a very expensive proposition, and the cost will be built into the service price. I am not worried about that from a legislative point of view.
However, I think that the Minister would agree, as all our witnesses seemed to, that it is extremely likely that, in parallel with that rapid roll-out of highly automated level 5 items, for perhaps many millions of motorists there will be a gradual progression—not necessarily strictly demarcated as level 3, level 4 and so on—from vehicles that are largely driven by a driver but somewhat assisted by the machine, to vehicles that are driven by the machine under more and more circumstances but are sometimes driven by the driver.
I certainly do not think that we should legislate on the assumption that we know what the future will look like, but it is highly likely that there will be a stage at which there are vehicles that, for example, are well designed to operate on motorways on an automated basis. The nation may benefit hugely from them operating in that way, because it is safer and allows much shorter distances between vehicles and therefore much more intensive use of motorways, which diminishes capital investment in the motorway system, improves safety and prevents the environmental damage that building more motorways would occasion, so that may well in fact become compulsory at some point. However, those very same vehicles may be ill-designed to deal with country roads, city roads or other kinds of road, so they may well have a function that enables them to be switched back and forth between automated driving and being driven by the driver.
We heard rather different things from witnesses about that switchover. To tell the truth, I think that that is because nobody really knows how it is going to operate. The history of technology is littered with prophecies from experts about how future technologies will operate that have proved to be false, so the Committee would be wise to assume that we do not know, and will not know when legislating, how exactly the switchover between driver and automated vehicle will occur.
Mr Wong suggested in an evidence session that the vehicle itself will offer up to the driver the opportunity to switch over to automation in circumstances in which the vehicle is sufficiently intelligent to know that it is safe for it to take over the driving, and that it will never otherwise offer up that opportunity. It is perfectly sensible that if the vehicle offers itself to the driver to take over operation, and if the driver allows it to take over operation, the vehicle becomes the driver, and the strict liability of the insurer attaches to the vehicle and not any longer to the person. That would be fine.
However, if, as some other witnesses seemed to think was the case, it is the driver who will, at least in some circumstances, make the decision of whether to switch over to automated use, this becomes a highly material question: has the driver made that decision in a reasonable and sensible fashion? The reason is that if the driver has not made the decision in a sensible and reasonable fashion, and if the insurer of the vehicle is nevertheless bound to have strict liability for the vehicle taking over the action, insurers could be faced with enormous bills in circumstances in which what they were actually doing was facing a bad decision by a person whom they had never insured; they had insured the vehicle and not the person. That is the problem we need to address, which brings me to the question of clause 1(1).
I am delighted that my right hon. Friend has looked into these matters with typical assiduity. I am also delighted to serve under your chairmanship, Sir Edward. As I have risen for the first time, may I briefly say that I know your sagacity in the Chair will match the warmth of your friendship and the generosity of your home, which you offered me just this week at a dinner party. Anyway, let us leave that to one side.
I like dancing on the head of pins—I think it is an appealing thing to do—but we must be careful to avoid it in this Committee, because time does not permit it, many hon. Members want to contribute and there is a slight risk from doing so in this case. I will make this argument as quickly as I can. The key issue about an event that took place while the vehicle was in autonomous mode is not the point at which it went into autonomous mode, but the events at the point at which the incident occurred. If we can be very clear that the vehicle was being driven autonomously at the time of an incident or accident, that becomes the salient issue, rather than what might have happened five minutes or half an hour before, when the driver switched it to autonomous mode, because of course the circumstances of its being autonomous will then become absolutely clear, and at that point the liability is not in question.
I take the point that whether the vehicle should have been in autonomous mode may be material and I shall explore that more when I respond to the debate, but I think that it is what happens at the point of the accident that is of greatest concern. I just put that to my right hon. Friend the Member for West Dorset for further consideration.
I have considered that, and I think that is the assumption. My right hon. Friend has well exposed the logic that underlies the current drafting, and that logic is, in my view, in error. Although the material moment is of course the moment of the hypothetical accident, the cause of the accident is the material question from the point of view of the operation of our insurance system. If the cause of the accident was a bad decision by the person, and the insurer of the vehicle nevertheless has strict liability, there is an illogic that will eventually undo all the good we are trying to do. The fact that it may have been five, 20 or 55 minutes before the accident that the person handed over control to the vehicle is irrelevant if the basis on which the person handed over control was wrong and the person made the wrong decision. It seems to me that the question we need to address is this: is it possible that the person should have made such a wrong decision, or have we eliminated that possibility? That is what I want to get on to, because that is where clause 1(1)(b) needs to have a (c).
Is it not highly likely that this sophisticated vehicle will prevent the driver from seeking to put the vehicle in automated mode if it is unsafe to do so? It will reject the request.
I am grateful to my right hon. Friend for asking that question because it leads me to exactly the point I want to raise in relation to clause 1(1)(a), (b) and, as I think it may need to be, (c).
I will give way, of course, in a moment.
Such a course of action is fine and would solve the problem that I have advanced, because the Minister or Secretary of State, or an expert acting on his or her behalf, would have verified in advance that the machine was capable of taking over and would take over only under safe circumstances. Before I give way to the Minister, I want to point out that that is using the law to limit the technology, and the history of the approach to that in our country’s legislation has been very bad. I will not go into all the history, but I am happy to write the Minister a memorandum about it if he wants. I once wrote an article about this. There is a very long history of Parliament trying to prejudge the technology, legislating on the assumption that it will be only that technology, mandating therefore only that technology, and discovering that there is not any of it and that people elsewhere are manufacturing things that we do not get because they do not fit our legal system. It is not the route I recommend, and I will come back to that when we get to clause 2. It is a possible route, however, and one that the Minister should at least consider.
I will speak more about my right hon. Friend’s last point when I respond to the debate as a whole, because of course it relates closely to the shadow Minister’s point about how far we define what we do now. The Bill is an attempt to thread a course between creating sufficient certainty to establish a framework to allow further development and, on the other hand, doing exactly what my right hon. Friend has mentioned in trying to predict a future that may not come to pass. He is right to raise that and I will deal with it in greater detail.
On the specifics of his point about liability, I draw his attention to clause 3(2), which we will debate later. You will not let me debate it now for that reason, Sir Edward, but clause 3(2) specifically talks about the subject that my right hon. Friend describes, because it draws attention to the possibility of an accident being
“wholly due to the person’s negligence in allowing the vehicle to begin driving itself when it was not appropriate to do so.”
That is very much what my right hon. Friend speaks about, and it is why we put it in the Bill. He makes a separate point—a good one—about technology that kicks in of its own accord because the technology, the software, determines that it is better at that point for the vehicle to be driven autonomously. We will explore that in greater detail as we consider the legislation. I simply draw his attention at this stage to clause 3(2).
I recognise that I am treading on your indulgence, Sir Edward, but, as the Minister has mentioned clause 3(2), I will briefly point out, although no doubt we will discuss this later, why I do not think that it solves the problem. It is possible that it is susceptible to redrafting so that it will, but it is ill drafted if the intention is to solve the problem I have raised. In the first place, it says, “wholly”, in that it is
“wholly due to the person’s negligence”.
That is an almost impossible thing to establish. As currently drafted, it does almost no heavy lifting at all. I think I know why a parliamentary draftsman has nevertheless inserted the word “wholly”, because, like the Minister, I have had quite a long experience of dealing with parliamentary draftsmen on numerous Bills. I know that they think through carefully the question of what happens if we do not put in a word such as “wholly” under these circumstances.
Order. The right hon. Gentleman is gradually wandering from the strict road that relates to the amendment. He can always come back on clause stand part, and I have allowed him a lot of indulgence so far. I know he will return to the amendments.
I am grateful, Chair. I will leave it at that so far as clause 3(2) is concerned, but I will no doubt come back to it.
Finally, if it were the intention of the Minister to add to clause 1(1), rather than to do something to clause 2 or clause 3, which we will come to later, it would be important to establish whether the view taken by Mr Wong—that these machines will always be designed in such a way that they decide on a safe basis whether to take over—is a consensus view across the industry in every country or a happenstance view of some particular technologist.
Again, the right hon. Gentleman is touching on the area of ethics—it is covered in the excellent document written by the German Transport Ministry—which is about freedom of choice and the question of whether the individual driving the car should succumb to the superior knowledge of the software that has been put in the vehicle and have control of the vehicle taken away from them in certain circumstances. We have not discussed that issue, but it could arise as a consequence of the Bill. That is why I suggest we look carefully at the software. There is a major question about the freedom of choice of an individual driving their car if we allow the technology to take decisions away from the driver.
Yes, I agree with the hon. Gentleman. Sharing his anorak tendencies, I too have been interested in the German case. In fact, I spent some while talking to German officials and motor manufacturers about the issue. Actually, I think there is a serious problem—this is the final point I want to raise—with clause 1(1)(b), which relates specifically to the questions of ethics that he raised. I want to draw the Minister’s attention to the word in clause 1(1)(b), “safely”. [Interruption.]
Undoubtedly so—it is No. 10 calling the Minister to higher things, yet they may not be of such great significance to our future as the Bill.
In clause 1(1)(b), the Secretary of State is asked to opine on whether the vehicle that is being approved and put on the list is capable of “safely driving”. An awful lot will hang on that word “safely” in what will probably be a rich jurisprudence over many decades. The hon. Member for Eltham is rightly drawing our attention to the fact that “safely” in this context could mean something technical—is the machine technically sophisticated enough to deal with the circumstances it encounters?—or it could mean something much deeper. It could mean the ethics and applied intelligence built into the machine, so that it produces views or choices that accord with the social preferences of Parliament about who is to be sacrificed, in trying to minimise the effect of an accident, under circumstances where two different groups of persons could be sacrificed. Alternatively, it could mean any other set of very complicated ethical choices.
I of course bow to the Department’s legal advisers, parliamentary counsel and any external counsel, but my own hunch is that there is not enough jurisprudence available to guide us on whether “safely” will bear that amount of weight. I wonder whether the Minister should consider at least giving the Secretary of State the duty in due course to consider not just whether the machinery is capable of driving “safely”, but whether it is capable of driving—I do not know quite what words parliamentary counsel would want to choose—ethically or properly or in a socially desirable way. That is an odd kind of question to ask about a machine, I grant, but these are odd machines we are considering.
The hon. Member for Eltham is on to a good thing with amendment 8, even if he does not press it to a vote, because he raises an issue we will have to address. What we all do not want to get to—I think the Committee is united in this—is a sort of red flag situation where machines have been authorised because they have a large amount of technological wizardry in them that makes them highly sophisticated, but they make choices that any sane Parliament or Government, or indeed public, would regard as wholly morally objectionable, socially undesirable or both.
We need to think very hard about ensuring that the legislation at least lets our successors—whoever may be Secretary of State at the time—consider that range of issues when approving something. Otherwise, the Secretary of State will say, “Oh well, this is technically okay, but I don’t like the look of what it is going to do by way of the kinds of decisions it is going to make,” and some adviser will tell that Secretary of State, “Sorry, Secretary of State, it is ultra vires for you to refuse this vehicle on the list just because it is going to mow down young people in preference to old people”—or something—“because you are only allowed to determine safety, not ethics.” It is quite important that we get that precise wording right. I am grateful to you for your tolerance, Sir Edward.
I want to pick up the points made by the right hon. Gentleman. I was trying to think of parallels to understand this and imagine what it might be like five or 10 years from now, and I likened it to the introduction of, say, cruise control and how that works with the insurance industry. If a driver engages cruise control in an urban area and sets it at a speed in excess of the limit on that roadway, where would the responsibility and liability fall? The industry and its technologies are improving at pace. As was said in the Chamber on Second Reading, it is difficult to imagine where we will be, but I imagine that essentially the liability should be with the driver. If the driver has engaged the cruise control or automated driving system—in whatever form that may take—that is their choice, just as it is their choice to manoeuvre from one lane to another today, which might ultimately result in an accident.
Perhaps I am not appreciating the fine nuance of the debate, but I would have assumed that, ultimately, the liability has to be with the driver. In the event of an accident, the telematics would be able to provide data to the insurance industry to prove things one way or another.
My right hon. Friend mentions the core requirement of safety. What does he understand “safety” or “safely” to mean in this context, and what advice has he received about whether it can bear the burden of distinguishing between an ethically proper set of choices by artificial intelligence and an ethically improper set of choices?
That is a very big question indeed. It is the one that, in a sense, was first raised by the hon. Member for Eltham in the evidence session and on Second Reading, when he painted the picture of a scenario where a human being faces an ethical dilemma while driving. I will paraphrase the example for the sake of brevity: a child runs into the road and the driver has the choice of hitting the child or swerving and possibly causing a more catastrophic accident. That is a momentary judgment that any driver makes. In the end, it is a practical and ethical judgment, is it not? We could have a very long debate. My hon. Friend on my right, the Whip, may be my former Parliamentary Private Secretary, but he will not be entirely indulgent of me were I to engage in that very long debate, because of course one could extend it—
Let me try to answer the hon. Gentleman and my right hon. Friend the Member for West Dorset in two ways. First, I draw attention to something that Mr Wong said in evidence on Tuesday:
“May I point something out? I mentioned autonomous emergency braking. It has been demonstrated that the technology is improving all the time. Previously, autonomous emergency braking worked perfectly at 30 mph, which is urban speed, but it is becoming increasingly sophisticated. AEB can work well even at 50 mph. It would not surprise me if the technology improved in years to come”.––[Official Report, Automated and Electric Vehicles Public Bill Committee, 31 October 2017; c. 44, Q103.]
The technology is improving so rapidly and dramatically that in the scenario painted by the hon. Member for Eltham, an automated vehicle is likely to change lanes and—as in Mr Wong’s example—brake to ensure safety.
The representatives of the insurance industry stated in their evidence that the industry believes there will be fewer accidents, because the judgment of an autonomous vehicle will outpace that of a human being. I use the word “judgment” for technology with caution, as my right hon. Friend the Member for West Dorset used the word “ethics” with caution, but the judgment of the software driving the automated vehicle will be more acute and, in the end, safer. These machines are likely to be less prone to error than human beings, so there will be fewer accidents; the vehicles will be safer and therefore easier and cheaper to insure. We heard that point repeatedly in the evidence session. We can be confident that that is the direction of travel—I apologise for using that rather hackneyed phrase in this context—but we cannot be sure how quickly we will get there or exactly what it will look like. I would be a very bold man if I made such a prediction.
I, too, listened to Mr Wong and have re-read the part of his evidence that the Minister quotes from, but it is wholly irrelevant to our point. I thought it was extremely instructive that Mr Wong, who is clearly a very great technical expert, completely failed to understand the issue. The Germans have begun to understand it, but the Bill does not genuinely or seriously address it.
The Bill is drafted as if artificial intelligence were the same kind of thing as speed control. It is not, and that is a very important error underlying the Bill’s drafting. Speed control is a technical matter, and we could go much further with technical development and still be in the technical arena in which safety is the only question, because the ethical judgments are made exclusively by the human drivers. With artificial intelligence, as the hon. Member for Eltham rightly says, we are moving into a terrain in which the machine will make the kind of decisions that Parliaments and human beings make. These are questions not of safety, but of judgment about the right outcome under difficult circumstances.
I ask the Minister to go back to his Department and talk to its lawyers about whether jurisprudence will deliver to him or his successors the ability to refuse approval to a piece of artificial intelligence that, either directly or through its learning processes, will or could produce totally dysfunctional, anti-utilitarian results by making judgments that are technically perfectly safe but that just happen to take the view that, for example, wiping out a group of three-year-old schoolchildren is better than wiping out a 98-year-old crossing the road. That is a very difficult judgment for a human being to make, but it is the kind of judgment that Parliaments have to make. It seems very clear to me that, as drafted, the Bill would not permit a Secretary of State to refuse type approval for a machine designed in such a way that it could produce those very bizarre and undesirable results, and I am sure that that is not what the Department or the Minister wants to achieve.
Let us not overestimate how far this Bill—I am being very particular about my words—intends to go. This Bill is about ensuring that victims of collisions caused by autonomous vehicles get quick, easy access to insurance compensation in line with conventional processes. What we heard in the evidence and what we debated when the Bill was in its earlier incarnation was that it was important for the insurance industry, and therefore for the further development of this technology, that we were clear about that—there would be no difference, from the perspective of the person who owned the vehicle, in how they went about making a claim.
There is a much bigger debate, which will clearly have to be dealt with in legislation, in regulations, in type approval—in a whole range of other things—about some of the other matters that the hon. Member for Eltham and my right hon. Friend the Member for West Dorset have raised. If they are both right that we will get to a point at which the machine makes what is in effect an ethical judgment—I am trying to use words very carefully; it is very obviously the machine making ethical judgments, but I do appreciate the strangeness of it—clearly that will have to be taken into account at a future point in the legislative process. I do not think this Bill is the place to do it; I just do not think it can do it, because we do not yet know enough.
We are back to my first point, about the line we are trying to tread between what we can do now with certainty and what we might do in the future in a world in which we can as yet only imagine what might occur. If my right hon. Friend will permit me to say so, perhaps the Hegelian synthesis, where we might meet between what appears to be my thesis and his antithesis, is that this Bill is a starting point—a first step along, as I have said, a long road.
I am very grateful to my right hon. Friend for giving way. I entirely accept that this Bill is just the starting point, but I think he is missing the point that I am trying to make about what starting with this language—with just the word “safely” and no reference to wider considerations—will do to his successors.
There is no point in having the Secretary of State empowered to make a list unless Secretaries of State are actually going to make lists. There is no point in empowering them to make lists of automated vehicles unless those lists are going to relate to automated vehicles. Those automated vehicles will have artificial intelligence built into them; they cannot be automated otherwise. Therefore, the Secretary of State, who is making the list in the first place, which this Bill provides for—not some other Bill, but this Bill—will be constrained by the terms that the Bill sets for what basis they can use to make the list. That is why the shadow Minister has raised questions about the criteria, and why we are having this debate in the first place. Surely, therefore, we need to empower—I am not suggesting that we in any way oblige—later Secretaries of State to consider, inter alia, whether the machines that they are putting on the list are actually murderously safe or good and safe machines. At the moment, they can decide only whether it is a safe machine. If it happens to be safe in the sense in which Stalin could “safely” eliminate large sections of his population, the poor old Secretary of State would, as I construe it—the Minister has not given us any indication that he has had advice to the contrary—be prevented from—
Order. The right hon. Gentleman is being carried away by his own verbosity. Stalin—
I am, as ever, guided by you, Sir Edward—having already cited your sagacity, I could hardly be otherwise. I am delighted that we managed to get Stalin and Hegel into the same exchange. You will not get that in many Committees, Sir Edward. I am thinking about where we might end up, but I am prepared to live with that. It is important for safety, which in the end is a baseline factor, as I think my right hon. Friend will agree. However, there is a point about ethics. The advice I have received is that no vehicles that are not considered safe and ethical will be allowed on the market, and therefore they will not be for consideration on the list.
Safe and ethical. I have received advice; I like taking advice, and not taking it. Before I make that my definitive position, I want to reflect a little. If, following that advice, we were to say no to vehicles that were not safe and ethical, I want to be absolutely clear what ethical means. We know what safe means: we can draw on existing practice in respect of type approval, and we know what measures of safety are about. But when we get to measures of ethics, we are in an altogether more challenging area. That is why I will reflect a little on the characteristics. This is an incredibly interesting debate, by the way, and very useful.
I hope we will not take as long on these two amendments as we took on the previous group, although it was a fascinating discussion. The amendments follow on from that, because they relate to the transition period and the third of the five tiers that go from driver-assisted systems to full automation. Tier 3 is where the vehicle can transition from being fully automated to being driven by the driver, and vice versa.
Various pieces of research into the issue have come to different conclusions. In the evidence sessions, we heard that Audi had carried out some research at different speeds and come to the conclusion that there should be a minimum of 10 seconds in that transition period. The Venturer research came to slightly different conclusions, but all the research points to the fact that this is a problematic area in automated vehicle technology. It can take a good deal of time for a driver to become alert. Mr Wong described to us the various alarms that alert the driver to a vehicle’s request for the driver to take back control of the car; if those alarms do not alert the driver, the vehicle will then slowly come to a halt. I am sure that we can all imagine the sort of disruption that could be caused if that happened on a motorway. He even described how the car prepared for an accident by tightening the driver’s seat belt just before the vehicle came to a halt, in case the driver had passed out or was so fast asleep that the alarms did not wake them. There are various scenarios involving the transition that cause alarm.
Mr Gooding of the RAC Foundation felt that we should not even entertain tier 3 because it is unsafe and does not make any sense, and because the legislation is about moving straight to tiers 4 and 5. Clearly, if people giving us evidence are saying that, I suggest to the Minister that it should cause the Government some alarm, and that perhaps we should be legislating to say that we do not want to allow this on our roads. There are issues being raised about the clear dangers of tier 3 transition.
I, too, note what was said about tier 3, but I hope that the hon. Gentleman is not underplaying his own point. What he referred to in the transition phase also applies to tier 4. It is only at tier 5 that it disappears.
My understanding of tier 4, as Mr Wong said in his evidence, is that it is only at tier 4 that the human is removed from the equation; I think that those were his exact words. I must admit that that seems to be a contradiction. Tier 5, as I understand it, is a fully automated vehicle with no steering wheel, totally under the control of technology. One wonders what tier 4 is. If tier 3 is the transition between human and vehicle and tier 5 is a fully automated vehicle with no steering wheel whatever, what is tier 4? Is it a lesser tier 5 or a greater tier 3? I will give way to the Minister, who is going to enlighten us.
That would be helpful. I have looked at it, but as has been demonstrated in our exchanges, the difference between tier 5 and tier 4 is not entirely clear. From the descriptions of the people who gave evidence to us, in tier 4, the human is removed entirely from the equation.
We need to consider this issue. The evidence that I read said that the Venturer experiment at the Bristol testing centre discovered that drivers, when they first took over, tended to be over-cautious and drive at slower speeds, which could increase congestion. There was also the potential for danger in vehicles suddenly slowing down, and Mr Gooding said in his answers to our questions that he felt that that issue was more important than congestion.
There are some important considerations raised by the issue of transition, particularly in tier 3. We asked witnesses, “When will the vehicle decide whether it is safe for the vehicle to drive or whether the vehicle should be handed back to the human driver?” They said that it depended on road conditions. That suggests that it will happen in the same locations on our roads: for instance, as vehicles leave motorways and enter more built-up areas, where there are more potential hazards and dangers for vehicles, it is likely that the vehicles will transition back to being driven by the driver. If that will happen regularly in the same location, it could create accident black spots. We could create a considerable new hazard on our roads.
We eagerly await the Minister’s note, but due to the wonders of modern technology, one can look it up on the web. Level 4 is clearly described as fully autonomous and
“designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip.”
However,
“it’s important to note that this is limited to the ‘operational design domain’ of the vehicle—meaning it does not cover every driving scenario.”
I hope that the hon. Gentleman will agree that the transition question arises in relation to level 4 when vehicles move from one driving scenario to another.
I accept that entirely and agree. It comes back to my point that it is likely to happen regularly in similar locations, and that patterns of behaviour will occur in particular spots where transition occurs because the technology requires it. We need to be aware of that. The testing is telling us that that is happening, but we are not taking it into consideration in the Bill, as we should.
I suggest to the Minister that we need to take that away and consider it. Safety must be the aspect most prevalent in our minds. There is also the moral or ethical issue of driver autonomy: will the driver be in charge of the vehicle, or will the technology be in charge of the driver? In the debate on previous amendments, he said that the technology is superior; he did not use that word, but he said that it is safer than a human in the event of an accident, even suggesting that a vehicle would make better or quicker choices than a human. That points us down a road, if Members will pardon the pun, of having roads operated in the way that our railways or underground service are controlled. Why not have fully automated vehicles of which drivers do not have control at all?