Automated and Electric Vehicles Bill (Second sitting) Debate
Clive Efford (Labour, Eltham and Chislehurst)
Department for Transport
Public Bill Committees

Q
David Wong: We would suggest a nationally co-ordinated approach.
Brian Madderson: I speak for 75% of the motorway service areas, and the one thing that they are really against is any form of mandating, because they want the market to be able to choose what is the best form of charging for them at the time. This is in a great state of flux. Some of them have already entered into agreements that are more binding than perhaps they would have wished, with the knowledge that they have, just 12 months on. The mandating process seems to be all stick and no carrot. These motorway service areas fully recognise the need and, in fact, many now have both Tesla charging and other forms of charging, so they are working towards that, but they think mandating is not appropriate in this case.
One of the other issues the motorway service areas have is that there does not seem to be joined-up government, which I think David was probably referring to. There are planning difficulties in getting car park extensions to put in extra parking bays for Tesla charging, for example. One of the things the Government should perhaps be mandating is not where the charging points go, but that where there are planning applications for charging points, local authorities must deal with them quickly, efficiently and sympathetically.
Q
Steve Gooding: From a consumer perspective, I would have to say that we do not really know yet, but there is a broad spectrum of what might happen next. For example, there is a clear incentive for a fleet operator who is counting every penny to be thinking, “How could I reduce my costs of operation?” Whether that is a fleet of vans or trucks, the operator would be looking at automation as a way of, first, saving money, and secondly, sweating the asset of that truck for longer hours. In turn, we are seeing a huge amount of investment in the auto sector in vehicles for the private market.
If I were to bet my money, I would say that the guys who are counting every penny—people running fleets and large numbers of vehicles—will probably be the first in, but some people are clearly very attracted to the thought of having driverless capability. That could be for use from time to time, or it could mean freedom and independence for people who are currently denied it by the fact that they cannot drive; we have just been engaged in a report on what automation means for people with disabilities.
Q
Steve Gooding: I think David would say we are four years off. Personally I think it is probably nearer 10.
Steve Nash: Ten.
Q
David Wong: Correct. In the first instance, when I referred to 2020-21, I was referring to level 4—vehicles that will still have a steering wheel. That means under the right conditions, in the right use cases—for example, from junction to junction on a motorway—someone could let the system drive the vehicle, but could take back control outside that use case. If level 5, which is without a steering wheel, is not going to be as far off as 10 years, it is likely to be deployed in the first instance for first and last-mile journeys, perhaps even in pedestrianised areas—on pavements—as we have seen with some of the trials in Greenwich, as well as in Milton Keynes. As to when those level 5 vehicles without steering wheels are capable of performing end-to-end journeys—from my house in the village to my office in the city—that is anybody’s guess. That will probably be some time in the 2030s. It is quite complex.
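As a rough illustration of the level 4 versus level 5 distinction Mr Wong draws, the sketch below gates automated driving on an operational design domain such as junction-to-junction motorway driving. The class names and conditions are assumptions for illustration, not drawn from the evidence.

```python
# Hypothetical sketch: how a level 4 system might restrict self-driving to
# its operational design domain (ODD), per the "junction to junction on a
# motorway" use case described in the evidence.
from dataclasses import dataclass
from enum import IntEnum

class SAELevel(IntEnum):
    LEVEL_4 = 4  # self-driving within a defined ODD; driver can take back control
    LEVEL_5 = 5  # self-driving everywhere; no steering wheel needed

@dataclass
class DrivingContext:
    on_motorway: bool
    between_mapped_junctions: bool

def automation_available(level: SAELevel, ctx: DrivingContext) -> bool:
    """Return True if the system may drive itself in this context."""
    if level == SAELevel.LEVEL_5:
        return True  # no ODD restriction
    # Level 4: only inside the ODD, e.g. junction-to-junction motorway driving.
    return ctx.on_motorway and ctx.between_mapped_junctions

# Motorway cruise qualifies; a village-to-city door-to-door trip does not.
print(automation_available(SAELevel.LEVEL_4, DrivingContext(True, True)))    # True
print(automation_available(SAELevel.LEVEL_4, DrivingContext(False, False)))  # False
```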
Q
David Wong: I suppose you could—
Q
David Wong: Yes. In principle, one would not argue that a computer is less safe than a human being. Obviously, a human being's capability to perceive and to perform the driving of a car is limited, and depends on the human being's condition and the road conditions, as well as the environment in which the human being has been conditioned to perform the dynamic driving task. Lots of evidence has been published: the figures range from 90% to 97%, and taking an average figure, some 94% of all serious road accidents involving fatalities are caused by the human being. I mean that in the sense that it is not mechanical fault, lack of road markings or slippery roads, but the human being that caused the accident, perhaps by being inattentive or sometimes even by doing things that they are not supposed to do.
But even the slow-moving vehicle in Greenwich hit a plastic chair when it was put in front of it, did it not? We are going to see accidents during a journey where the vehicle is being driven by software. Those accidents are going to happen. The periods when a vehicle is not driven by a human are going to increase, so we are likely to see an increase in the number of accidents that are not human error. Is that right?
David Wong: We think that overall the number of accidents will fall, but if anything can be learned from one of the trailblazers of the self-driving car experiments and trials—Google—it is that the earliest accidents that they encountered a number of years ago when the car was being trialled were the result of the cars being rear-ended by manually driven vehicles. The learning from that was that Google had to tweak the algorithms to ensure that the self-driving vehicle—the computer—behaved a little bit more like the human being. They succeeded in doing that, and today you do not get so many of the rear-ending accidents.
Steve Nash: It is also important to say that these vehicles will be connected. When one experiences something, the knowledge is passed to all of them, which does not happen today.
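A minimal sketch of the fleet learning Mr Nash alludes to, using an invented API rather than any real manufacturer's system: one car's experience is folded into shared knowledge and broadcast to every other car.

```python
# Simplified fleet-learning sketch (all names invented for illustration).
class Fleet:
    """A central service that redistributes what any one car learns."""
    def __init__(self):
        self.shared_knowledge = set()
        self.vehicles = []

    def report(self, lesson):
        # One car's experience becomes every car's knowledge.
        self.shared_knowledge.add(lesson)
        for vehicle in self.vehicles:
            vehicle.knowledge = set(self.shared_knowledge)

class Vehicle:
    def __init__(self, fleet):
        self.fleet = fleet
        self.knowledge = set()
        fleet.vehicles.append(self)

    def experience(self, lesson):
        self.fleet.report(lesson)  # unlike a human driver, the lesson is shared

fleet = Fleet()
car_a, car_b = Vehicle(fleet), Vehicle(fleet)
car_a.experience("a plastic chair can appear in the carriageway")
print("a plastic chair can appear in the carriageway" in car_b.knowledge)  # True
```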
Q
David Wong: This is the classic trolley problem question that we get asked at almost every single conference we attend—
Q
David Wong: Not at this point, but at some point certainly. First, if you take a cue from the ethics commission report that was published in Germany just a few months ago, it suggested that in any case, human life should always be prioritised. If it is a decision between a human and non-human, obviously the human life would have to be prioritised. That is No. 1. Secondly, we should not expect the car to do anything massively different from how a human being would behave. The car should perform a minimal risk manoeuvre to stop and brake in such a way that the impact will be minimal. To expect the car to make an ethical decision to kill A or B is probably not the right approach. I would suggest that none of us has the divine power to decide who to kill. At the end of the day, someone who writes the algorithm will have to decide. If you insist that the car must decide, it is incumbent on the engineers to programme that into the algorithm.
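The minimal risk manoeuvre Mr Wong describes can be sketched as follows; the function and figures are illustrative assumptions, not drawn from the evidence. The point is that the planner contains no victim-selection logic, only hard braking to shed speed before any impact.

```python
# Schematic sketch of a minimal risk manoeuvre: brake hard, reduce the
# energy of any impact, and stop if there is room to do so.
def minimal_risk_manoeuvre(speed_mps, decel_mps2=9.0, time_to_impact_s=1.0):
    """Return the (lower) speed at the moment of impact after full braking."""
    # No ethical "choose whom to hit" branch by design.
    return max(0.0, speed_mps - decel_mps2 * time_to_impact_s)

print(minimal_risk_manoeuvre(13.4))  # ~4.4 m/s left of an initial 30 mph
```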
Q
David Wong: There would be a minimal risk manoeuvre, depending on the situation. There may be evasive action in such a way that it would be the safest possible option. If it needs to stop, it will brake and stop. May I point something out? I mentioned autonomous emergency braking. It has been demonstrated that the technology is improving all the time. Previously, autonomous emergency braking worked perfectly at 30 mph, which is urban speed, but it is becoming increasingly sophisticated, and AEB can now work well even at 50 mph. It would not surprise me if the technology improved in years to come to the stage where autonomous emergency braking could kick in at motorway speeds of 70 mph to prevent an accident or lessen its impact.
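Why AEB is harder at motorway speed follows from the physics: braking distance grows with the square of speed, d = v²/(2a). A back-of-envelope sketch, assuming full braking of about 9 m/s² (close to 1 g); the deceleration figure is my assumption, not the witness's.

```python
# Braking distance at the three speeds mentioned in the evidence.
MPH_TO_MS = 0.44704  # miles per hour -> metres per second
DECEL = 9.0          # assumed full braking, m/s^2

def braking_distance_m(speed_mph):
    v = speed_mph * MPH_TO_MS
    return v * v / (2 * DECEL)

for mph in (30, 50, 70):
    print(f"{mph} mph -> about {braking_distance_m(mph):.0f} m to stop")
# 30 mph -> about 10 m; 50 mph -> about 28 m; 70 mph -> about 54 m,
# so at 70 mph the sensors must detect a hazard roughly five times earlier.
```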
I have a growing list of people who want to ask questions, and I want to try to get everyone in. We want brisk questions and brisk answers. It is not necessary for every witness to answer every question.
You certainly look good for having done 50,000 miles under the wheels.
Quentin Willson: Absolutely!
Q
Quentin Willson: Enormously complicated. It is not my area of expertise, but the question I would ask is: can they co-exist peacefully? Can the connected and the unconnected co-exist in the UK's very limited road space? Can cars that drive themselves be allowed to co-exist with cars that are driven by human beings? Will there necessarily be some friction during that period? I think that in the short to medium term, it is going to take some time.
Q
Quentin Willson: I think we need to be very careful that we know exactly who is liable, whether it is the manufacturer, the driver, the network provider or the road provider, because there will be quite a few accidents. That has to be established very early on.
Q
Quentin Willson: Inevitably you will get a feeling of complacency, of reliance on the technology, and if there is an emergency situation, or you move from the automated road system to the non-automated road system, you will need that moment of what we call extreme alertness. Consumers need to be trained for that, and we need to be ready. If there is a legal transitional moment where you take the wheel having been driven autonomously, that could be an issue as well.
Q
Quentin Willson: I do not think that artificial intelligence will ever be trained to make those moral decisions, and when we take a driving test we are not trained to make them either, so it is difficult to think that we can resolve this area. Can we ever expect artificial intelligence in an automated car to make that split-second moral decision between the child in the pushchair and the old people in the Nissan Micra? I do not think we can. We are not trained to do that, and we cannot. It is a split-second thing that happens, and legislating for it would be enormously difficult.
Q
Quentin Willson: I am not an expert on artificial intelligence in cars at the moment, but it will be, depending on the sensors, the object that has the least resistance.
Q
Quentin Willson: It is driven, I guess, by the fact that there is a huge world of opportunity here. That is predicated on the fact that people do not like driving any more—there is congestion, it is expensive and it is difficult—and on the rental economy, whereby you summon an automated car on your smartphone and it comes to your door. When you look at the research, that is very attractive to the public. The golden era of getting pleasure from driving cars has gone, and I say that with some regret, but it is a fact. There was a survey by Catapult in Milton Keynes which asked: if you were to replace your current car with an autonomous car—we are not going to tell you what it is or what it looks like—would you be prepared to change to that autonomous car? Some 58% said that they would change to the autonomous car without knowing what it was, simply because of the liberation of not having to make those decisions and sit impotently in snarling traffic. It is partly driven by commerce and partly by the public.
Q
Quentin Willson: I sat before this Committee a year ago and was broadly optimistic about the short and medium-term future of electric cars. I think Michael Gove's announcement in July, coupled with Sadiq Khan's T-charge zones and ClientEarth's relentless pushing on air quality issues, has terrified consumers. It has wiped probably £30 billion off the value of diesel cars. Lease companies are now looking at a collapse in the residual values of the cars that they lease to consumers on personal contract purchase. We are looking at a real issue in the short to medium term.
Consumers now feel that they cannot buy a diesel car; we have seen sales of diesel cars absolutely collapse over the last quarter. They are thinking, “Right, I’ve got to buy an electric car.” We need to manage their expectations. I am quite concerned that people who rely on one car as the family vehicle will go out and buy, like me, a second-hand Nissan Leaf for £10,000. That is great, but we must understand that those cars’ ranges are nowhere near viable for an everyday, use-it-all-the-time car. They are a wonderful urban solution, but long journeys—anything more than 100 miles—are really difficult. I came down here in an alternative car; I had to leave my Nissan Leaf at home, because getting here would have required three stops to charge.
It is about managing consumer expectations; otherwise, this whole thing will go horribly wrong. The new Nissan Leaf, which I saw at its launch in Oslo last week, has a quoted official figure of 235 miles to one charge, but the Nissan engineers tell me that in reality it is 175 miles for everyday driving. If you drive that car on the motorway at 70 mph, that will fall to about 130 or 140 miles. Lithium-ion battery technology still needs some considerable work.
Again, the mass adoption of electrification in the short to medium term is all predicated on better battery density, maybe from alternative materials such as graphene, and on a very robust charging infrastructure network. I am not talking about on-street chargers; I am talking about charging hubs like petrol stations, with 20 rapid chargers that can charge 20 cars in 40 minutes. That is the only way that mainstream consumers will be able to do any form of distance. Electric cars are wonderful for town work, but if you are doing more than 100 miles, you are still compromised.
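The arithmetic behind this evidence can be sketched as follows. The 60-mile and 220-mile figures below are illustrative assumptions, not the witness's own; the hub figures are the ones given in the evidence.

```python
# Charge stops needed for a journey, plus charging-hub throughput.
import math

def charge_stops(journey_miles, usable_range_miles, start_range_miles=None):
    """Stops needed if the car starts full and each stop restores full usable range."""
    start = usable_range_miles if start_range_miles is None else start_range_miles
    remaining = journey_miles - start
    return max(0, math.ceil(remaining / usable_range_miles))

# An older Leaf with perhaps 60 usable motorway miles on a ~220-mile trip
# needs three stops, consistent with the "three stops to charge" above.
print(charge_stops(220, 60))   # 3
# The new Leaf's ~135 motorway miles would still need one stop on that trip.
print(charge_stops(220, 135))  # 1

# Hub throughput: 20 rapid chargers, each turning a car around in 40 minutes.
chargers, minutes_per_car = 20, 40
print(chargers * 60 / minutes_per_car)  # 30.0 cars served per hour
```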
Q
Stan Boland: Safety is the start and finish of whether we can bring these cars on to the streets. A huge amount of attention will be focused on making these vehicles safe, in our case, for use in urban environments, where we will have all sorts of obstacles and agents with all sorts of different behaviours. That really centres on having systems that are able to perceive what is in the scene accurately in 360° and three dimensions and classify what those objects are.
This also speaks to predicting what will happen next. We have to predict human behaviour, and we have to learn what those behaviours might be ahead of time. Our vehicles will certainly have to be state of the art for perception, but they will also have to be very good at predicting human behaviours. Where we identify an object and can tell, just as a human can, that this person, cyclist or whatever it turns out to be has a certain type of behaviour, we will have learnt those behaviours ahead of time; if we are not sure, we will have to propagate that uncertainty through our software and slow down.
The behaviour of these vehicles will be slightly different to that of human drivers, but it will be possible to attain the levels of human safety, and in the long term surpass them, by applying technology. Our systems can pay attention in 360° all the time, and that makes it a bit different to human drivers.
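A minimal sketch of the "propagate uncertainty and slow down" behaviour Mr Boland describes; the names and the speed rule are assumptions for illustration, not the witness's actual software.

```python
# Scale target speed down when the classifier is unsure what it is seeing.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "cyclist", "pedestrian", "unknown"
    confidence: float  # classifier confidence in [0, 1]

def target_speed_mps(detections, cruise=13.0):
    """Scale the cruise speed by the least-confident nearby detection."""
    if not detections:
        return cruise
    worst = min(d.confidence for d in detections)
    # Low confidence -> proportionally lower speed; never below a crawl.
    return max(2.0, cruise * worst)

scene = [Detection("cyclist", 0.95), Detection("unknown", 0.40)]
print(target_speed_mps(scene))  # about 5.2 m/s: slow until the scene is resolved
```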
Q
Stan Boland: We are kind of hoping that we can operate at normal driving speeds. To be able to do that, it is important that we can predict behaviours. We cannot have a system that is collision-avoidance only, because that would result in frozen robots all over the city and would make congestion worse. What we humans do is anticipate human action. We actually run more than one world in our heads, and we are constantly looking to see whether that world is turning into reality or some other world is going to happen. That allows us to merge on to full lanes of traffic, for instance. The idea is that we are operating on normal streets with normal road signs at normal road speeds, attaining and exceeding human levels of safety.
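The "more than one world" idea can be sketched as simple hypothesis reweighting. Everything below (the hypotheses, likelihoods and threshold) is assumed for illustration.

```python
# Keep several hypotheses about what another driver will do, reweight them
# as observations arrive, and act only when the safe world dominates.
hypotheses = {"yields_to_us": 0.5, "holds_speed": 0.3, "accelerates": 0.2}

def update(hyps, likelihood):
    """Bayes-style reweighting as observations arrive."""
    posterior = {h: p * likelihood[h] for h, p in hyps.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Observation: the other car is not slowing, so "yields" becomes less likely.
hypotheses = update(hypotheses, {"yields_to_us": 0.1, "holds_speed": 0.8, "accelerates": 0.6})
safe_to_merge = hypotheses["yields_to_us"] > 0.7  # only merge if the safe world dominates
print(hypotheses, safe_to_merge)  # "yields_to_us" falls to ~0.12, so merging is deferred
```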
Q
Stan Boland: At that point it is a trial, so there is a safety driver in the car. The safety driver is able to take control of the vehicle immediately.
Q
Stan Boland: Yes. The safety driver has to be there, literally able to take control of the car instantly.
Q
Stan Boland: You are describing what is called level 3 autonomy, which is a system where the car is under automated control and then there is a warning to give a human driver time—there is a debate about what that warning time should be—and then the human is meant to take over. We think that system is intrinsically unsafe. It is much better if either the human is in control or the system is in control—that is a fully automated, level 4 or level 5 system. We are building a system where the cognitive capability of the car is in control, but for the purpose of testing, until it is actually legal to offer that service, there will always be a driver in the car who can take over instantly.
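The level 3 versus level 4 distinction Mr Boland draws can be sketched as a control-state machine; the states and names here are mine, not the witness's.

```python
# Level 3 implies a timed "take over soon" limbo state; level 4 switches
# cleanly between human control and full system control, with no limbo.
from enum import Enum, auto

class Control(Enum):
    HUMAN = auto()
    SYSTEM = auto()
    HANDOVER_PENDING = auto()  # the level 3 limbo the witness calls unsafe

def level4_transition(current, request_automation):
    """Level 4: control is always fully human or fully system."""
    return Control.SYSTEM if request_automation else Control.HUMAN

print(level4_transition(Control.HUMAN, True))  # Control.SYSTEM, never HANDOVER_PENDING
```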
Q
Stan Boland: While we are testing it. We are talking about a period when we are testing the capability of the vehicle in our existing cities. It is level 4—a highly automated or fully autonomous system—but for the period between now and when there is a regulatory capability to do this and, moreover, to underwrite the risk of it, we have to have a driver in the car who can take over.
Q
Stan Boland: As long as there is a safety driver who can take over the car. That is not the same as somebody watching a Harry Potter movie while the car is self-driving. We are talking about a qualified driver who is paying full attention to the road scene all the time and can take over.
Q
Stan Boland: No, that would be a definition of level 5 in our parlance: something that could literally drive anywhere on the planet and be able to work out what every object was, what the semantics of every scene were, and the human behaviour in that part of the world. So we are definitely not saying that.