On March 23, 2018, Walter Huang, an Apple engineer commuting to work in his Tesla Model X, crashed into a damaged crash attenuator at the off-ramp for Highway 85 from 101 in Silicon Valley. Tesla Autopilot was on, and it steered the car into the barrier, accelerating as it approached. Huang was killed. This triggered an NTSB investigation, whose docket was recently released. A hearing on the full report is scheduled for Feb 25, where final conclusions will be published.
Wreckage after Tesla Autopilot crash in Mountain View
This crash is particularly visceral for me, since I frequently drive the same lane he was driving, and I have several times done so using Autopilot in my Tesla. It’s a piece of highway almost everybody in Silicon Valley drives, and it’s located just a few miles from Tesla headquarters in Palo Alto, so it is also driven frequently by a large fraction of Tesla engineers.
The NTSB report adds only modest new data, and what is added matches closely with expectations. Tesla Autopilot is advertised as a driver assist system which does not drive in all road situations, and which requires constant driver vigilance because it will, from time to time, make mistakes like this. There is debate as to how much Huang was paying attention, but it is very likely the report will find that he holds the bulk of the responsibility for the crash, because it was his duty to take the wheel if Autopilot failed, and he did not. Nonetheless, the particulars of how Autopilot performed, and how it might have done better, are of interest to the Tesla and robocar communities.
The report also will assign fault to Caltrans, which maintains the highways. On March 12, another car crashed into this same crash barrier, crumpling it. That driver survived. This off-ramp has an abnormally high number of crashes — it is a left exit for the left carpool lane, with a second carpool lane continuing on. Huang was in the continuing carpool lane and his car veered into the “gore,” which is the triangular region where lanes split apart at any off-ramp. Caltrans had delays in replacing the crash attenuator, which is there to absorb impacts by crumpling, and it was not replaced when the Tesla hit it. Had it been replaced, it is much more likely Huang would have survived the crash. There is even some debate about whether the Tesla might have identified an undamaged barrier.
As noted, the report will not find crash responsibility with Autopilot. It failed in exactly the way it is advertised as being able to fail. Worse, Huang was aware that Autopilot did not handle this off-ramp correctly, for he had experienced problems with it there at least twice before. This should have caused him either to not use Autopilot there or to make sure he was vigilant at that location. The report also contains speculation on whether Huang was playing a game on his iPhone prior to the crash that might have distracted him. There are also incorrect statements in the report concerning the question of whether Huang had his hands on the wheel or not.
Huang appears to have been cruising in a fairly normal way using Autopilot. The lines on the highway were fairly worn, most crucially the left lane line, which becomes the right side of the “gore.” You can see the lines already fading in this 2016 Google imagery. Apparently the lane-finding technology in Autopilot failed to detect that lane line, and probably detected the line on the left side of the gore (i.e. the right lane marker for the off-ramp) and treated it as a new direction for the left line. Tragically, it followed that left line, and once the right line of the gore became more clear, it apparently identified that as a new right line, so that the two lines of the gore, though diverging, were treated as a lane.
At the end of that “lane” is the concrete barrier of the off-ramp, which rises to a fly-over bridge. That’s where the crash attenuator is. The Tesla was heading straight for it.
To make matters worse, the car had been set to cruise at 75mph, but was not going that fast because it was following another car in that right carpool lane. (It was after carpool hours, but electric cars in California can get a sticker that lets solo drivers use the carpool lane at rush hour.) The Autopilot was set to follow at the closest distance (0.9 seconds) that Tesla allows. As soon as the car decided its lane went to the left, there was no longer a car in front of it. Freed, the cruise control started accelerating the car up to the set speed of 75mph — right towards that concrete barrier.
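The dynamic described above, where losing the lead car causes the cruise control to resume its set speed, is standard adaptive-cruise behavior. Here is a minimal sketch of that logic; the function and parameter names are mine for illustration, not anything from Tesla's actual software:

```python
def target_speed(set_speed_mph, lead_speed_mph):
    """Pick the speed an adaptive cruise controller aims for.

    With a lead car tracked in the lane, never exceed its speed;
    with no lead car, resume the driver's set speed.
    """
    if lead_speed_mph is None:
        # No vehicle detected ahead in "our" lane: free to speed up.
        return set_speed_mph
    return min(set_speed_mph, lead_speed_mph)

# Before the lane misassignment: following a slower car in the carpool lane.
target_speed(75, 62)    # held to the lead car's speed
# After Autopilot decides its "lane" veers left into the gore, the car it
# was following is no longer ahead of it, so the target jumps back up.
target_speed(75, None)  # accelerates toward the 75 mph setting
```

The danger is that "no lead car" and "lead car lost because the perceived lane is wrong" look identical to this logic, which is why a lane misassignment produced acceleration toward the barrier.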
As noted, there is normally a long metal crash attenuator in front of the concrete, and if you hit it, it crumples and lessens the impact. It was already crumpled, and could not do that job. In addition, it no longer looked to the camera like a non-crushed barrier would, which may have made it harder to identify. It would still have provided good radar returns, but for reasons outlined below, radar returns from stationary objects are often not used by ADAS systems. In any event, none of Tesla’s systems determined the car was in the gore and heading for a barrier; in fact they decided they were in a nice open lane and sped up. Though Huang had had this happen twice before and had grabbed the wheel to steer back into his lane, this time he did not, and died.
Hands on the wheel
To use Autopilot, you must regularly apply a small torque to the steering wheel to prove your hands are on it. If you go too long without doing that, you get warnings, and it eventually issues alarms and starts slowing the car. It is quite normal for Autopilot users to get the first-level alarm about not applying torque for too long. This alarm is visual, and most drivers respond by putting a bit of pressure on the wheel. Huang had Autopilot on for 19 minutes before the crash, and received two visual warnings and one audible warning, all shortly after engaging it. He had a period of about 30 seconds of no torque about 2-3 minutes before the crash. Leading up to the crash he was applying fairly regular torque, but did not in the 6 seconds prior to the crash.
It only took about 7 seconds from the time the vehicle steered left into the gore until the crash. What’s odd is that Huang torqued the wheel 6 seconds before the crash, when it was already steering into the gore.
The report repeatedly uses erroneous vocabulary in describing Tesla’s monitoring system. Tesla’s system is unable to detect “hands off” the wheel. Instead, it detects torque. When there is torque, you have hands on the wheel, but it is also common to have hands on the wheel while no torque is detected. It can only distinguish “hands on” from “not known whether hands are on or off.” The report should not state that there was a determination he had hands off the wheel. Some Teslas do have an interior-facing camera which could make this determination, but this is not currently done.
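The distinction above is easy to state precisely. A torque sensor supports only a two-valued conclusion, sketched here with invented names to make the report's error concrete:

```python
from enum import Enum

class HandsState(Enum):
    HANDS_ON = "hands on the wheel"
    UNKNOWN = "not known whether hands are on or off"
    # Note: there is deliberately no HANDS_OFF member. A torque
    # sensor cannot support that conclusion.

def classify_from_torque(torque_detected: bool) -> HandsState:
    # Torque on the wheel proves hands are on it; the absence of
    # torque proves nothing, because hands resting on the wheel
    # often apply no measurable torque.
    return HandsState.HANDS_ON if torque_detected else HandsState.UNKNOWN
```

When the report says Huang had his "hands off the wheel" in the final six seconds, the data can only actually support `UNKNOWN`.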
Forensics revealed Huang had the game “Three Kingdoms” on his iPhone and it was transmitting data to servers during his drive. There was a transmission one minute before the crash, though it is not detailed if that was a Three Kingdoms transmission. Three Kingdoms is a “both hands on” strategy game, though it could be played with one hand. The report does not provide information on how much the game transmits data when you are not interacting with it, or what else might have been transmitting data — it’s very common for phones you are not touching to still transmit data for things like received emails, navigation or other background tasks.
That said, if Huang did play this game while driving, it is a sign he was misusing Autopilot, and makes it more credible that he might have kept his eyes off the road in those crucial seconds.
Veering into the gore
Lane following systems are known to make mistakes with faded and badly painted lane markings, as was the case here. Tesla’s system could certainly have been smarter about this — the geometry is not one that a more aware system would interpret as a lane steering to the left. More interesting, however, is that a technology Tesla has explicitly denigrated, namely detailed lane-level maps, could have prevented such an error. If Tesla had maps describing the lanes in that region — in particular the presence of a gore, and the fact that the right carpool lane continues straight while the left carpool lane exits left — the car could have decided to track the right lane marker rather than the misidentified, left-curving left side of the gore. This is not trivial, of course, since road lines get repainted and you can’t blindly follow the old lines found in a map. Nonetheless, the map can tell you that things have changed and make you wary.
Further, a map would reveal the location of the concrete barrier and crash attenuator. While lane lines move, bridges rarely do, and at no time should a car drive into a location which has a concrete barrier unless it gets a positive confirmation that the barrier is gone and the road now goes through it. Generally, a car should not drive there even with such confirmation, but instead pause and ask for assistance rather than plow ahead, as the removal of permanent structures is extremely rare.
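The map cross-check argued for above can be sketched as a plausibility test applied before trusting the perceived lane. Everything here — the map representation, the threshold, the function names — is my invention to illustrate the idea, not any real system's design:

```python
def perceived_lane_is_plausible(perceived_heading_deg, map_lane_heading_deg,
                                barrier_in_projected_path, tolerance_deg=5.0):
    """Trust the camera's lane only if it agrees with a lane-level map
    and the map shows no permanent structure in the projected path."""
    if barrier_in_projected_path:
        # Bridges and concrete barriers almost never move. Never steer
        # toward one without positive confirmation it is gone; and even
        # then, the safer choice is to pause and ask for assistance.
        return False
    # Repainted lines mean the map can be stale, so small disagreements
    # are tolerated; a large one should at least trigger wariness.
    return abs(perceived_heading_deg - map_lane_heading_deg) <= tolerance_deg

# The map says the right carpool lane continues straight (heading ~0 deg),
# but the misread lines veer left toward the gore, at the end of which
# the map shows a concrete barrier.
perceived_lane_is_plausible(-8.0, 0.0, barrier_in_projected_path=True)  # fails the check
```

A failing check would not tell the car where the lane actually is, but it is enough to suppress the fatal combination of confident steering plus acceleration, and to alert the driver.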
Plowing into a barrier
Normal and crumpled crash attenuator
Even though the car did not realize it had gone into the wrong lane and was heading for a barrier, its perception system also failed to detect the barrier itself. Its cameras would have captured a clear image of the barrier, and its radar would have been getting many returns from the barrier and other objects in the environment. That is the problem — an area like this is full of returns from stationary objects. Radar tells you an object is stationary, but the whole world is stationary. Radar also has very poor resolution — almost none vertically, and about 5 degrees at best horizontally. Driving in the normal lane, you would expect to see radar returns just ahead and to the left, from the barrier and the climbing off-ramp. You can’t brake just because you see those — it is normal to see them. The radar will have been reporting that the obstacle was directly ahead rather than slightly to the left, but the accuracy must not have been enough for the Tesla to engage emergency braking. This has been a theme in several other Tesla crashes.
Once again, a map could help here, to characterize what radar returns should be expected in this area, and where. Even so, the low resolution of radar may make even that ineffective.
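The filtering of stationary radar returns mentioned above follows from simple physics: a stationary object appears to close at exactly the car's own speed. A common (and in this case fatal) heuristic is then to discard such returns as roadside clutter. This sketch is illustrative only; names and tolerances are mine, not from any production ADAS stack:

```python
def is_stationary(ego_speed_mps, closing_speed_mps, tol_mps=0.5):
    # An object fixed to the ground closes at exactly the ego speed
    # (within measurement noise).
    return abs(closing_speed_mps - ego_speed_mps) <= tol_mps

def keep_return_for_braking(ego_speed_mps, closing_speed_mps):
    # Typical heuristic: ignore stationary returns, because signs,
    # bridges, barriers and guardrails all produce them constantly,
    # and braking for each would make the system unusable.
    return not is_stationary(ego_speed_mps, closing_speed_mps)

keep_return_for_braking(30.0, 30.0)  # barrier's return: filtered out
keep_return_for_braking(30.0, 12.0)  # slower car ahead: tracked
```

The heuristic is usually right — almost all stationary returns are harmless — but it means the one stationary object squarely in the car's path gets discarded along with the clutter, which is exactly where a map of expected returns could break the tie.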
Cameras have no such problem with resolution. The barrier would be clear on the camera. The problem is that because a crumpled barrier is an unusual thing, Tesla’s neural networks were presumably not trained to identify it. It doesn’t look anything like the things it is on the lookout for, like the backs of cars and trucks. In particular, while a normal attenuator has yellow and black caution markers on it, the damaged one appears to not have had them. For whatever reasons, Tesla’s networks did not identify it as an obstacle.
Tesla has since improved their system to better detect unknown obstacles. Neural networks won’t identify things they have never seen, but there are computer vision techniques for noticing unknown objects, from clues like motion parallax or how they change as you move. Indeed, Elon Musk, when denouncing LIDAR, often claims that Tesla is close to being able to get the depth of every element in a camera scene the way LIDAR inherently does. Such an ability would, if perfect, always identify an obstacle, even one never seen before, but Tesla is not yet at that level.
The combination of unusual radar signals and the unidentified possibly stationary object should have been enough, but they were not. Tesla Autopilot, particularly in 2018, is still a work in progress, and still far from being able to always identify such things, which is why they sell it as an ADAS tool and not a self-driving system. At the same time, Elon Musk expressed high confidence that he would ship a “feature complete full self driving” system in 2019. Tesla did not do that, though it has released some functionality to beta testers. Any such system would need to perform extremely well in dangerous situations like this accident.
Most of the blame will probably be put on Huang. He knew Autopilot couldn’t handle this off-ramp. He had had to grab the wheel at least twice there. He complained about it. He drove it again — and just possibly was even playing a game on part of that drive. Autopilot is an ADAS system, not expected to see everything, expected to make mistakes and need driver correction, which it didn’t get.
Caltrans also could have done better, replacing the barrier sooner, and painting the lines — but the reality is that lines are going to fade, and crash barriers are never going to be replaced instantly, so robocars and ADAS systems have to deal with this. The gore is now brightly painted with added chevrons, and I have driven by it in Autopilot many times with confidence — in fact if there’s one place the Autopilot is not going to have a problem again, it should be here.
It should be noted that this was the investigation from which Tesla was ejected because they would not follow NTSB rules on staying silent. The NTSB chairman strongly suggested that Elon Musk actually hung up the phone on him at one point. I have sympathy with both sides — the NTSB needs to be able to conduct its investigation independently, but Tesla is a company that lives on internet time, and two years of people wondering how and why somebody died using Autopilot is not something they would be fond of.
Tesla won’t get blame, but there are lessons about how it can do better, and what it probably should do in building its supposed “full self driving” offering and eventual “real true actual fully full self driving” that is as yet in the more distant future. After this accident, Tesla improved their reading of faded lane markers like this, and their detection of stationary objects, though they have still had subsequent crashes into surprising stationary objects because it’s hard to detect those 100% with just cameras and radar, and Tesla is committed to using just those approaches.
In fact, a year later another Tesla in Florida would again crash into the broadside of a truck. While the truck was moving across the road, crossing traffic is treated like a stationary object by radar, and similar factors may have played a role there. The NTSB just released their preliminary docket on that crash, but it does not yet contain analysis of what the Tesla Autopilot did during that accident.