In a comment sent to The Conversation and to The Verge, Uber declined to get into specifics.
But if The Conversation is right, this is very bad news for the industry as a whole, and especially for self-driving car technology itself: the on-board software saw the woman crossing the road and decided not to take action. The answers to other questions the crash raises — “Is it too early for fully autonomous vehicles? Is the technology not ready?” — seem to be “yes,” even though it was the software’s tuning, determined by humans, that caused the crash.
But what about the “safety driver,” who was behind the wheel during the crash? According to the New York Times, Uber’s robotic vehicle project “was not living up to expectations months before [the crash].” Compounding Uber’s problems, the company cut the number of safety drivers from two per vehicle to one around October of last year. It had also reduced the vehicles’ safety sensors from seven LIDAR units to a single LIDAR sensor on the roof before the crash.
There’s no question that Uber’s fatal crash in Arizona was a huge setback for autonomous vehicle testing, and this most recent report, if substantiated, could set the effort back even further.