What Dat?

The NTSB report includes a second-by-second timeline showing what the software was "thinking" as it approached Herzberg, who was pushing a bicycle across a multi-lane road far from any crosswalk:

  • 5.2 seconds before impact, the system classified her as an "other" object.
  • 4.2 seconds before impact, she was reclassified as a vehicle.
  • Between 3.8 and 2.7 seconds before impact, the classification alternated several times between "vehicle" and "other."
  • 2.6 seconds before impact, the system classified Herzberg and her bike as a bicycle.
  • 1.5 seconds before impact she became "unknown."
  • 1.2 seconds before impact she became a "bicycle" again.

It saw her five seconds before it ran her over.

Two things are noteworthy about this sequence of events. First, at no point did the system classify her as a pedestrian. According to the NTSB, that's because "the system design did not include consideration for jaywalking pedestrians."

Second, the constantly switching classifications prevented Uber's software from accurately computing her trajectory and realizing she was on a collision course with the vehicle. You might think that if a self-driving system sees an object moving into the path of the vehicle, it would hit the brakes even if it wasn't sure what kind of object it was. But that's not how Uber's software worked.

The system used an object's previously observed locations to help compute its speed and predict its future path. However, "if the perception system changes the classification of a detected object, the tracking history of that object is no longer considered when generating new trajectories," the NTSB reports.

What this meant in practice was that every time the system changed its mind about what kind of object Herzberg and her bike were, it threw away her tracking history and acted as though she wasn't moving.
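
The NTSB's description points to a failure mode that's easy to reproduce in miniature. Here is a minimal sketch, in Python, of a tracker that estimates an object's velocity by differencing its last two observed positions but wipes that history whenever the classification changes. This is my illustration, not Uber's code; the TrackedObject class and every name in it is hypothetical. Flip the label every frame, as happened in the seconds before the crash, and the velocity estimate never gets off zero:

    from dataclasses import dataclass, field

    @dataclass
    class TrackedObject:
        label: str                                    # current classification
        history: list = field(default_factory=list)   # (t, x, y) observations

        def observe(self, t, x, y, label):
            # The reset-on-reclassification rule the NTSB describes:
            # a new label throws away the accumulated motion history.
            if label != self.label:
                self.history.clear()
                self.label = label
            self.history.append((t, x, y))

        def velocity(self):
            # With fewer than two observations there is nothing to
            # difference, so the object appears stationary.
            if len(self.history) < 2:
                return (0.0, 0.0)
            (t0, x0, y0), (t1, x1, y1) = self.history[-2], self.history[-1]
            dt = t1 - t0
            return ((x1 - x0) / dt, (y1 - y0) / dt)

    tracker = TrackedObject(label="other")
    tracker.observe(0.0, 10.0, 40.0, "other")
    tracker.observe(0.5, 10.0, 38.5, "vehicle")   # reclassified: history wiped
    print(tracker.velocity())                     # (0.0, 0.0) -- "not moving"
    tracker.observe(1.0, 10.0, 37.0, "other")     # flipped again: wiped again
    print(tracker.velocity())                     # still (0.0, 0.0)

A planner that extrapolates future positions from those velocity estimates will conclude, right up until impact, that the object is standing still beside the lane.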

...

A 2018 report from Business Insider's Julie Bort suggested a possible reason for these puzzling design decisions: the team was preparing to give a demo ride to Uber's recently hired CEO, Dara Khosrowshahi. Engineers were asked to reduce the number of "bad experiences" riders had. Shortly afterward, Uber announced that it was "turning off the car's ability to make emergency decisions on its own, like slamming on the brakes or swerving hard."

Don't worry, the software driving the car that's coming at you will have been written by uncompromised omnipotent super-humans who won't overlook anything.

One thought on “What Dat?”

  1. Mark Low

    The only places I can think of that would be suitable for self-driving cars are huge, sprawling corporate campuses where ferrying people back and forth between buildings is the norm, and the grounds of airports, where the likelihood of encountering a pedestrian is zero.

    The idea that they are suitable in places where people actually live is so ridiculous I can’t believe these articles even entertain that premise.

    In my Brooklyn neighborhood, the expressway runs directly over Park Avenue. If you start your car on Park Avenue, Google Maps thinks you’re on the expressway.

    I feel like self-driving cars would have been a great thing to plan for in, like… 1900? Because the roads weren’t developed with them in mind.

    But now, this technology is not about cars at all. It’s about investment dollars. That seems to be the missing component in every discussion about their purported “safety.” Even the provided quote from Uber shows us how easy it is to court those dollars: they put together a team that cost x, and likely courted 4x in investment dollars. Lawsuits will probably also cost x. So it was profitable, regardless of outcomes.
