A traffic accident in San Francisco this week has cast serious doubt on whether the local government made the right decision to allow robotaxis on public roads. A hit-and-run incident that left a pedestrian gravely injured is raising questions about whether autonomous vehicles (AVs) can handle the unexpected as well as, or better than, human drivers. We posted a story about a grassroots movement rebelling against robotaxis in late August, and it appears this issue keeps going from bad to worse.
The incident involved both a human-driven car—which made the initial impact with the pedestrian—and a Cruise AV, which then also struck the victim a second time. According to the company, the pedestrian in question stepped onto the roadway from the right, crossing the path of the approaching robotaxi, which was already tracking her movements using its multiple cameras. She had cleared the right lane where the robotaxi was traveling and was almost halfway across the intersection when she was struck by the human-driven car in the left lane. After the initial impact, she was flung back into the right lane, where she was run over by the robotaxi.
Driverless cars need to do three things: see their environment, predict what’s about to happen, and decide what to do. Prediction is the most challenging step, because AVs lack the instincts a human driver relies on to foresee an outcome.
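To make that three-stage loop concrete, here is a minimal sketch in Python. All names, thresholds, and the constant-velocity prediction are illustrative assumptions for this article, not any vendor's actual code; real AV stacks use learned models at every stage.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str              # e.g. "pedestrian" or "vehicle" (illustrative labels)
    position: tuple        # (x, y) in meters, relative to the ego vehicle
    velocity: tuple        # (vx, vy) in meters per second

def predict(obj: TrackedObject, horizon_s: float = 2.0) -> tuple:
    """Stage 2: naive constant-velocity forecast of where the object will be."""
    x, y = obj.position
    vx, vy = obj.velocity
    return (x + vx * horizon_s, y + vy * horizon_s)

def plan(predictions: list) -> str:
    """Stage 3: brake if any predicted pedestrian position falls in the ego lane.

    Assumes the ego lane is roughly |y| < 1.5 m and looks 30 m ahead --
    arbitrary numbers chosen only for this sketch.
    """
    for kind, (x, y) in predictions:
        if kind == "pedestrian" and abs(y) < 1.5 and 0 < x < 30:
            return "brake"
    return "proceed"

# Stage 1 (perception) is stubbed: pretend sensors tracked one pedestrian
# 20 m ahead, 5 m to the left, walking toward the ego lane at 2 m/s.
ped = TrackedObject("pedestrian", (20.0, 5.0), (0.0, -2.0))
decision = plan([(ped.kind, predict(ped))])
print(decision)  # → brake
```

The fragility the article describes lives in `predict`: a constant-velocity model (or any model trained on typical behavior) has no way to anticipate a pedestrian being launched sideways by another car's impact.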
The expectation that driverless cars will be safer than cars driven by humans is the central selling point of this technology. But as they roll out in more and more cities, a string of accidents in cities like San Francisco and Austin has put the public on edge.
This most recent incident is likely to exacerbate that lack of trust, regardless of who is to blame. It also raises the question: would a human driver have behaved differently? Video footage shows the pedestrian in the crosswalk being struck by a human-driven car, then bouncing off that car’s windshield and into the path of the robotaxi.
In a statement, Cruise said its robotaxi “braked aggressively to minimize the impact” but was unable to stop before rolling over the woman and coming to a halt. Police are still investigating the cause of the accident, but their main witness could wind up being the robotaxi itself.
Several questions have arisen. Even though its lane was clear, should the Cruise vehicle have been more cautious, knowing a pedestrian was crossing against the light on a busy street at night?
“When you see something bad happening on the road, a reasonable driver would slow down as a precaution,” said AV expert Philip Koopman, an associate professor at Carnegie Mellon University. “A reasonable driver doesn’t say, ‘That’s not my problem. I’m just gonna keep going.’”
Self-driving cars can only make decisions based on what they’re programmed to do. AVs aren’t coded “to anticipate a human projectile coming from the other way,” said Jennifer Dukarski, a Michigan lawyer specializing in AVs.
To be honest, I’m not surprised by this outcome. It had to happen. But is there a solution? Maybe. Instead of relying on lines of code with specific instructions for a set of possible scenarios, the latest version of Tesla’s “Full Self-Driving” technology teaches itself how to drive by using AI to process billions of frames of video of how humans actually drive. There’s an almost infinite list of things that can happen on the road — which is why teaching AVs to prepare for the unexpected is so difficult. And because of that, I don’t believe we will see AVs used en masse on public roads in my lifetime.