Former Uber Self-Driving Exec Crashes Tesla on Autopilot: A Cautionary Tale


The former head of Uber’s self-driving program, Raffi Krikorian, recently crashed his Tesla Model X into a concrete wall while using Full Self-Driving (FSD) mode. The incident isn’t just about one collision; it exposes a critical flaw in the current state of automation: modern driver-assist systems demand instant human intervention when they fail, yet simultaneously lull drivers into a false sense of security. This uneasy balance raises questions about responsibility, psychological effects, and the unavoidable risks of early-stage autonomous technology.

The Crash and the “Moral Crumple Zone”

Krikorian describes the incident in The Atlantic: his Tesla unexpectedly jerked the steering wheel during a turn in a residential area, sending the car into a wall. No one was injured, but the experience highlighted a dangerous pattern. He frames it through the lens of researcher Madeleine Clare Elish’s concept of the “moral crumple zone”: when automation fails, the human absorbs the blame, even though the system was in control.

Tesla, like other automakers, legally positions the driver as ultimately responsible whenever these features are engaged. The company warns that the systems aren’t perfect and require immediate driver takeover. The issue, however, goes beyond legal liability.

The Psychology of Semi-Autonomy

Krikorian argues that semi-autonomous systems create a psychological trap: they perform well enough to discourage active driving, but not well enough to eliminate the need for human attention. This leads to vigilance decrement, a known phenomenon in which attention drifts while monitoring a system that rarely fails. The result? Humans become less prepared to react when an unexpected event occurs.

The problem is physiological, too. Even in peak condition, a human needs several seconds to refocus, decide on a course of action, and execute it. That lag makes instant takeover unrealistic in many failure scenarios. The technology relies on the human to rescue the situation, yet the human is held accountable when the rescue fails.

An Unavoidable Phase?

The current stage of autonomous technology requires real-world testing, which means accepting imperfect systems that demand immediate human intervention. The better these systems become, the easier it is to forget who’s truly in charge. Crashes serve as brutal reminders of this reality.

This middle ground, where automation works well enough to build trust but not well enough to eliminate risk, may be unavoidable for now. The challenge lies in acknowledging this limitation and mitigating its psychological and physiological consequences before further collisions occur.
