Industry Reports

Feb 5, 2026

2 minutes read

The Dangerous Ideas Behind Early Self-Driving Cars

Yashika Vahi

Community Manager

SELF DRIVING CAR
CAR SOFTWARE
AI PLANNING
SOFTWARE DEVELOPMENT


Imagine a car that is trying to learn how to drive by itself.


Inside that car is a big computer brain made of many parts. One part sees the road. One part tries to understand what it’s seeing. One part decides what to do next. One part controls the steering, brakes, and speed. Together, this is called the driving stack.

The goal of the self-driving industry is simple to say and very hard to achieve:

“Can a computer drive more safely than a human?”

To answer that question, companies began testing self-driving cars on real roads. But there was a rule everyone agreed on: A human will always be there to take over if something goes wrong. That human was called a safety driver. The plan sounded reasonable. The computer drives. The human watches. If the computer makes a mistake, the human steps in. This plan wasn’t made for passengers.

It was made for regulators, city governments, and safety authorities—the people who decide whether self-driving cars are allowed on public roads. The promise was: “We can learn fast without hurting anyone.”

That promise depended on one thing:

the human would always save the situation in time.


On March 18, 2018, in Arizona, an Uber self-driving test vehicle was driving at night.


A pedestrian was crossing the road. The car’s sensors saw the person. But the software didn’t know what it was seeing. It kept changing its mind:

  • maybe it’s a car

  • maybe it’s a bicycle

  • maybe it’s nothing

Because the system wanted to be sure, it waited. And while it waited, the car kept moving.

Here’s the most important planning decision: Uber had disabled automatic emergency braking during testing. Why? Because sudden braking felt uncomfortable. Because false alarms caused jerky rides. Because the plan assumed: “The human will intervene.”

But humans are bad at being last-second backups. The safety driver looked down briefly. By the time they looked up, it was too late. The car did not brake. The human did not react in time. The pedestrian was killed.

This was the first pedestrian death caused by a self-driving test vehicle. And it wasn’t random.

Investigations showed that the system detected the pedestrian seconds before impact but didn’t act, that emergency braking had been disabled, and that the design relied too heavily on human reaction. The system followed the plan. The plan failed.

After the crash, everything slowed down. Programs were paused. Regulators stepped in. Companies rewrote their safety strategies. Because the industry learned something painful: You cannot use humans as emergency brakes for fast machines.


The biggest mistake early self-driving systems made was pretending that one very smart computer could be trusted to make all the decisions.


That approach doesn’t work in the real world. Future autonomous driving software cannot be built as a single “brain.” It has to be built in layers.

One part of the system drives the car—deciding when to go, slow down, or turn. But another part must exist only to watch the first one. If the driving system starts behaving dangerously, the safety system must be able to step in and stop the car.

This safety layer can’t be optional. It can’t be turned off just because it makes the ride smoother. Its entire job is to say, “No. This isn’t safe,” even if the main system disagrees. In simple terms: if one brain makes a mistake, another brain must stop it.
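
To make the “two brains” idea concrete, here is a minimal sketch in Python. Every name in it (Scene, Command, the time-to-collision field, the 2-second threshold) is an illustrative assumption, not any company’s real architecture; the only point it shows is that the safety monitor runs after the planner and can always overrule it.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Scene:
    """What perception currently believes (simplified, hypothetical)."""
    time_to_collision_s: Optional[float]  # None if no obstacle is predicted in the path


@dataclass
class Command:
    """A driving command proposed by the planning layer."""
    speed_mps: float      # target speed in metres per second
    brake: bool = False   # request an immediate stop


def planner(scene: Scene) -> Command:
    """The 'driving brain': optimises for smooth, efficient driving."""
    return Command(speed_mps=13.0)  # roughly 47 km/h cruise


def safety_monitor(scene: Scene, proposed: Command) -> Command:
    """Independent layer whose only job is to veto unsafe commands."""
    if scene.time_to_collision_s is not None and scene.time_to_collision_s < 2.0:
        # Override the planner: brake now, even if the ride becomes uncomfortable.
        return Command(speed_mps=0.0, brake=True)
    return proposed


if __name__ == "__main__":
    scene = Scene(time_to_collision_s=1.4)
    final = safety_monitor(scene, planner(scene))  # the second brain always has the last word
    print(final)  # Command(speed_mps=0.0, brake=True)
```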

Another hard lesson is about timing. Many early self-driving systems waited too long to act because they wanted to be certain about what they were seeing. They wanted perfect confidence before braking. But real life doesn’t wait for confidence. People cross streets unpredictably. Objects appear suddenly. Situations change faster than software certainty can form. Future systems must flip this rule. If the car is unsure, it should slow down. If something looks strange, it should brake early. Choosing to be cautious should be rewarded, not punished.

Stopping too early might be annoying, but not stopping in time is deadly.
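
One way to encode “if the car is unsure, it should slow down” is to make low confidence itself a trigger for caution. The sketch below is only illustrative; the thresholds, parameter names, and speed values are assumptions. What matters is the shape of the rule: as confidence drops, the allowed speed drops, instead of the car holding speed while it waits to become certain.

```python
def max_safe_speed(object_confidence: float, distance_m: float) -> float:
    """Return a maximum allowed speed (m/s) given how confident perception is
    about an object ahead and how far away it is. Thresholds are illustrative.

    The rule is deliberately one-sided: low confidence can only reduce speed,
    never increase it.
    """
    if distance_m > 60.0:
        return 15.0   # nothing close enough to matter yet
    if object_confidence >= 0.9:
        return 10.0   # confident classification: pass slowly and deliberately
    if object_confidence >= 0.5:
        return 5.0    # unsure what the object is: crawl
    return 0.0        # cannot tell at all: brake and stop


# Early systems did the opposite: they held speed while waiting for certainty.
print(max_safe_speed(object_confidence=0.4, distance_m=35.0))  # 0.0 -> stop
```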


Conclusion: how future self-driving software needs to be planned


  • Humans get tired. They look away. They hesitate. They freeze under stress. Planning that assumes a human will save the situation at the last second is not safety—it’s wishful thinking. The car must already be slowing down or stopping before asking a human to help. The human can be a backup, but never the main safety plan. Every autonomous system must know exactly what to do when it gets confused. If sensors disagree, if confidence drops, if the situation falls outside what the system understands, the response must be predictable: slow down, pull over, stop safely.


  • Another critical planning change is hard boundaries. The software must know where it is allowed to operate—and where it is not. Which roads. Which speeds. Which weather. Which times of day. If the situation goes outside those limits, the system must refuse to continue and disengage safely. These limits cannot live in slide decks or policy documents. They must be enforced by code, as shown in the sketch after this list. If the playground becomes unsafe, the game ends.


  • Testing also has to become more honest. Driving millions of normal miles is not enough. Most driving is boring—and most accidents come from rare, strange situations. Future planning must focus on what can go very wrong: bad timing, sensor lies, confusing edge cases, unexpected human behavior.


  • Equally important, autonomous systems must be able to explain themselves. After an action, the system should be able to say what it saw, what it believed, why it acted, and which safety rule guided that decision. This isn’t just for engineers. It’s for regulators, safety reviewers, and the public. Explainability must be built into the software from the start.


  • Finally, the industry must slow down how software changes reach public roads. Learning fast feels exciting, but fast learning is dangerous when mistakes hurt people. New behavior should begin weak, cautious, and limited. Authority should grow slowly as evidence builds. Core behavior should not change frequently, and new logic should not be given full control immediately.
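
As referenced in the hard-boundaries point above, here is a minimal sketch of operating limits enforced by code rather than by policy documents. The limit values, field names, and fallback action are assumptions made up for illustration; the property that matters is that driving outside the approved domain can lead to only one predictable outcome: a safe disengagement.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    CONTINUE = "continue"
    SLOW_AND_PULL_OVER = "slow_and_pull_over"  # the predictable fallback


@dataclass
class OperatingLimits:
    """Hard boundaries enforced in code, not in a slide deck. Values are illustrative."""
    max_speed_mps: float = 20.0
    allowed_weather: tuple = ("clear", "cloudy")
    daylight_only: bool = True


@dataclass
class Conditions:
    """What the vehicle observes about its current situation."""
    speed_mps: float
    weather: str
    is_daylight: bool


def check_operating_domain(limits: OperatingLimits, now: Conditions) -> Action:
    """If the situation falls outside the approved domain, the only allowed
    answer is a safe, predictable disengagement."""
    outside = (
        now.speed_mps > limits.max_speed_mps
        or now.weather not in limits.allowed_weather
        or (limits.daylight_only and not now.is_daylight)
    )
    return Action.SLOW_AND_PULL_OVER if outside else Action.CONTINUE


# Rain at night, outside the approved domain: the game ends, safely.
print(check_operating_domain(
    OperatingLimits(),
    Conditions(speed_mps=12.0, weather="rain", is_daylight=False),
))  # Action.SLOW_AND_PULL_OVER
```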

When lives are involved, speed is not progress. Clarity is.
