Most of the hype surrounding autonomous cars is about their ability to help you drive better—or, more accurately, to drive for you. The aim, at least, is to reduce crashes and injuries due to human error. But is this really true, and how far do we have to go before autonomous cars begin filling up the roads?
Let’s take a closer look …
We Are Bad Drivers
The stats are undeniable. Despite driving being one of the most common modes of modern transport, humans are really bad at it and cause many deaths. In fact, according to the World Health Organization’s latest data (2015), there were over 1.2 million car-related fatalities worldwide that year!
Furthermore, the vast majority of car crashes are caused by human error. However, to determine whether driverless vehicles can do better, we’d also need a baseline: the rate at which human drivers manage to avoid crashing. Only then, with rigorous testing that replicates real-world conditions, could we properly compare that baseline against the emerging technology.
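To see why a raw body count isn’t enough for that comparison, here’s a minimal sketch (in Python, using purely hypothetical placeholder numbers, not real data) of the kind of apples-to-apples check that would be needed: normalizing each fleet’s crashes by the miles it actually drove.

```python
def crashes_per_million_miles(crashes: int, miles_driven: float) -> float:
    """Normalize crash counts by exposure (miles driven)."""
    return crashes / (miles_driven / 1_000_000)

# Hypothetical placeholder figures, purely for illustration:
human_rate = crashes_per_million_miles(crashes=6_000_000, miles_driven=3_200_000_000_000)
autonomous_rate = crashes_per_million_miles(crashes=30, miles_driven=10_000_000)

print(f"Human drivers:    {human_rate:.2f} crashes per million miles")
print(f"Autonomous fleet: {autonomous_rate:.2f} crashes per million miles")
```

Even a rate comparison like this only holds if both fleets faced similar roads, weather, and traffic—which is exactly the kind of rigorous, real-world testing that hasn’t happened yet.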
Remember: autonomous cars will have to be able to deal with human drivers on the roads until all cars are replaced with driverless models. It seems likely that it would be easier for robots to deal exclusively with other robots than with humans—some of whom will still be riding around in old rust-buckets unless they can afford a loan to upgrade.
Not Much Data
There’s an assumption that driverless cars are safer because their response to the road and its surroundings is essentially a mathematical calculation, which can be computed faster than a human can react. That’s theoretically true, but the car would need to account for all the variables that humans can.
Furthermore, actual real-world data is hard to come by. Stats on human driving are plentiful, covering everything from dangerous roads and bad weather to a clear day on the highway. Driverless cars, by contrast, have mainly been tested on straightforward, multi-lane highways, where the main tasks are maintaining speed, staying in the correct lane, and avoiding other cars—a fairly simple job, even for humans.
But what about Newfoundland’s Wreckhouse, where your car could be blown off course or caved in by a moose?
There are a lot of aspects to driving!
So, while the inherent advantages of autonomous cars are fairly obvious (they don’t get tired, angry, or suffer from any other human emotions that can impair driving, and they don’t get drunk, commit suicide, or carry out terror attacks), they are not yet advanced enough to respond to the ambiguous and uncertain road events that humans face every day, across all kinds of roads and conditions. Nor do they think ahead or anticipate the way humans do.
Could a driverless vehicle have the human knowledge to react appropriately in the middle of an LA riot? Could it judge the best course of action to protect the lives of its passenger and the people around it?
Unforeseen Problems
We’ve all heard the typical sci-fi paranoia about autonomous cars getting hacked or going rogue and causing untold casualties. While there may be an element of logic to those fears, there are also problems we haven’t even conceived of yet that will invariably affect the technology.
After all, look at the history of aviation. When automated technologies were introduced, accidents increased for a period of time as the kinks got ironed out. Do you want to be the crash test dummy in the tech giants’ driverless experiment?
It’s hurdles like these that the general public, politicians, and driverless automakers will need to face before we even get to the point of adequate safety. It may only take a few high-profile deaths before we all collectively yell … nope!
The Future
In light of this, getting a true measure of whether autonomous cars will drive better (read: more safely) than humans will take time and a lot more testing. It will also have to account for the mix of human-driven and driverless cars sharing the roads at the same time.
The good news is that as the technology advances, safety in human-driven cars will increase. It’s only logical that we adopt all of the crash-prevention mechanisms while still maintaining overall control.