The vast majority of car accidents are due to human error; different studies place the figure between 93 and 99 percent. Fatigue, distraction, drug or alcohol impairment, and plain bad driving contribute to nearly 1.3 million road deaths worldwide every year, about 37,000 of them in the United States. Cell phones alone cause a whopping 26 percent of US car accidents. It’s almost astonishing that millions of unpredictable humans are licensed to pilot what Tesla CEO Elon Musk has dubbed a “two-ton death machine.”

But what if humans weren’t behind the wheel? Musk says the autopilot program of his self-driving electric Tesla reduces the risk of car accidents by 50 percent. Google, Tesla’s chief competitor in the market for autonomous vehicles, agrees. The tagline for its project reads, “Imagine if everyone could get around easily and safely, regardless of their ability to drive.” Google’s fleet of self-driving cars has logged well over a million safe miles, and until recently, all 17 of its minor accidents had been caused by the negligence of other (human) drivers.

When Google asks you to imagine something, it usually looks pretty good. A future that eliminates human error in driving is no different. Thousands of lives and hundreds of billions of dollars would be saved. Transit would be efficient, green, accessible, and safe. And while this vision may be enticing, the technicalities are still foggy. For example, who’s to blame when something goes wrong?

The New Industry Stumbles

On February 14, 2016, one of Google’s self-driving cars collided with a city bus in Mountain View, California. The vehicle’s autopilot program was at least partially responsible for the crash. Both the car and its “driver” assumed the bus would yield; the bus assumed the car would yield.

Though only a minor fender-bender at about 2 mph, this accident was the industry’s first blemish. Google called it “a classic example of the negotiation that's a normal part of driving.” It was also a wake-up call that cars can be as fallible as humans in that negotiation.

And on May 7, 2016, Tesla owner Joshua Brown died in an accident while using his car’s autopilot technology. Brown was traveling down a Florida highway in his Tesla Model S when a long white truck turned left and crossed his path. Neither Brown nor his Tesla distinguished the truck from the brightly lit horizon. Because it detected no obstacle, the Tesla plowed into and went under the truck. It then hit a fence and a power pole on the other side. Brown died instantly.

Tesla’s Autopilot program is a “public beta.” The technology is still in development and sold only to a limited number of users. Google owns all its self-driving cars; it’s working on the technology to sell to auto manufacturers, not to become a carmaker itself. But even though these accidents occurred during development, they’ve gotten lots of people talking about who’s to blame when both a computer and a human are at fault and someone gets hurt.

No Brakes on this Booming Industry

Tesla CEO Elon Musk has said that in the future any cars that “don’t have full autonomy will have negative value. It will be like owning a horse. You will only be owning it for sentimental reasons.”

This is a bold statement. But is it improbable? Many would argue it’s not. The industry is moving forward at a brisk clip. Tesla and Google are the most visible names designing autonomous vehicles, but they’re not the only ones. Dozens of other companies are throwing their hats into the ring. Mercedes-Benz is making luxury models. GM has partnered with Lyft to offer self-driving taxis. And then there’s the 2014 IHS study that forecasts self-driving cars will number 11.8 million by 2035. It also predicts most vehicles on the road will be self-driven by 2050.

The burning question about self-driving cars is no longer whether they’re possible, but who is responsible for the inevitable accidents. Google’s scrape with a city bus and Tesla’s tragic fatality have lawmakers, consumers, and insurers scrambling for an answer. But how can we regulate an industry that’s evolving faster than legislation can keep pace?

New Laws and Regulations

Twenty-eight states have passed legislation that specifically addresses self-driving cars. Interestingly, most of these bills and laws were written to encourage their production or use. Some states require a special permit or license, but in general lawmakers have worked to make it easier to get these autos on the road.

National policy is following suit. The Obama administration has asked the National Highway Traffic Safety Administration to establish national regulations for self-driving cars by the end of this summer. It also proposed a $4 billion plan to test and prepare vehicles and roadways for self-driving cars over the next 10 years.

Why are lawmakers so interested in this industry? One likely reason is the impact it could have on human lives. Joshua Brown’s death was tragic, but 3,500 people die on the road every day worldwide. It would be shortsighted to stall technology that could prevent many of those deaths because of a single accident.

The Future of Car Insurance

Self-driving cars seem to be good news everyone can agree on, from corporations to Congress to Mothers Against Drunk Driving. Everyone except auto insurers, that is.

Car-accident liability in America has always been a bit of a blame game. Whoever causes an accident is held responsible for its cost, from medical bills to car repairs. But what happens when the driver, the manufacturer, and the designer each share a portion of the fault?

Volvo president Håkan Samuelsson announced in October 2015 that Volvo would assume full liability for accidents that occur while its autopilot system is in use. He also named the US the global leader in self-driving technology, a position he warned it may lose if it creates “a patchwork of rules and regulations.”

Erik Coelingh of Volvo's technical team has said that Volvo’s self-driving cars will have so many backups and controls that a human driver should never need to intervene. “Whatever system fails,” he says, “the car should still have the ability to bring itself to a safe stop.” And if it doesn’t? That’s on Volvo.

Google and Mercedes quickly followed Volvo’s lead. It seems manufacturers are willing to assume more responsibility in order to make autonomous vehicles available in a litigious country like the US.

Even if they haven’t explicitly accepted liability, most studies suggest fault will lie with the car companies, not drivers. In addition, Germany is working toward making it easier than ever to blame autopilot systems. In-the-works legislation would require the country’s carmakers to install a “black box” in autonomous vehicles to record when and how the autopilot system was operating in the event of a crash. It’s likely these black boxes will be used worldwide down the road.
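
To make the black-box idea concrete, here’s a minimal sketch in Python of the kind of record such a device might keep. The field names and structure are purely illustrative assumptions, not Germany’s proposed specification or any manufacturer’s actual format.

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class AutopilotLogEntry:
        """One hypothetical black-box record. Fields are illustrative
        assumptions, not any real regulatory specification."""
        timestamp: str               # when the state was recorded (UTC, ISO 8601)
        autopilot_engaged: bool      # was the autopilot system active?
        speed_kph: float             # vehicle speed at the time
        driver_hands_on_wheel: bool  # was the driver supervising?
        event: str                   # e.g. "engage", "disengage", "collision"

    def record_event(log: list, engaged: bool, speed_kph: float,
                     hands_on: bool, event: str) -> None:
        """Append a timestamped entry so investigators can later see
        when and how the autopilot was operating around a crash."""
        log.append(AutopilotLogEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            autopilot_engaged=engaged,
            speed_kph=speed_kph,
            driver_hands_on_wheel=hands_on,
            event=event,
        ))

    # Example: the last entries before a crash tell the liability story.
    log: list = []
    record_event(log, engaged=True, speed_kph=100.0, hands_on=False, event="engage")
    record_event(log, engaged=True, speed_kph=118.0, hands_on=False, event="collision")
    print(json.dumps([asdict(e) for e in log], indent=2))

The point of such a log isn’t the technology, which is simple, but the evidence: a record showing the autopilot was engaged and the driver’s hands were off the wheel would shift the liability conversation from the driver toward the manufacturer.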

Autonomous vehicles are expected not only to be safer, but also to shift some liability onto manufacturers. This could be good news for the future owners of these autos: their insurance rates would be much lower, if they need insurance at all. Insurers, on the other hand, might run out of customers by 2050.

The future of technology is now outpacing not only the law but the imagination of consumers. Remaining industry questions include:

  • Will car insurance policies vanish altogether?
  • Will a special auto insurance policy for autonomous vehicle owners become common? What would that look like?
  • Will Volvo and other manufacturers regret their offer after getting hit with hundreds of lawsuits come 2020?
  • Will self-driving cars be available through rideshare and car services? Privately?
  • Will the US continue to lead the pack in autonomous vehicles, or will regulations curb progress?
  • What effect will self-driving cars have on traffic and congestion?