
When a driverless car crashes, who do you blame?

Image: Renault Twizy electric city cars parked during a presentation of Wattmobile, a self-drive Autolib-style electric car service, at Gare de l'Est train station in Paris, September 18, 2014. REUTERS/Charles Platiau

Victoria Golshani Shirazi

Every day, our lives more closely resemble the film “Back to the Future”. Granted, our cars are not yet flying, but they are driving autonomously. Our local delivery man is being replaced by a drone. Our homes, our cars and even our cities are going “smart”, becoming ever more connected to the digital world.

It is great to see these innovations come to life. But new products bring new risks, and right now those risks are preventing these innovations from reaching a mass market. In July, a self-driving Tesla car was involved in the first fatal crash. Just recently, a drone nearly collided with a passenger aircraft in London, after “months of warnings from aviation officials about the skyrocketing number of near misses between drones and airliners.” For us to truly benefit from these new technologies, insurers, policy-makers and technology producers must come together to ensure that liability is clearly assigned and risk properly insured.

In particular, three actions should be taken. First, the liability that comes with AI products should be clearly defined and transparent for all. Second, an internet-enabled network should be set up that allows the sharing of data from connected devices but that doesn’t compromise their security. And third, a globally recognized body should authorize and oversee standards governing the industry.

Let’s start with liability.

It’s clear that products equipped with AI can be better and safer than those that aren’t. It’s estimated, for example, that autonomous vehicles will decrease accident rates by 90%, becoming “the greatest health achievement of the century”.

But as these intelligent machines continue to evolve, the question of liability for the accidents they might cause comes into play. Who is liable for an injury caused by an autonomous or artificially intelligent “robot”? Is the owner of an artificially intelligent device still liable if they can no longer control its actions?

Currently, no regulation delivers a clear and harmonized (that is, globally recognized) mechanism for assigning fault for fully automated AI devices. Insurers must therefore work with technology manufacturers and policy-makers to close these gaps in liability ownership. Without clear liability ownership, insurers will be unable to insure individuals, manufacturers or technology designers against these losses.

Second, consider the creation of a secure network for data sharing.

In the age of “cyber-physical systems”, our cars are not only autonomous – they are “smart”, that is to say digitally connected. But digital connection automatically brings the risk of both human error and malicious hacking. In 2015, Chrysler recalled over 1.5 million vehicles after learning they were vulnerable to wireless hacking that allowed a third party to take control of dashboard functions, steering, transmission and brakes.

The threat goes far beyond our vehicles. Over the past few months, government systems in both the United States and Germany have been hacked. Key questions cannot be ignored: What if not just one car is hacked, but all of them? What if an organization intent on destruction takes control of our digital infrastructure and cuts off access to water and electricity?

Insurers have been voicing these systemic and catastrophic concerns at the World Economic Forum’s events on risk mitigation that I organize. They are asking to collaborate with governments, manufacturers and technology designers to clarify data needs and access frameworks, and to share data efficiently and effectively. It is a step in the right direction: all stakeholders need to work together to build transparency into the upfront design of emerging technologies, enabling consistent safety standards and risk-mitigation techniques at every step of new technology development.

Finally, consider the idea of a globally recognized body of standards.

Right now, the lack of insurance standards and of regulatory requirements mandating insurance coverage leaves society poorly protected against injury and loss. With the advent of the on-demand economy, many sharing platforms are either running without insurance or minimally self-insuring.

Further, there is no regulatory consistency between cities or regions. In Canada, for example, the legality of the sharing economy varies from place to place: it is legal in Toronto but not in Vancouver.

In a world where the car you hire on a sharing platform easily crosses city, state and national lines, harmonization is critical if we are to safeguard the social protections that have long been required of the private sector, such as anti-discrimination rules, background checks and licensure verification, and insurance requirements.

Without mandated insurance requirements or the creation of a single certification body, a driver on a car-sharing platform may not have the insurance needed to drive in a commercial capacity; an individual renting out their home for a few nights may not have the homeowner’s insurance coverage needed to protect it; and individuals themselves may not have the appropriate healthcare, pension or other regulated insurance that has long protected sellers and users alike.

In conclusion, although these emerging technologies provide society with great benefits, they also bring new and more complex risks that could result in severe loss of property and life. Fragmented regulatory frameworks, unclear liability and the lack of common standards have hindered insurers’ ability to develop new products to manage these risks. But by working together, we can not only cover these risks in the future – we might even be able to prevent them from happening in the first place.

We may never go back to the future, as in the film, but we can at least get back to a world where risks and liabilities are properly insured.

This article was written as part of the Mitigating Risks in the Innovation Economy initiative of the World Economic Forum. To learn more about the project, visit its website or e-mail Victoria.Shirazi@weforum.org
