
Here's how California is approaching the ethics of AI

California's State Capitol building in Sacramento - is this the vanguard for the development of responsible AI? Image: Jeff Turner / Flickr

Brandie Nonnecke
Director, CITRIS Policy Lab, University of California, Berkeley
Jessica Cussins Newman
Research fellow, Center for Long-Term Cybersecurity, UC Berkeley

Amid growing concern that AI-enabled systems could perpetuate discrimination and bias and infringe on privacy, California has introduced several bills intended to curb these negative impacts. Chief among them are bills targeting specific AI-enabled technologies, such as facial recognition systems. On May 14, 2019, San Francisco became the first major US city to ban the use of facial recognition technology by city agencies and law enforcement. Two months later, the neighbouring city of Oakland implemented similar restrictions.

These may be city-level laws, but their passage has influenced state and federal legislation. In California, a bill called the Body Camera Accountability Act seeks to prohibit the use of facial recognition in police body cameras, while another would require businesses to publicly disclose their use of facial recognition technology. At the federal level, four pieces of legislation have been proposed to limit the use of this technology, especially in law enforcement.

In the wake of the EU’s transformative General Data Protection Regulation, California passed the first comprehensive consumer data privacy law in the US. The California Consumer Privacy Act (CCPA) became law in 2018 and is set to take effect in January 2020. The CCPA gives consumers the right to ask businesses to disclose the data they hold about them, to request that their data be deleted, to restrict the sale of their data to third parties, and to sue over data breaches. The Act has made its influence felt at the federal level too, prompting efforts to develop a federal data privacy law. These data privacy laws are particularly relevant to data-dependent fields like AI.

Election integrity

In response to the serious threat that AI-enabled bots and deepfakes pose to election integrity, the California government has pushed forward progressive pieces of legislation that have influenced federal and international efforts. Passed in 2018, the “Bots Disclosure Act” makes it unlawful in California to use a bot, without disclosure, to influence a commercial transaction or a vote in an election. The law also covers bots deployed by companies in other states and countries, forcing those companies either to develop bespoke compliance practices for Californian residents or to harmonize their practices across jurisdictions to maintain efficiency. At the federal level, the “Bots Disclosure and Accountability Act” includes many of the same strategies proposed in California. The California “Anti-Deepfakes Bill” seeks to mitigate the spread and impact of malicious political deepfakes before an election, and the federal “Deepfakes Accountability Act” seeks to do the same.

Risks and challenges

While California may be leading the implementation of responsible AI governance strategies, ill-conceived laws, especially those that shape similar strategies at the federal and international levels, will cause more harm than good. Take, for example, the “Bots Disclosure Act”: some commentators have criticized the Act for failing to clearly define what is and is not a “bot”, and for leaving vague the roles and responsibilities of the parties, especially platforms, expected to identify and stem the influence of malicious bots. This lack of clarity weakens the law’s enforceability and impact. Federal initiatives modeled on California’s law would only further erode accountability and public trust.

There is also the risk that beneficial legislation could become unhelpfully politicized. We are seeing increasing federal pushback against the “California effect,” as exemplified by recent efforts to revoke California’s ability to implement stricter emission standards than federal guidelines. Federal initiatives may seek to curtail the state’s impact on national and international standards for responsible AI governance. This is already being witnessed in federal efforts to preempt the CCPA.

Getting it right

California is moving quickly on AI legislation, ranging from oversight of discrimination and bias to protections for privacy and election integrity. The state’s progressive AI legislation has already had a marked influence on federal efforts, and will likely have global reach if California-based AI companies, including Google, Facebook, and OpenAI, alter their practices. California has an opportunity and an obligation to lead the way in establishing effective standards and oversight that ensure AI systems are developed and deployed safely and responsibly. It can provide guidance on responsible AI governance for the rest of the country and the world, but it must exercise due diligence in identifying and mitigating negative impacts before it is too late.


