JPM San Francisco 2026 Briefing
US State AI Laws

California AI Laws: Training Data, Employment AI, and Political Deepfakes

California regulates AI across more domains than any other US state—training data transparency, employment discrimination, bot disclosure, and political advertising all have active laws here.

12 min read · January 2026 · Official Sources

Executive Summary

California regulates AI across training data, employment, consumer disclosure, and political advertising—more domains than any other US state. AB 2013 (effective January 2026) requires generative AI developers to publicly disclose training data information. The Civil Rights Council’s employment AI rules (effective October 2025) make employers liable for AI-driven discrimination even without intent.

SB 1001 (2019) requires bots to disclose their artificial identity, while AB 2355 (2025) mandates AI disclosure in political advertising. The ambitious SB 1047 frontier AI safety bill was vetoed by Governor Newsom in September 2024, but its concepts continue to influence national AI policy discussions.

California's deepfake laws targeting platforms (AB 2655) and distribution (AB 2839) have faced legal challenges—blocked and struck down respectively on Section 230 and First Amendment grounds—highlighting the constitutional complexities of AI content regulation.

AB 2013: Training Data Transparency

Signed September 28, 2024 and effective January 1, 2026, AB 2013 is the first US law requiring generative AI developers to publicly disclose training data information. It applies retroactively to systems released or substantially modified since January 1, 2022.

Required Disclosures

Developers must publicly disclose:

  • Description of datasets: How they further the AI system’s purpose
  • Number of data points: Scale of training data
  • IP content: Whether datasets include copyrighted, trademarked, or patented data
  • Data acquisition: Whether datasets were purchased or licensed
  • Personal information: Whether datasets contain personal or aggregate consumer information
  • Processing: Any cleaning, processing, or modification to datasets
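Teams preparing these disclosures may find it useful to model them as a structured record. Below is a hypothetical Python sketch (field names are illustrative, not statutory language) that flags text fields a disclosure draft has left empty:

```python
from dataclasses import dataclass, fields

# Hypothetical AB 2013 disclosure record; the six fields mirror the
# statutory categories listed above, but the names are illustrative.
@dataclass
class TrainingDataDisclosure:
    dataset_description: str      # how datasets further the system's purpose
    data_point_count: int         # scale of training data
    includes_ip_content: bool     # copyrighted, trademarked, or patented data
    acquisition: str              # e.g. "purchased", "licensed", "collected"
    contains_personal_info: bool  # personal or aggregate consumer information
    processing_notes: str         # cleaning, processing, or modification

def missing_fields(d: TrainingDataDisclosure) -> list[str]:
    """Return the names of string fields left empty in a disclosure draft."""
    return [f.name for f in fields(d)
            if isinstance(getattr(d, f.name), str) and not getattr(d, f.name)]

draft = TrainingDataDisclosure(
    "Licensed web text corpus", 1_000_000, True, "licensed", False, "")
missing_fields(draft)  # -> ["processing_notes"]
```

This is a checklist aid, not legal advice; the statute requires the disclosures to be posted publicly, which a record like this would merely help draft.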

Who’s Covered

  • Developers of generative AI systems
  • Systems available to California residents
  • Retroactive to January 1, 2022

Trade Secret Challenge

Companies must balance transparency with proprietary information protection. While trade secrets are not explicitly exempted, the disclosure requirements focus on categories and characteristics rather than specific dataset contents.

Employment AI Discrimination Rules

The California Civil Rights Council approved AI employment regulations on June 27, 2025, effective October 1, 2025. These rules apply the Fair Employment and Housing Act (FEHA) to Automated-Decision Systems (ADS) used in employment decisions.

Key Requirements

Liability Standard

  • Unlawful to use ADS resulting in discrimination
  • Liability even without discriminatory intent
  • Disparate impact creates liability

Defense & Evidence

  • Anti-bias testing can serve as a defense
  • Absence of testing can be used as evidence against an employer
  • Retain ADS records for 4 years

Record Retention (4 Years)

  • Selection criteria: how the ADS evaluates candidates
  • Outputs: decisions and recommendations
  • Audit findings: bias testing results
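The four-year clock can be sketched as a simple date calculation. This is an illustrative simplification, assuming the clock runs from record creation (the rules also key retention to personnel actions, so real schedules need counsel's input):

```python
from datetime import date

RETENTION_YEARS = 4  # ADS record-retention period under the 2025 FEHA rules

def earliest_deletion_date(record_created: date) -> date:
    """First date an ADS record could be deleted, assuming the 4-year
    clock starts at record creation (an illustrative simplification)."""
    try:
        return record_created.replace(year=record_created.year + RETENTION_YEARS)
    except ValueError:
        # Record created Feb 29 and the target year is not a leap year.
        return record_created.replace(
            year=record_created.year + RETENTION_YEARS, day=28)

earliest_deletion_date(date(2025, 10, 1))  # date(2029, 10, 1)
```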

Political Ads & Bot Disclosure

SB 1001

Bot Disclosure • Effective July 2019

  • Platforms with 10M+ monthly US users
  • Unlawful to deceive about artificial identity
  • Covers commercial transactions & elections
  • "Clear, conspicuous" disclosure required

AB 2355

Political AI Ads • Effective Jan 2025

  • Committees with $2,000+ contributions
  • Required disclaimer on AI-generated ads
  • "Generated or substantially altered using AI"
  • FPPC enforcement

Deepfake Laws: Legal Challenges

AB 2655 (Blocked)

Required platforms to remove deceptive election content. Blocked by a federal court on Section 230 preemption grounds from January 3, 2025 through June 28, 2025.

AB 2839 (Struck Down)

Prohibited distribution of deceptive AI content near elections. Struck down in October 2024 as violating the First Amendment; the judge ruled it "hinders humorous expression."

SB 1047: The Vetoed Frontier AI Bill

Vetoed September 29, 2024

Governor Newsom vetoed SB 1047, stating it "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data." Despite the veto, the bill's concepts continue to influence AI policy discussions nationally.

What SB 1047 Would Have Required

Coverage

  • Models costing $100M+ to train
  • Models with 10²⁶+ FLOPs
  • "Frontier" AI models only

Requirements

  • Safety and security protocols
  • Shutdown capabilities
  • Third-party annual audits (from 2026)
  • 72-hour incident reporting
  • Whistleblower protections

Critical harms defined: WMD creation, cyberattacks on critical infrastructure ($500M+ damage), autonomous crimes causing mass casualties. Penalties would have been up to 10% of training computing costs.
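As a back-of-the-envelope illustration of the coverage thresholds and penalty cap above (treating the cost and compute thresholds as jointly required, and with function names that are ours, not the bill's):

```python
# Illustrative check of SB 1047's (vetoed) coverage thresholds and
# penalty cap; figures come from the bill summary above.
COST_THRESHOLD_USD = 100_000_000  # $100M+ training cost
FLOP_THRESHOLD = 10**26           # 10^26+ training FLOPs
PENALTY_CAP_RATE = 0.10           # up to 10% of training compute cost

def would_have_been_covered(training_cost_usd: float,
                            training_flops: float) -> bool:
    """Assumes both thresholds must be met, per the bullets above."""
    return (training_cost_usd >= COST_THRESHOLD_USD
            and training_flops >= FLOP_THRESHOLD)

def max_penalty_usd(training_cost_usd: float) -> float:
    """Penalty ceiling: 10% of training compute cost."""
    return PENALTY_CAP_RATE * training_cost_usd
```

For example, a $200M model trained with 2×10²⁶ FLOPs would have been covered, with a penalty ceiling of $20M.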

Notable Industry Support

Despite industry opposition, SB 1047 had surprising support from within AI companies:

  • xAI CEO Elon Musk publicly supported the bill
  • 113+ employees of OpenAI, DeepMind, Anthropic, Meta, and xAI signed letters of support

Operating AI in California?

GLACIS helps organizations build auditable evidence of responsible AI deployment. Our continuous attestation platform creates verifiable records to support California AI compliance programs.