New York AI Laws:
Hiring Audits, Frontier Safety, and Financial Services
New York enacted one of the first AI hiring bias audit requirements with NYC Local Law 144, and the RAISE Act (effective 2027) creates some of the strictest frontier AI safety obligations in the US.
Executive Summary
New York regulates AI at city, state, and financial regulatory levels, creating overlapping compliance obligations. NYC Local Law 144 (enforced since July 2023) requires bias audits for AI hiring tools. The RAISE Act (signed December 2025, effective January 2027) establishes specific safety and reporting requirements for frontier AI developers with $500M+ revenue.
The LOADinG Act (2024) makes New York the first state to require oversight of government AI systems. The NYDFS issued comprehensive AI cybersecurity guidance in October 2024, requiring covered financial entities to address AI-related risks in their cybersecurity programs.
A December 2025 State Comptroller audit found significant enforcement gaps in Local Law 144: the city identified only 1 violation, while auditors found 17. Organizations should not assume this gap between regulation and enforcement protects them from liability.
NYC Local Law 144: Automated Employment Decision Tools
Enacted in 2021 and enforced since July 5, 2023, Local Law 144 was among the first US laws requiring bias audits for AI-driven hiring tools. It applies to NYC employers and employment agencies using Automated Employment Decision Tools (AEDTs) in hiring or promotion decisions.
Key Requirements
Bias Audit Requirements
- Independent third-party audit required
- Conducted within 12 months before the tool's use
- Results published annually on the employer's website
- Must report selection rates by gender and race/ethnicity
Candidate Notice Requirements
- At least 10 business days' notice before AEDT use
- Explanation of the qualifications and characteristics being assessed
- An alternative selection process offered on request
- Audit summary posted publicly
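The 10-business-day notice window can be checked with simple date arithmetic. The sketch below is a minimal illustration that skips weekends only; it assumes "business days" excludes weekends but does not account for public holidays, so check the rule text for the precise definition before relying on it.

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `days` business days from `start`, skipping weekends.
    Assumption: holidays are not excluded; verify against the rule text."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

# Notice sent Monday, March 3, 2025 -> earliest permissible AEDT use
print(add_business_days(date(2025, 3, 3), 10))  # -> 2025-03-17
```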
Impact Ratio Analysis
Audit metrics must include selection rates by gender and race/ethnicity, including intersections. Impact ratios below 80% (the "four-fifths rule") may signal discriminatory bias requiring investigation or remediation.
Example Calculation
If 60% of male applicants are selected versus 40% of female applicants, the impact ratio is 40% / 60% ≈ 67%, below the 80% threshold.
Intersectional Analysis
Must analyze combinations: e.g., Hispanic female selection rate vs. White male selection rate
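The impact-ratio calculation above can be sketched in a few lines. This is a minimal illustration, not audit-grade methodology: the group names and rates are hypothetical, and a real LL 144 bias audit must follow the DCWP rules for category definitions, intersections, and small-sample handling.

```python
def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Impact ratio = each group's selection rate divided by the
    highest group's rate (the four-fifths / 80% rule benchmark)."""
    best = max(selection_rates.values())
    return {group: rate / best for group, rate in selection_rates.items()}

# Illustrative rates only, including an intersectional category
rates = {
    "male": 0.60,
    "female": 0.40,
    "Hispanic female": 0.35,
}
for group, ratio in impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: {ratio:.0%} ({flag})")
```

Here both "female" (67%) and "Hispanic female" (58%) fall below the 80% benchmark relative to the highest-rate group, which would warrant investigation or remediation.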
December 2025 Comptroller Audit Findings
A State Comptroller audit revealed significant enforcement gaps:
- DCWP identified only 1 non-compliance issue; the Comptroller found 17 potential violations
- Only 25% of test calls to NYC's 311 line were correctly routed to DCWP
- Only 2 AEDT complaints were received during the audit period
Organizations should not assume weak enforcement protects them: private litigation and reputational risks remain significant.
Penalties
- $500 for the first violation and any same-day violations
- $500–$1,500 for each subsequent violation
- Each day of non-compliance counts as a separate violation
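Because each day counts as a separate violation, exposure compounds quickly. The sketch below is a rough illustration of that arithmetic under stated assumptions: $500 for the first day, then a per-day rate within the $500–$1,500 statutory range; actual penalties are set case by case by enforcers.

```python
def ll144_penalty_estimate(days_noncompliant: int, daily_rate: int = 1500) -> int:
    """Rough exposure estimate: $500 for the first violation, then up to
    `daily_rate` (within the $500-$1,500 range) per subsequent day.
    Illustrative only; actual penalties are determined by DCWP."""
    if days_noncompliant <= 0:
        return 0
    return 500 + (days_noncompliant - 1) * daily_rate

# 30 days of non-compliance at the statutory maximum
print(ll144_penalty_estimate(30))  # -> 44000
```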
Enforcement
- NYC Department of Consumer and Worker Protection (DCWP)
- Primarily complaint-based enforcement
- No private right of action under LL 144
RAISE Act: Frontier AI Safety
The Responsible AI Safety and Education Act (S6953B/A6453B) was signed by Governor Hochul in December 2025 and takes effect January 1, 2027. It establishes specific safety and reporting requirements for large AI developers and creates a dedicated oversight office within DFS.
Key Requirements
Coverage
- Large AI developers with $500M+ revenue
- Frontier AI models
- Effective January 1, 2027
Obligations
- Create and publish safety protocols
- 72-hour incident reporting
- Oversight office within DFS
Penalties
- Up to $1 million for first violation
- Up to $3 million for subsequent violations
Industry Support
Unlike California's vetoed SB 1047, the RAISE Act received support from major AI companies:
- OpenAI expressed support
- Anthropic expressed support
LOADinG Act: Government AI Oversight
The Legislative Oversight of Automated Decision-making in Government Act (2024) makes New York the first state to require oversight of government AI systems.
Current Coverage
- State agencies using automated decision systems
- Decisions affecting individuals
Pending Expansion
- Local governments
- Educational institutions
NYDFS AI Cybersecurity Guidance
In October 2024, the New York Department of Financial Services issued comprehensive AI guidance for entities covered by 23 NYCRR Part 500 (the Cybersecurity Regulation). While framed as guidance rather than new requirements, it clarifies expectations for AI risk management within existing cybersecurity obligations.
Key Guidance Areas
Risk Assessment
- Address AI-related risks from own AI use
- Address risks from vendor AI systems
- Maintain AI system inventories
Third-Party Management
- Due diligence on vendor AI protections
- Evaluate vendor AI governance
- Monitor data exposure to public AI
Authentication Requirements (By November 2025)
NYDFS specifically addresses AI-enabled authentication attacks:
- MFA required for all authorized users by November 2025
- Avoid SMS/voice authentication, which is vulnerable to AI voice cloning and deepfakes
- Use digital certificates or physical security keys instead
Training Requirements
- Annual cybersecurity training must cover deepfakes
- Include AI-enabled social engineering
- Train on AI phishing recognition
Data Management
- Minimize stored NPI (nonpublic personal info)
- Detect unusual queries to AI platforms
- Monitor data exposure to public AI
NYS Government AI Policy (NYS-P24-001)
Effective January 8, 2024, this policy governs AI use by state entities, local governments, and contractors managing AI for the state.
- Complete an AI inventory within 180 days and maintain it on an ongoing basis
- Conduct risk assessments using the NIST AI RMF for both new and existing systems
- Healthcare providers accessing AI on behalf of the state must also comply
Pending AI Legislation
NY AI Bill of Rights
A3265 • Introduced January 2025
Proposed consumer rights around AI systems:
- Right to safe and effective systems
- Protection against algorithmic discrimination
- Protection against abusive data practices
- Right to data agency
- Right to know when AI is used
- Right to opt out of AI systems
NY AI Act
S1169A • Pending
Comprehensive AI regulation proposal:
- Regulate AI to prevent discrimination
- Require independent audits of high-risk AI
- Attorney General enforcement
- Private right of action
Healthcare AI Bills
A3991 • AI Use in Healthcare
Defines appropriate AI use in healthcare and outlines provider responsibilities
A3993 • Clinical Algorithm Bias
Bans biased clinical algorithms; encourages health equity through AI review
AI Companion Safety (Effective November 2025)
New requirements for AI companion applications:
- Safety measures and disclosures required
- Mental health features must detect self-harm signals
- Appropriate escalation protocols required
Key Dates
- July 5, 2023: NYC Local Law 144 enforcement begins. AEDT bias audits and candidate notices required.
- October 2024: NYDFS AI cybersecurity guidance issued. AI risk assessment and deepfake training expectations.
- November 2025: MFA requirements deadline. All users require MFA; avoid SMS/voice authentication.
- December 2025: RAISE Act signed. Governor Hochul signs frontier AI safety legislation.
- January 1, 2027: RAISE Act takes effect. Safety protocols, incident reporting, and oversight office operational.
Operating AI in New York?
GLACIS helps organizations build auditable evidence of responsible AI deployment. Our continuous attestation platform creates verifiable records to support New York AI compliance—from LL 144 audit documentation to RAISE Act safety protocols.