The Challenge
AI in Financial Services: Transformative Potential, Unprecedented Risk
Financial services firms are rapidly deploying AI across credit decisioning, fraud detection, trading, risk assessment, and customer service. These applications directly affect consumers, markets, and financial stability. A credit model that drifts, a fraud system that misclassifies, or a trading algorithm that behaves unexpectedly can generate regulatory action, financial losses, and reputational damage at scale.
Regulators recognize this. The Federal Reserve, OCC, and FDIC apply SR 11-7 principles to AI, machine learning, and generative AI, raising expectations around explainability, bias mitigation, and transparency. State regulators and the CFPB actively enforce fair lending laws against algorithmic discrimination.
Yet traditional model risk management practices—designed for static, explainable models—struggle with the continuous learning, complexity, and velocity of modern AI systems.
Why Traditional Model Risk Management Falls Short
Three Critical Gaps:
1. Point-in-time validation. Traditional validation occurs before deployment and periodically thereafter. But AI models make millions of decisions between validations, potentially drifting from validated behavior or encountering edge cases that were never tested. (A minimal drift check is sketched after this list.)
2. Explainability at decision time. Complex AI models are difficult to explain. When a regulator or consumer asks why a credit decision was made, point-in-time documentation may not capture the actual model state at decision time.
3. Third-party opacity. Banks increasingly use third-party AI services. The interagency third-party risk management (TPRM) guidance requires oversight of external AI, but proving vendor model compliance requires evidence beyond vendor attestations.
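To make the first gap concrete, here is a minimal sketch of an ongoing drift check using the population stability index (PSI), a metric widely used in credit model monitoring. The function name, bucket count, and 0.25 threshold are illustrative assumptions, not a GLACIS API:

```python
import numpy as np

def population_stability_index(expected_scores, actual_scores, n_buckets=10):
    """PSI between validation-time (expected) and production (actual) score
    distributions. Rule of thumb: PSI > 0.25 suggests significant drift."""
    # Bucket edges come from quantiles of the validation distribution
    edges = np.quantile(expected_scores, np.linspace(0, 1, n_buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production scores

    expected_pct = np.histogram(expected_scores, bins=edges)[0] / len(expected_scores)
    actual_pct = np.histogram(actual_scores, bins=edges)[0] / len(actual_scores)

    # Clip empty buckets so the log term stays finite
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Illustrative use: compare recent production scores to the validation set
# if population_stability_index(validation_scores, production_scores) > 0.25:
#     trigger_revalidation()  # hypothetical hook
```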
"Model risk occurs primarily for two reasons: (1) a model may have fundamental errors and produce inaccurate outputs; (2) a model may be used incorrectly or inappropriately."
— SR 11-7 Model Risk Management Guidance
Regulatory Landscape
SR 11-7: Model Risk Management Guidance
What it is: Federal Reserve and OCC supervisory guidance establishing the framework for model risk management at banking organizations.
AI-Specific Provisions:
- Applies to all models including AI and machine learning
- Requires validation of model accuracy and reliability
- Mandates ongoing monitoring and documentation
- Covers third-party and vendor models
ECOA and Fair Lending
What it is: The Equal Credit Opportunity Act and related fair lending laws prohibit discrimination in credit decisions.
AI-Specific Provisions:
- AI credit decisions must not discriminate on prohibited bases
- Adverse action notices must provide specific, accurate reasons
- Disparate impact liability applies to algorithmic decisions (a common screening metric is worked through after this list)
- Sample checklists are insufficient; adverse action reasons must reflect the factors the model actually weighed
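A common first screen for the disparate impact point above is the "four-fifths rule": a group whose approval rate falls below 80% of the most favorably treated group's rate warrants closer statistical and legal analysis. A minimal sketch with illustrative names and numbers (this is a screening heuristic, not a legal test):

```python
def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's approval rate to the highest group approval rate.
    Ratios below 0.8 (the "four-fifths rule") commonly flag potential
    disparate impact for further analysis."""
    rates = {group: approved / total for group, (approved, total) in outcomes.items()}
    reference = max(rates.values())
    return {group: rate / reference for group, rate in rates.items()}

# Illustrative counts: (approved, total applications) per group
print(adverse_impact_ratios({"group_a": (480, 800), "group_b": (300, 700)}))
# group_a rate 0.60 -> ratio 1.00; group_b rate ~0.43 -> ratio ~0.71 (below 0.8)
```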
State-Level AI Regulations
| State | Law | Key Requirement | Effective Date |
|---|---|---|---|
| Illinois | UDAP Amendment | Bars predictive analytics using race/ZIP code in credit scoring | In effect |
| Colorado | AI Act | Developer documentation on bias evaluation | 2026 |
| California | AI Transparency Act | Training data disclosure for generative AI | January 2026 |
| New York | NYC Local Law 144 | Bias audits for automated employment decision tools | In effect |
Use Cases
Credit Decisioning AI
AI systems that evaluate creditworthiness, approve/decline applications, set credit limits, and determine pricing. Includes traditional credit scoring augmentation, alternative data models, and real-time underwriting.
Compliance Requirements:
- SR 11-7 model validation and monitoring
- ECOA fair lending compliance and bias testing
- Adverse action explanation requirements
- State-specific restrictions (Illinois, Colorado)
"The Bureau noted that courts have already held that an institution's decision to use algorithmic or machine-learning tools can itself be a policy that produces bias under the disparate impact theory of liability."
— CFPB Guidance on Adverse Action Notices (2024)
How GLACIS Addresses This:
- Per-decision attestation: Cryptographic proof each credit decision followed approved policies (a minimal sketch follows this list)
- Bias monitoring: Continuous tracking of decision patterns across protected classes
- Model state verification: Confirm model version and parameters at each decision
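The attestation and model-state items above can be pictured as a signed record binding each decision to the exact model state and policy in force when it was made. GLACIS's actual record format is not shown here; the following is a minimal HMAC-based sketch in which every field name is an assumption:

```python
import hashlib, hmac, json, time

def attest_decision(decision: dict, model_version: str,
                    params_digest: str, policy_id: str,
                    signing_key: bytes) -> dict:
    """Build a tamper-evident record tying one credit decision to the model
    version, parameter digest, and approved policy in effect when it ran."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,        # e.g. registry tag of deployed model
        "model_params_sha256": params_digest,  # digest of serialized weights/config
        "policy_id": policy_id,                # identifier of the approved policy
        "decision": decision,                  # JSON-serializable: input digest,
    }                                          # outcome, reason codes
    canonical = json.dumps(record, sort_keys=True).encode()
    record["attestation"] = hmac.new(signing_key, canonical, hashlib.sha256).hexdigest()
    return record
```

An auditor holding the key can recompute the HMAC over the canonical record to confirm it was not altered after the fact and that the logged model state matches the decision.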
Fraud Detection AI
AI systems that identify fraudulent transactions, account takeover attempts, and suspicious activity. These systems operate in real time on high transaction volumes.
Compliance Requirements:
- SR 11-7 accuracy and performance requirements
- False positive/negative rate monitoring
- Customer impact assessment
- Model update governance
How GLACIS Addresses This:
- Real-time monitoring: Track false positive and false negative rates continuously (a minimal baseline check is sketched after this list)
- Performance baselines: Alert on deviation from validated accuracy
- Sub-50ms latency: Attestation overhead stays below 50 ms, with no impact on transaction processing speed
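The baseline-deviation idea reduces to a simple check over live confusion-matrix counts. Everything below (names, tolerance semantics) is an illustrative sketch rather than the GLACIS monitoring API; note that fraud labels often arrive late (chargebacks, investigations), so false negative counts lag:

```python
from dataclasses import dataclass

@dataclass
class ValidatedBaseline:
    fpr: float        # false positive rate accepted at validation
    fnr: float        # false negative rate accepted at validation
    tolerance: float  # allowed relative deviation before alerting, e.g. 0.20

def baseline_alerts(tp: int, fp: int, tn: int, fn: int,
                    baseline: ValidatedBaseline) -> list[str]:
    """Compare live false positive/negative rates to the validated baseline
    and return alert messages for any rate drifting past tolerance."""
    alerts = []
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0  # label-delayed counts
    if abs(fpr - baseline.fpr) > baseline.tolerance * baseline.fpr:
        alerts.append(f"FPR {fpr:.4f} vs validated {baseline.fpr:.4f}")
    if abs(fnr - baseline.fnr) > baseline.tolerance * baseline.fnr:
        alerts.append(f"FNR {fnr:.4f} vs validated {baseline.fnr:.4f}")
    return alerts
```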
Trading and Investment AI
AI systems for algorithmic trading, portfolio optimization, robo-advisory, and investment recommendation.
Compliance Requirements:
- SEC/FINRA suitability and best interest requirements
- Model risk management for trading algorithms
- Disclosure of AI use in advisory services
- Performance attribution and reporting
How GLACIS Addresses This:
- Parameter enforcement: Ensure trading AI operates within approved risk limits (a minimal pre-trade gate is sketched after this list)
- Recommendation logging: Document each recommendation with rationale
- Compliance evidence: Generate evidence for SEC/FINRA requirements
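Parameter enforcement amounts to a hard pre-trade gate: any AI-generated order outside approved limits is rejected before reaching the market, and the rejection itself becomes compliance evidence. A minimal sketch, with every limit and field name assumed for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskLimits:
    max_order_notional: float   # per-order cap in account currency
    max_gross_exposure: float   # portfolio-wide cap
    allowed_symbols: frozenset  # approved trading universe

def enforce_limits(order: dict, current_gross: float, limits: RiskLimits) -> None:
    """Reject any AI-generated order that breaches approved risk limits.
    Raising here blocks the order; the exception should also be logged
    as evidence that the control fired."""
    notional = order["qty"] * order["price"]
    if order["symbol"] not in limits.allowed_symbols:
        raise PermissionError(f"{order['symbol']} outside approved universe")
    if notional > limits.max_order_notional:
        raise PermissionError(f"notional {notional:,.2f} exceeds per-order limit")
    if current_gross + notional > limits.max_gross_exposure:
        raise PermissionError("order would breach gross exposure limit")
```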
Evidence & Attestation
What Financial Services Buyers Require
- SOC 2 Type 2: Security controls attestation
- Model documentation: Model cards, validation reports, performance metrics
- Regulatory alignment: Evidence of SR 11-7, fair lending compliance
- Audit trails: Complete logs of model decisions and parameters
- Third-party oversight: TPRM-aligned vendor governance evidence
GLACIS Evidence Types
| Evidence Type | Description | Regulatory Mapping |
|---|---|---|
| Per-decision attestation | Cryptographic proof each decision followed policy | SR 11-7 model monitoring |
| Model state logging | Record of model version/parameters at each decision | SR 11-7 documentation |
| Performance dashboards | Continuous accuracy and fairness metrics | SR 11-7 ongoing monitoring |
| Bias monitoring | Decision patterns across protected classes | ECOA fair lending |