Effective October 1, 2025, new California regulations make explicit what was already implicit: the Fair Employment and Housing Act (FEHA)’s anti-discrimination rules fully apply to “automated-decision systems” (ADS) used in employment. That includes not only sophisticated artificial intelligence tools, but any computational process that helps decide who gets hired, promoted, disciplined, or otherwise receives an “employment benefit.”
The regulations expressly call out testing, documentation, and vendor tools, giving plaintiffs’ lawyers and regulators a detailed roadmap for scrutiny. Employers who use AI, algorithms, or automated screening in their HR processes should assume these regulations will create a new litigation and enforcement playbook and should adjust accordingly.
What Counts as an “Automated-Decision System”?
The regulations define an “Automated-Decision System” broadly as a computational process that makes a decision or facilitates human decision-making regarding an employment benefit. It may be based on artificial intelligence, machine learning, algorithms, statistics, or other data processing techniques.
In practice, this includes:
- Online or app-based screening tools that score applicants or employees
- Résumé parsers and ranking tools
- Video or audio interview analytics and “personality” scoring
- Targeted job advertising that determines which users see which postings
- Third-party assessment platforms and background data analytics
- Other similar tools and programs
By contrast, ordinary IT tools that do not themselves make or guide employment decisions—such as word processors or basic email—are not treated as ADS. The focus is on systems that are used as “selection criteria” for employment decisions, or those which inform humans who make these decisions.
Anti-Discrimination Laws Explicitly Applied to ADS
The new regulations confirm that it is unlawful for an employer or other covered entity to use an ADS or other selection criteria (tests, qualification standards, proxies, etc.) that discriminate on any FEHA-protected basis. That includes traditional disparate treatment, but also disparate impact—where seemingly neutral criteria disproportionately screen out protected groups.
The rules also introduce a definition of “proxy”: a characteristic or category closely correlated with a basis protected by FEHA. This is aimed squarely at the concern that automated tools may use data points (for example, geography, certain educational histories, or other attributes) that function as stand-ins for race, disability, age, or other protected characteristics.
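The regulations do not spell out how to identify a proxy, but one rough first-pass screen is to check whether a feature’s categories differ sharply in protected-group composition. The Python sketch below is a simplified illustration only: the applicant records are hypothetical, and a real proxy analysis would require proper statistical testing and legal review.

```python
from collections import defaultdict

# Hypothetical applicant records pairing a candidate feature (ZIP code)
# with protected-group membership. Illustrative data only.
applicants = [
    {"zip": "90001", "protected_group": True},
    {"zip": "90001", "protected_group": True},
    {"zip": "90001", "protected_group": False},
    {"zip": "94105", "protected_group": False},
    {"zip": "94105", "protected_group": False},
    {"zip": "94105", "protected_group": True},
]

def group_share_by_category(records, feature, flag="protected_group"):
    """Share of protected-group members within each category of `feature`."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[feature]] += 1
        hits[r[feature]] += r[flag]  # True counts as 1, False as 0
    return {cat: hits[cat] / totals[cat] for cat in totals}

shares = group_share_by_category(applicants, "zip")
overall = sum(r["protected_group"] for r in applicants) / len(applicants)

# A category whose composition diverges sharply from the overall applicant
# pool is a candidate proxy and warrants closer statistical and legal review.
for cat, share in shares.items():
    print(f"ZIP {cat}: {share:.0%} protected-group (overall {overall:.0%})")
```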
Vendors, “Agents,” and Employment Agencies
The regulations expand how “agents” and “employment agencies” are defined. An agent now includes any person acting on behalf of an employer to perform functions traditionally exercised by the employer—such as recruitment, screening, hiring, promotion, or decisions on pay, benefits, or leave—including when those activities are conducted through use of an ADS.
Similarly, an employment agency includes entities that procure applicants, employees, or work opportunities using ADS. The practical takeaway is twofold:
- Third-party vendors who provide AI-driven HR tools may themselves be treated as covered entities.
- Employers are unlikely to avoid FEHA exposure by outsourcing decision-making to a vendor tool; if the tool is discriminatory, both the vendor and the employer can be implicated.
Testing, Documentation, and Record Retention
Two aspects of the new framework are especially important for litigation risk:
- Anti-bias testing as evidence. The regulations make “evidence—or the lack of evidence—of anti-bias testing or similar proactive efforts” relevant to any claim or defense involving an ADS. Factors such as the testing’s quality, scope, recency, results, and how the employer responded to those results can all be examined. In other words, not testing at all may itself become part of the plaintiff’s case. (A simple numerical illustration of one common testing approach appears after this section.)
- Four-year record retention. The new rules extend the period for preserving personnel and employment records from two to four years. Employers and other covered entities must preserve all applications, personnel, membership records, employment referral records, selection criteria, and automated-decision system data, plus complaint-related records (expressly including ADS data), for four years from the later of:
- the date the record was made, or
- the date of the personnel action
For employers using AI or algorithmic tools, this significantly raises the bar for data governance and documentation.
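The regulations do not prescribe a testing methodology. As a purely illustrative reference point, a common first-pass screen in U.S. employment practice is the EEOC’s “four-fifths” rule of thumb: a selection rate for one group below 80% of the highest group’s rate may indicate adverse impact. Below is a minimal Python sketch of that check; the applicant counts are hypothetical, and the 0.8 threshold is a screening heuristic, not a standard imposed by the new California rules.

```python
# Minimal disparate-impact screen using the EEOC "four-fifths" rule of thumb.
# Counts are hypothetical; the 0.8 cutoff is a heuristic, not a legal standard.
outcomes = {
    # group: (applicants screened in by the ADS, total applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {g: selected / total for g, (selected, total) in outcomes.items()}
benchmark = max(rates.values())  # highest group selection rate

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, "
          f"impact ratio {impact_ratio:.2f} [{flag}]")
```

A result flagged “REVIEW” does not itself establish discrimination; it signals that the tool’s outcomes should be examined more closely and that the analysis and any remediation should be documented.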
Practical Implications for Employers Using AI and Algorithms
For California employers, the new rules mean that:
- ADS use is no longer “experimental” from a legal standpoint. It is squarely within FEHA’s enforcement framework.
- Blind reliance on vendor assurances is risky. Employers should expect to show what they did to understand and mitigate potential bias, not just that the vendor said its tool was compliant.
- Data and design choices matter. Features that may be defensible in a purely technical sense (for example, learning from historical hiring data) may be problematic if they perpetuate existing disparities.
- Complaints about automated decisions may be treated like any other discrimination complaint. Handling them through existing EEO processes—now with specific attention to ADS—is essential.
What Employers Should Do Now
Given the scope of the regulations and the likely increase in AI-related employment claims, California employers should consider taking the following steps:
- Inventory ADS Use. Identify where any automated tools, algorithms, or AI systems are used in recruiting, screening, promotion, discipline, scheduling, compensation, or other employment decisions.
- Evaluate Vendors and Contracts. Review contracts with ATS providers, assessment vendors, background-screening platforms, and others. Clarify each party’s responsibilities for anti-bias testing, cooperation with investigations, and data retention.
- Implement Anti-Bias Testing and Review. Conduct and document reasonable testing for potential disparate impact on FEHA-protected groups, appropriate to the tool’s scale and risk. Where issues are identified, record how the tool was modified, replaced, or supplemented with human review.
- Update Policies and Record-Retention Practices. Align internal recordkeeping schedules with the four-year requirement and ensure ADS-related data (inputs, outputs, scoring criteria, and testing records) are preserved appropriately.
- Train HR and Management. Provide targeted training to HR, recruiters, and managers on when a tool may be considered an ADS, how to use it appropriately, and how to respond to concerns about algorithmic decisions.
- Establish an Escalation Path. Ensure there is a clear process for employees or applicants to raise concerns about automated decisions and for those concerns to be reviewed by a knowledgeable human decision-maker.
California is not banning AI in the workplace, but it is signaling that automated tools will be judged by the same anti-discrimination standards as any other selection criteria—and that employers will be expected to show thoughtful, documented steps to prevent bias.
JMBM’s Labor & Employment attorneys counsel businesses and management on workplace issues, helping to establish policies that address problems and reduce job-related lawsuits. We act quickly to resolve claims and aggressively defend our clients in all federal and state courts, before the Department of Labor, the NLRB, and other federal, state and local agencies, as well as in private arbitration forums. We represent employers in collective bargaining negotiations and arbitration. If you have questions or need guidance on how these changes may affect your business, please contact a JMBM attorney.