Automation is transforming the workplace, and it is now prompting significant regulatory changes. Starting October 1, 2025, California will enforce new regulations governing automated decision-making systems (ADS) used in employment settings. Employers whose algorithms, resume-screening software, or AI-based promotion tools lead to discriminatory outcomes, whether intentional or not, may be subject to fines, lawsuits, or other penalties. Proactive compliance is essential.
What counts as ADS?
Under California’s new rules (Cal. Code Regs. § 11008.1), an ADS is defined as “a computational process that makes a decision or facilitates human decision making regarding an employment benefit.” Examples include:
- Resume-screening algorithms
- Chatbots or AI that conduct initial interviews
- Tools ranking job applicants or scoring performance reviews
- Personality testing
- Promotion software that “predicts” leadership potential
Widely used platforms such as Workday, which leverages predictive analytics to inform decisions about promotions and internal mobility, and LinkedIn Recruiter, which relies on algorithmic ranking to identify and prioritize job candidates, are now subject to the same anti-discrimination standards under the Fair Employment and Housing Act (FEHA) that govern human decision-making. If an organization uses an AI tool that considers race, gender, age, disability, religion, or national origin, even indirectly, it could be held liable under the new regulations.
One major concern is ADS that screen out applicants based on availability, disproportionately excluding individuals who observe religious sabbaths or have disabilities, caregiving obligations, or medical restrictions. Similarly, video interview tools that evaluate reaction time, vocal tone, or facial expressions may disadvantage candidates with physical or neurological disabilities. Under the new rules, such practices are presumed unlawful unless they are job-related, consistent with business necessity, and employers offer reasonable accommodation or alternative assessments.
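To make the “indirect” risk concrete, consider a minimal, purely hypothetical sketch of an availability filter. The candidates, traits, and threshold below are invented for illustration and reflect no real vendor’s logic; the point is that a filter that never mentions religion can still screen out every Sabbath observer in a pool:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    available_days: set
    sabbath_observer: bool = False  # illustrative trait, invented data

candidates = [
    Candidate("A", {"Mon", "Tue", "Wed", "Thu", "Fri", "Sat"}),
    Candidate("B", {"Mon", "Tue", "Wed", "Thu", "Fri"}, True),
    Candidate("C", {"Mon", "Wed", "Fri", "Sat"}),
    Candidate("D", {"Mon", "Tue", "Thu", "Fri"}, True),
]

# A facially neutral rule: require Saturday availability
REQUIRED_DAYS = {"Sat"}
advanced = [c for c in candidates if REQUIRED_DAYS <= c.available_days]
screened_out = [c for c in candidates if c not in advanced]

print("Advanced:", [c.name for c in advanced])          # A, C
print("Screened out:", [c.name for c in screened_out])  # B, D -- both observers
```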
The real-world implications of algorithmic bias are increasingly evident in employment practices. Legal challenges and public concerns have already been raised over automated applicant-screening tools alleged to produce systemic discrimination against job seekers based on characteristics such as age, race, or disability. These cases underscore how the data used to build such systems can reinforce patterns of exclusion even absent deliberate human intent.
Who’s covered?
The regulations apply to:
- Any business or enterprise that regularly employs five or more individuals (including part-time and seasonal workers)
- State and local government entities, counties, cities, and districts
- Employment agencies, labor organizations, and apprenticeship programs
- Nonprofit corporations and associations (with limited exceptions for religious institutions)
- Agents acting on behalf of employers, such as third-party vendors or software providers who supply or operate automated-decision systems
Employers are also held responsible for the actions of their vendors.
What’s required of employers?
- Maintain detailed records for four years, including applications, personnel files, promotion records, selection criteria, and all ADS-related data.
- Retain third-party data. If a vendor screens resumes on an employer’s behalf, these records are the employer’s responsibility.
- Preserve records longer if a complaint is filed. All relevant records must be retained until the matter is fully resolved.
Similar ADS regulations
- New York City’s Local Law 144, already in effect, requires employers using ADS in employment decisions to conduct independent bias audits, publish the results, and provide candidates with advance notice and opt-out options. Enforcement is managed by the Corporation Counsel, with complaints directed to the New York City Commission on Human Rights. (A sketch of the impact-ratio calculation at the heart of these audits appears after this list.)
- Illinois has amended its Human Rights Act, effective January 1, 2026, to prohibit employers from using ADS in ways that discriminate in employment decisions based on protected characteristics.
- Colorado’s AI Act, effective February 1, 2026, is the most comprehensive ADS regulation to date, imposing a duty of reasonable care on developers and deployers of high-risk ADS. The act also establishes transparency requirements, annual impact assessments, and risk disclosures.
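For a sense of what these bias audits actually compute, the central metric under Local Law 144 is an impact ratio: each group’s selection rate divided by the most-selected group’s rate. Below is a minimal sketch with invented counts, using the EEOC’s four-fifths rule of thumb as a screening threshold (a heuristic from federal guidance, not a legal safe harbor):

```python
# Minimal impact-ratio calculation of the kind NYC Local Law 144
# bias audits require. All counts below are invented for illustration.
outcomes = {
    "group_a": {"applied": 400, "selected": 120},
    "group_b": {"applied": 300, "selected": 60},
    "group_c": {"applied": 150, "selected": 24},
}

# selection rate = selected / applied, computed per group
rates = {g: v["selected"] / v["applied"] for g, v in outcomes.items()}
best = max(rates.values())

# impact ratio = group's rate / highest group's rate; ratios below
# 0.8 (the "four-fifths rule") suggest potential adverse impact
for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2%} impact_ratio={ratio:.2f} [{flag}]")
```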
The clock is ticking
Employers should act promptly by thoroughly auditing all hiring, promotion, and evaluation tools that rely on AI or automation, engaging vendors to understand how their systems are built and tested for bias, and keeping records of all ADS-related activity. Even informal or partial reliance on automation can create liability.
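As one way to operationalize that last point, here is a minimal sketch of what an ADS activity-log entry supporting the four-year retention rule might look like. The field names and example values are assumptions for illustration, not regulatory text:

```python
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION = timedelta(days=4 * 365)  # four-year retention period

@dataclass
class ADSRecord:
    tool: str         # e.g., resume screener, interview scorer
    vendor: str       # vendor-held records remain the employer's responsibility
    decision: str     # hire, reject, promote, rank, etc.
    criteria: str     # selection criteria the tool applied
    decided_on: date

    def retain_until(self) -> date:
        # hold records longer if a complaint is filed (until resolution)
        return self.decided_on + RETENTION

rec = ADSRecord("resume_screener_v2", "ExampleVendor", "reject",
                "keyword match >= 0.7", date(2025, 10, 15))
print(rec.retain_until())  # roughly four years after the decision
```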
With the October 2025 enforcement date approaching, organizations must be prepared to demonstrate that their technology complies with the law. Employers who fail to comply with California’s new ADS regulations face substantial consequences, including fines, court orders, civil liability for damages, and significant reputational harm, especially if biased outcomes are exposed through litigation or public scrutiny. Importantly, intent is irrelevant; even unintentional discrimination can trigger enforcement.
Nixon Peabody has employment law attorneys throughout California who can advise employers on California labor law and on implementing policies tailored to your company’s specific needs.