A federal court is allowing a class action lawsuit against Workday, Inc. to proceed on claims that the company’s AI-powered hiring tools systematically screened out job applicants age 40 and older, violating the Age Discrimination in Employment Act (ADEA) and running counter to Equal Employment Opportunity Commission (EEOC) guidance on algorithmic fairness. In May 2025, U.S. District Judge Rita F. Lin granted preliminary certification of the collective action, finding that the plaintiffs had made a sufficient showing to move the case forward as a group. In January 2026, when Workday moved to dismiss the disparate impact claims, Judge Lin denied that motion, keeping the age discrimination allegations in play.

The lawsuit centers on Workday’s AI screening tools, which the company’s own filings show rejected 1.1 billion applications during the relevant period. The plaintiffs allege that these tools, whether or not they explicitly reference age, produce outcomes that disproportionately harm older workers. The lead plaintiff is Derek Mobley, a Black applicant over 40 who applied for more than 100 positions through the Workday platform beginning in 2017 and was rejected every time. He is joined by four other plaintiffs, all age 40 or older, who collectively submitted hundreds of applications and experienced consistent rejections. The case has drawn attention from the EEOC itself, which filed an amicus brief supporting the plaintiffs’ argument that Workday’s hiring algorithms engage in unlawful disparate impact discrimination, a legal theory that turns on the actual effects of a system rather than the intent behind it.
Table of Contents
- How Does Workday’s AI Hiring Tool Allegedly Discriminate Against Older Workers?
- What Is Disparate Impact, and Why Does It Matter in Algorithmic Hiring?
- What Does the Court’s Decision to Allow the Case to Proceed Mean?
- Who Can Participate in This Class Action, and What Are the Practical Steps?
- What Is Workday’s Defense, and Why Might It Succeed Despite the Court’s Rulings So Far?
- What Does the EEOC’s Support Mean for This Case?
- What Happens Next, and What Could the Outcome Be?
How Does Workday’s AI Hiring Tool Allegedly Discriminate Against Older Workers?
Workday’s AI-powered recruiting platform uses machine learning algorithms to screen job applications and rank candidates. According to the lawsuit, these tools function as an automated gatekeeper that disproportionately filters out applicants age 40 and older, even though Workday maintains that its AI system is not programmed to see, use, or consider age in its decision-making. The disparate impact framework in employment law does not require proof of intentional discrimination; it asks whether a hiring practice produces a significantly higher rejection rate for a protected group. Here, the plaintiffs argue that Workday’s algorithms, whether through proxy variables (such as length of work history, graduation dates, or familiarity with newer technology platforms) or other opaque feature interactions, end up systematically disadvantaging older applicants. Derek Mobley’s experience illustrates the alleged pattern: despite applying to more than 100 positions through Workday’s platform since 2017, positions that reasonably matched his qualifications based on the job descriptions, he never advanced past the algorithm’s initial screen.
This is not a case of rejection after an interview or after human review; it is blanket rejection at the automated stage. The EEOC’s decision to file an amicus brief in support of the plaintiffs underscores the federal government’s view that this type of algorithmic screening can violate civil rights law if its outcomes show a pattern of exclusion based on protected characteristics. Establishing disparate impact in the context of AI, however, is legally and factually complex. Workday’s defense rests on the argument that its AI tools are genuinely blind to age and other protected characteristics: that they are trained on job performance data, not demographic data, and therefore should not produce age-related bias. The company maintains that if its algorithms reject older applicants at different rates, it may be because older applicants, as a group, differ in ways the algorithm legitimately considers (such as technical skill sets, career trajectory patterns, or educational background). This distinction between unlawful disparate impact and legitimate, job-related differentiation will be central as the case moves toward trial or settlement.
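To make the proxy-variable theory concrete, here is a minimal sketch in Python, using invented applicant records rather than data from the case, of how a feature set that never includes age can still encode it:

```python
# A minimal sketch with invented applicant records (not data from the case)
# showing how a feature set that omits age can still encode it.
# statistics.correlation requires Python 3.10+.
from statistics import correlation

# Hypothetical applicants: (graduation year, years of work history, actual age).
applicants = [
    (2022, 2, 26), (2018, 7, 31), (2012, 13, 35),
    (2004, 21, 44), (1998, 27, 51), (1990, 35, 58),
]

grad_years = [grad for grad, _, _ in applicants]
ages       = [age for _, _, age in applicants]

# Most people finish school at roughly the same age, so graduation year is
# a near-deterministic proxy: age is approximately (current year - grad_year) + 22.
print(f"corr(grad_year, age) = {correlation(grad_years, ages):.3f}")
# Prints a value near -1.0: later graduation years map to younger applicants.
# A model that weights grad_year (or years of history) can therefore rank
# applicants by age without ever receiving an "age" field.
```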

What Is Disparate Impact, and Why Does It Matter in Algorithmic Hiring?
Disparate impact is a legal doctrine that emerged from civil rights law and holds that even facially neutral policies can violate the law if they produce systematically different outcomes for protected groups. In traditional hiring, a classic example might be a height requirement that is not explicitly based on sex but disproportionately screens out women. In the context of AI, disparate impact means that an algorithm designed without explicit references to a protected characteristic (like age) can still violate the law if its outputs show a statistically significant pattern of exclusion. The Age Discrimination in Employment Act protects employees and job applicants age 40 and older from discrimination in hiring, firing, pay, job assignments, and other terms and conditions of employment. The EEOC, which enforces the ADEA, has issued guidance stating that employers are responsible for the discriminatory effects of their hiring tools, even if those tools are powered by AI and operated by third-party vendors like Workday.
The EEOC’s position is that algorithmic neutrality in code does not automatically equal legal compliance; what matters is the real-world impact on applicants in protected classes. This approach is grounded in the idea that civil rights protections should not be circumvented through technological opacity or claims that “the algorithm is objective.” There is an important limitation, however: not every difference in rejection rates constitutes unlawful disparate impact. The law generally requires a showing of a significant disparity, often screened with the “four-fifths” (80 percent) rule of thumb: if the selection rate for a protected group is less than 80 percent of the rate for the group with the highest rate, the disparity may signal unlawful disparate impact. The plaintiffs’ burden is to establish that the difference is statistically significant and not explained by legitimate factors. Workday will likely argue that any differences in rejection rates are justified by the legitimate hiring criteria its algorithm uses and the actual distribution of qualifications among applicants. The court’s willingness to let the case proceed to discovery suggests that Judge Lin found the plaintiffs’ statistical allegations of disparate impact plausible enough to move forward, but the ultimate question of liability will depend on evidence that has not yet been fully presented.
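As a rough illustration of that screen, here is a minimal sketch in Python; the applicant counts are hypothetical and invented for illustration, not figures from the Mobley record:

```python
# A minimal sketch of the EEOC's "four-fifths" (80 percent) screen.
# The applicant counts below are hypothetical, invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who passed the screen."""
    return selected / applicants

# Hypothetical screening outcomes by age group.
under_40 = selection_rate(selected=300, applicants=1000)   # 0.30
over_40  = selection_rate(selected=180, applicants=1000)   # 0.18

# Compare the protected group's rate to the highest group's rate.
impact_ratio = over_40 / max(under_40, over_40)
print(f"Selection rate under 40: {under_40:.0%}")
print(f"Selection rate 40+:      {over_40:.0%}")
print(f"Impact ratio:            {impact_ratio:.2f}")

# Under the four-fifths guideline, a ratio below 0.80 is treated as
# preliminary evidence of adverse impact (here 0.18 / 0.30 = 0.60).
# Courts also look for statistical significance, not just this ratio.
if impact_ratio < 0.80:
    print("Ratio below 0.80: potential adverse impact warranting scrutiny.")
```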
What Does the Court’s Decision to Allow the Case to Proceed Mean?
When Judge Rita F. Lin granted preliminary certification of the collective action in May 2025, she determined that the plaintiffs had satisfied the initial legal threshold to proceed as a group rather than as individuals. This is a significant milestone because it allows affected applicants age 40 and older (anyone who applied for a job through Workday’s platform since September 24, 2020) to potentially join the lawsuit or be notified of its outcome. Certification does not mean the plaintiffs have won; it means the case has enough legal and factual coherence to move forward on a collective basis. In January 2026, when Workday filed a motion to dismiss the disparate impact claims entirely, it was asking the court to throw out the age discrimination theory before any evidence was presented at trial. Judge Lin’s denial of that motion is particularly important because it means the court found the disparate impact claim legally viable and the plaintiffs’ factual allegations sufficient to survive dismissal.
In other words, the court has preliminarily accepted the possibility that Workday’s AI tools could have produced the kind of systematic age-based exclusion the plaintiffs allege. This is a win for the plaintiffs at an early stage, but it is not a finding of liability. The March 7, 2026 opt-in deadline established by the court creates a time-sensitive window for eligible applicants to join the lawsuit. Opting in means becoming part of the collective action and potentially receiving any monetary recovery if the case succeeds, but it also means being bound by the outcome. Applicants who do not opt in by the deadline will not be part of this lawsuit, though they may have independent claims. The fact that the court has authorized formal notice to the class indicates that the litigation has reached a stage where the full scope of affected parties needs to be identified.

Who Can Participate in This Class Action, and What Are the Practical Steps?
To participate in the Mobley v. Workday collective action, an applicant must have been age 40 or older at the time they applied for a job through Workday’s platform, and that application must have been submitted on or after September 24, 2020. The class definition is straightforward in theory but may be harder to verify in practice, since it relies on individuals’ own records of applications and their ability to prove their age at the time of application. Workday’s platform has data on applications, rejection patterns, and demographic information (including age if it was voluntarily provided in applications or profiles), so much of the factual verification will likely depend on Workday’s own records. Applicants who believe they meet the class definition and wish to participate have two main options: they can opt in to the collective action by the March 7, 2026 deadline, or they can wait and see the outcome if they prefer not to take active steps.
Opting in is the more involved path and typically requires submitting a form or declaration confirming that you applied for jobs through Workday’s platform and that you were 40 or older when you did so. One practical consideration is whether applicants have documentation of their applications, such as email confirmations, screenshots, or records from the companies that posted the jobs through Workday. Workday’s own records should be discoverable in the lawsuit, however, so a lack of personal documentation may not prevent participation. The trade-off between opting in and waiting matters: opting in gives the individual a stake in the outcome and an opportunity to share in any settlement or judgment, but it also binds the individual to the lawsuit’s timeline and result. Waiting means the individual is not part of this case but may preserve the right to bring an independent claim later (though statutes of limitations may apply). The EEOC’s amicus brief lends the plaintiffs governmental support, which may strengthen their position, though it is no guarantee of a favorable outcome.
What Is Workday’s Defense, and Why Might It Succeed Despite the Court’s Rulings So Far?
Workday’s primary defense is that its AI hiring tools are not designed to identify or use age, race, disability, or other protected characteristics. The company argues that its algorithms are trained on job performance and hiring success metrics, not on demographic data. From Workday’s perspective, if the algorithms reject older applicants at higher rates, it is because older applicants, on average, differ in ways that the algorithm legitimately considers—such as technical skills, familiarity with modern software platforms, or career history patterns that the algorithm has learned are correlated with job performance. This defense reflects a broader debate in AI and employment law: Can an algorithm discriminate if it is trained on “neutral” data and never explicitly sees protected characteristics? The answer from the EEOC’s perspective is yes, if the proxies for age (or other protected traits) are embedded in the training data. For example, if the algorithm is trained on performance data from the company’s existing employees, and if those employees are younger on average, then the algorithm may learn to favor characteristics associated with younger workers.
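The sketch below, built on invented data, illustrates this mechanism: a screen that never receives age, fit to records from a younger-skewed workforce, still passes older applicants at a markedly lower rate. It reflects the plaintiffs’ proxy-learning theory in miniature, not Workday’s actual pipeline:

```python
# A minimal sketch, on invented data, of proxy learning: a screen that never
# sees age, trained on a younger-skewed workforce, rejects older applicants
# at a higher rate. Not a representation of Workday's actual system.
import random
random.seed(42)

def person(age: int) -> dict:
    # Hypothetical assumption: familiarity with a newer tool stack tracks age,
    # while actual job performance is independent of age.
    return {"age": age,
            "new_stack": random.random() < (0.8 if age < 40 else 0.3),
            "high_performer": random.random() < 0.5}

# The historical workforce used for "training" skews young: 80% under 40.
workforce = ([person(random.randint(25, 39)) for _ in range(800)]
             + [person(random.randint(40, 60)) for _ in range(200)])

# "Training": learn which feature value is associated with high performance
# in that workforce. Age is never an input to the model.
strong = [p for p in workforce if p["high_performer"]]
share_new_stack = sum(p["new_stack"] for p in strong) / len(strong)

def screen(applicant: dict) -> bool:
    # The learned rule favors whichever feature value dominated among
    # (mostly young) past high performers.
    return applicant["new_stack"] == (share_new_stack > 0.5)

# Apply the screen to an age-balanced applicant pool.
pool  = [person(random.randint(25, 60)) for _ in range(2000)]
young = [a for a in pool if a["age"] < 40]
older = [a for a in pool if a["age"] >= 40]

rate_young = sum(map(screen, young)) / len(young)
rate_older = sum(map(screen, older)) / len(older)
print(f"pass rate under 40: {rate_young:.0%}, 40 and over: {rate_older:.0%}, "
      f"ratio: {rate_older / rate_young:.2f}")   # ratio falls well below 0.80
```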
Workday will likely argue that its training methodology is sound and that any disparate impact is unintentional and rooted in genuine differences in applicant qualifications, not in discrimination. Workday’s defense has a significant vulnerability, however, if discovery reveals that the company knew (or should have known) about the disparate impact and did nothing to adjust the algorithm. EEOC guidance and recent legal trends suggest that once an employer becomes aware of discriminatory outcomes in an AI hiring tool, it has an obligation to investigate and remediate. If the lawsuit uncovers evidence that Workday detected the age-based rejection pattern and chose not to address it, or that the company conducted internal testing, found disparate impact, and did not disclose it, that knowledge and inaction could weigh heavily against the company. Workday’s current public defense, that the algorithm is not trained to identify age, may become harder to maintain if internal emails, testing reports, or expert analyses show otherwise.

What Does the EEOC’s Support Mean for This Case?
The EEOC’s decision to file an amicus curiae (friend of the court) brief in support of the plaintiffs is a significant indicator of governmental concern. The EEOC is the federal agency responsible for enforcing civil rights laws in employment, and it does not file amicus briefs lightly. By taking this step, the EEOC is signaling to the court that it views Workday’s AI hiring tools as a potential violation of the Age Discrimination in Employment Act and considers the disparate impact claims legally sound.
This governmental support can influence judicial decision-making, particularly at the preliminary stages where courts are assessing whether claims should proceed. The fact that Judge Lin denied Workday’s motion to dismiss after the EEOC had filed its brief suggests that the court took the agency’s position seriously. However, it is important to note that even strong EEOC support does not guarantee a favorable outcome at trial; the ultimate question of whether Workday’s algorithm actually engaged in unlawful disparate impact will depend on the evidence presented and expert testimony about how the AI system works and what its outcomes show.
What Happens Next, and What Could the Outcome Be?
With the motion to dismiss denied, the case will now proceed to the discovery phase, where both sides will exchange evidence, documents, and data. This is when the plaintiffs’ lawyers will have the opportunity to request detailed information about how Workday’s algorithm works, how it was trained, what data it was trained on, internal testing or auditing reports, and performance metrics comparing rejection rates across age groups. Workday will likewise request evidence from the plaintiffs and may hire its own experts to analyze the algorithm’s behavior. Depending on the strength of evidence that emerges during discovery, the case could be resolved through a settlement, a court judgment, or even an appeal.
If the plaintiffs succeed, damages could include back pay, liquidated damages (an additional amount equal to back pay, available for willful ADEA violations, which effectively doubles the recovery), and injunctive relief requiring Workday to modify or discontinue the discriminatory screening practices. If Workday prevails, the case would be dismissed and the company cleared of liability. Alternatively, a settlement could provide a monetary recovery for class members, a commitment from Workday to audit and improve its hiring algorithms, or both. The March 7, 2026 opt-in deadline is the first major milestone; future deadlines will likely include discovery cut-offs, motion deadlines, and a trial date if the case does not settle.
