Upstart Holdings, an AI-powered lending platform, faces multiple legal challenges related to alleged discrimination in its automated loan approval process. The company’s artificial intelligence models have been accused of creating approval disparities for Black applicants and misleading investors about the performance of its Model 22 AI system. A shareholder class action lawsuit filed in April 2026 alleges that Upstart misrepresented how its AI model performed, claiming the model “frequently overreacted to negative macroeconomic signals,” which negatively impacted the company’s revenue and investor returns.
The discrimination concerns around Upstart’s lending practices highlight a critical problem in fintech: AI systems can perpetuate discrimination even when they don’t explicitly include protected characteristics like race. By 2024, Upstart had facilitated over $40 billion in loans through its platform, affecting hundreds of thousands of borrowers across the United States. The issues uncovered raise important questions about AI accountability, fair lending practices, and investor protection in the rapidly growing field of algorithmic lending.
Table of Contents
- What Are The Discrimination Allegations Against Upstart’s AI Model?
- How Did The Fair Lending Monitorship Reveal Approval Disparities?
- What Is The 2026 Securities Lawsuit Alleging About Model 22?
- Who Is Affected by Upstart’s Lending Practices?
- What Are The Limitations In Current Legal Action Against AI Lending Discrimination?
- How Does This Compare to Other AI Lending Discrimination Cases?
- What Should Borrowers and Investors Know Going Forward?
- Conclusion
What Are The Discrimination Allegations Against Upstart’s AI Model?
Upstart’s lending algorithm has been at the center of fair lending concerns since at least 2020, when civil rights groups raised alarms about unintended racial disparities in loan approvals. The company’s AI model considers variables including education level, zip code, and employment history—factors that, while race-neutral on their surface, can serve as proxies for protected classes. For example, borrowers from predominantly minority neighborhoods might face higher rejection rates because their zip codes correlate with lower historical approval patterns, even though zip code itself is not a protected class under fair lending laws.
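To make the proxy dynamic concrete, here is a minimal, hypothetical sketch (not Upstart’s actual model or data): a scoring rule that never sees race, but blends an individual credit score with a zip-code-level historical approval rate, can still approve racial groups at very different rates when zip code correlates with race.

```python
# Hypothetical illustration only: a "race-blind" approval rule whose
# zip-code input acts as a proxy for race. All names and numbers are
# invented for this sketch.

# Toy population: (hidden_race, credit_score, zip_group).
# Zip group "A" is predominantly white and "B" predominantly Black here.
applicants = (
    [("white", 680, "A")] * 40 + [("white", 680, "B")] * 10 +
    [("black", 680, "A")] * 10 + [("black", 680, "B")] * 40
)

# Historical approval rates by zip group encode past lending patterns.
zip_history = {"A": 0.80, "B": 0.55}

def approve(credit_score, zip_group):
    """Race-blind rule: blend individual credit with zip-level history."""
    score = 0.5 * (credit_score / 850) + 0.5 * zip_history[zip_group]
    return score >= 0.68

def approval_rate(race):
    group = [a for a in applicants if a[0] == race]
    return sum(approve(s, z) for _, s, z in group) / len(group)

# Every applicant has the same credit score, yet approval rates diverge
# by race because zip group carries the historical disparity forward.
print(f"white: {approval_rate('white'):.0%}")  # prints "white: 80%"
print(f"black: {approval_rate('black'):.0%}")  # prints "black: 20%"
```

The point of the sketch is that no variable named "race" ever enters the rule; the disparity emerges purely from a correlated input, which is exactly why such patterns are hard to attribute to any single variable after the fact.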
The company entered into a Fair Lending Monitorship agreement in December 2020 with the NAACP Legal Defense Fund and the Student Borrower Protection Center, agreeing to have an independent monitor oversee its lending practices. The final monitorship report documented approval disparities for Black applicants but did not find that specific variables explicitly operated as proxies for protected classes. This creates a gray area in fair lending law: the disparities exist, but proving intentional discrimination or reckless disregard for fair lending is more complicated when no explicitly protected characteristic appears in the algorithm.

How Did The Fair Lending Monitorship Reveal Approval Disparities?
The independent fair lending monitoring process provided the first rigorous, documented examination of Upstart’s loan approval patterns. By analyzing thousands of loan applications, the monitor compared approval rates for applicants with similar creditworthiness but different racial backgrounds. The findings showed meaningful disparities in approval rates for Black applicants compared to similarly situated white applicants, raising serious concerns about the practical effect of Upstart’s model on minority borrowers. One limitation of the monitorship finding is important to understand: the monitor did not find that specific variables operated as proxies for race, so the precise input variables driving the disparity could not be definitively identified.
This distinction matters legally. Fair lending law recognizes two types of discrimination: intentional discrimination (explicitly using race) and disparate impact (using neutral criteria that have a discriminatory effect). Upstart’s case appears to fall into a gray zone where clear disparities exist but pinpointing the discriminatory mechanism is more difficult. The monitorship report’s conclusion suggests the issue lies in how the model was trained and designed as a whole, not just in the individual variables selected.
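Disparate-impact analysis of the kind described above typically starts from a simple comparison of group approval rates. One common screening statistic is the adverse impact ratio, sketched below with illustrative figures (not the monitor’s actual numbers); the “four-fifths” threshold is a rule of thumb borrowed from employment law, not a binding fair lending standard.

```python
# Adverse impact ratio: the protected group's approval rate divided by
# the control group's. All counts below are invented for illustration.

def adverse_impact_ratio(protected_approvals, protected_total,
                         control_approvals, control_total):
    protected_rate = protected_approvals / protected_total
    control_rate = control_approvals / control_total
    return protected_rate / control_rate

# Example: 300 of 500 Black applicants approved vs. 450 of 500 white
# applicants approved.
ratio = adverse_impact_ratio(300, 500, 450, 500)
print(f"{ratio:.2f}")  # 0.60 / 0.90 ≈ 0.67

# Ratios below ~0.80 (the "four-fifths" rule of thumb) are often treated
# as a red flag warranting closer statistical review.
flagged = ratio < 0.80
```

A ratio well under 0.80, as in this example, does not by itself prove discrimination; it is the starting point for the harder causal question of which inputs produce the gap.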
What Is The 2026 Securities Lawsuit Alleging About Model 22?
In April 2026, shareholders filed a class action lawsuit in the Northern District of California alleging that Upstart misrepresented the performance and reliability of its Model 22 AI system to investors. The lawsuit covers investors who purchased Upstart securities between May 14, 2025, and November 4, 2025—a period when the company publicly discussed the model’s capabilities without disclosing its significant weaknesses. According to the complaint, Model 22 “frequently overreacted to negative macroeconomic signals,” leading the model to reject loans that would have been approved under Upstart’s prior systems.
This securities lawsuit differs from fair lending discrimination claims in a key way: it focuses on investor protection and disclosure obligations, not consumer harm. The lawsuit alleges that Upstart executives knew Model 22 had serious performance problems but failed to disclose this information to shareholders, allowing investors to make decisions based on incomplete information. If Upstart had disclosed that the new model was underperforming and overreacting to market conditions, institutional investors and employees holding company stock would have had material information affecting the stock’s value. The case highlights how AI problems in lending can harm not just consumers but also the investors and employees of companies deploying these systems.

Who Is Affected by Upstart’s Lending Practices?
Upstart operates as a lending platform that partners with banks and credit unions to approve consumer loans. The company does not lend directly to most borrowers—instead, it provides the AI model that lenders use to make approval decisions. This means borrowers affected by Upstart’s discrimination concerns may have been rejected for loans by banks and lenders using Upstart’s technology, without realizing that an algorithm trained by Upstart was behind their rejection. For consumers, the practical impact is significant but often invisible.
A borrower denied a personal loan or charged a higher interest rate based on Upstart’s model might assume the decision was based on traditional credit factors like income and credit score. In reality, Upstart’s algorithm may have weighted factors like zip code or education level in ways that disadvantaged minority borrowers. By 2024, with over $40 billion in loans processed through Upstart’s platform, hundreds of thousands of borrowers were affected. The harm is distributed across the country and difficult to quantify without access to Upstart’s internal data, which is why class action litigation is often the primary mechanism for affected borrowers to seek redress.
What Are The Limitations In Current Legal Action Against AI Lending Discrimination?
The fair lending laws and consumer protection statutes that apply to Upstart’s practices have significant limitations when dealing with AI systems. The Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA) were written in an era of human decision-making and simple statistical models. Proving discrimination under these laws requires showing either that a company used a protected characteristic directly or that it used variables as intentional proxies for protected classes. Modern AI systems trained on historical lending data create disparities in ways that are mathematically complex and difficult to attribute to specific decisions by company leadership.
Another critical limitation: even with access to data showing approval disparities by race, plaintiffs must still prove either intentional discrimination or reckless disregard for fair lending compliance. Upstart’s monitorship agreement suggests the company was making good-faith efforts to monitor for discrimination, which could be used as a defense against claims of reckless behavior. The legal standard for proving AI-driven discrimination has not yet fully evolved in case law, leaving borrowers and regulators with outdated tools to address a new problem. This is why the April 2026 securities lawsuit may prove more straightforward than fair lending claims—misleading investors about a product’s performance is easier to establish than discrimination when no explicit racial data appears in the algorithm.

How Does This Compare to Other AI Lending Discrimination Cases?
Upstart’s situation is not unique, though it is one of the most prominent cases of AI lending discrimination to date. In July 2025, the Massachusetts Attorney General settled with Earnest Operations, a student loan servicer, for $2.5 million over allegedly unlawful use of AI in lending decisions. The Earnest settlement demonstrates that regulators are increasingly willing to enforce fair lending laws against AI systems, even when the discrimination mechanisms are not straightforward.
The $2.5 million penalty in the Earnest case provides a benchmark for potential settlements in Upstart-related litigation, though the scope and impact of Upstart’s practices are significantly larger. The Earnest case and Upstart’s fair lending monitorship both reveal a pattern: AI lending discrimination cases often involve companies making good-faith efforts to comply with fair lending laws while deploying systems that still produce racially disparate outcomes. This is different from intentional discrimination, but the harm to consumers is equally real. As more cases settle and courts develop clearer standards, companies deploying AI lending systems will face increasingly stringent compliance requirements and exposure to class action liability.
What Should Borrowers and Investors Know Going Forward?
As of early 2026, the legal landscape for AI lending discrimination is rapidly evolving. Multiple pathways exist for affected borrowers: class action litigation in federal court, regulatory complaints to the Consumer Financial Protection Bureau (CFPB) or state attorneys general, and monitoring of fair lending settlements. For investors, the April 2026 securities lawsuit against Upstart demonstrates that AI performance problems can have material financial consequences and may trigger disclosure obligations and securities litigation.
The Upstart cases serve as a warning sign for the broader fintech and AI lending industry. Companies cannot rely solely on historical monitoring agreements or good-faith efforts to shield themselves from liability if their AI systems produce documented discrimination. The convergence of fair lending concerns, securities fraud claims, and regulatory enforcement suggests that the true cost of AI lending discrimination—in settlements, penalties, and reputational damage—is only beginning to become apparent.
Conclusion
Upstart Holdings faces serious legal challenges on multiple fronts: fair lending complaints stemming from approval disparities for Black applicants, a December 2020 monitorship agreement that documented those disparities, and an April 2026 shareholder securities lawsuit alleging that the company misled investors about Model 22’s performance. Over $40 billion in loans have been processed through Upstart’s platform, potentially affecting hundreds of thousands of borrowers nationwide. The fair lending concerns reflect a fundamental challenge in AI regulation: discrimination can occur through complex algorithmic systems without explicit use of protected characteristics.
If you believe you were denied a loan or charged higher interest rates due to Upstart’s lending practices, you may have legal options. Affected borrowers should monitor developments in the April 2026 securities litigation and consider filing complaints with the CFPB or their state’s attorney general. The emerging legal standards around AI lending discrimination will likely lead to settlements, policy changes, and new regulatory requirements that reshape how companies deploy AI in credit decisions.
