Why Recent Court Cases Are Targeting More Than Just Data Privacy Violations

Recent court cases are targeting far more than data privacy violations—they’re expanding across employment law, consumer fraud, antitrust issues, AI copyright infringement, and biometric data collection. This shift reflects a fundamental change in how courts and regulators view consumer protection: no longer confined to how companies handle sensitive information, but extending to how companies treat workers, deceive customers about pricing, use copyrighted materials without permission, and collect personal data through emerging technologies.

In 2025 alone, more than 13,000 class action lawsuits were filed, with the top 10 settlements worth a combined $70 billion, a record that signals the broadening scope of litigation. Companies like AT&T, Tinder, Anthropic, and Clearview AI have all faced settlements ranging from millions to billions for violations that go far beyond traditional data privacy breaches.

What Types of Violations Are Courts Now Targeting Beyond Data Privacy?

The landscape of class action litigation has fragmented into multiple legal territories that extend well beyond the data breach model that dominated headlines for years. Employment law violations, consumer fraud, antitrust conspiracies, AI copyright infringement, and biometric data collection now represent some of the highest-value settlements. A look at 2025’s major cases reveals the shift: AT&T faced a $1.8 million settlement for failing to pay minimum wages and overtime to California employees while denying compliant meal and rest breaks; two McDonald’s operators agreed to pay $3.55 million for failing to compensate employees for short meal breaks under Oregon law; and Tinder settled for $60.5 million, one of the largest consumer fraud settlements in history, for charging users over 29 years old significantly higher prices for premium subscriptions than younger users for identical features. These cases target conduct that has nothing to do with data privacy or cybersecurity, yet they command the same legal machinery and generate comparable payouts.

The expansion reflects courts recognizing that consumer harm occurs in multiple ways, not just through data theft or unauthorized access. A deceptive pricing scheme, a withheld wage payment, or unauthorized facial recognition scanning can injure millions of people simultaneously, making them ideal candidates for class certification. The 68% class certification approval rate in 2025—up from 63% in 2024—suggests courts are becoming more receptive to these broader claims. However, not all consumer complaints qualify: a case must demonstrate that the defendant’s conduct affected a large, identifiable group in a systematic way. Misleading a handful of customers doesn’t meet the threshold, but misleading an entire demographic (as Tinder did with age-based pricing) or systematically denying wage rights to thousands of workers (as AT&T and McDonald’s did) easily crosses that line.

How Employment Law Violations Became a Major Class Action Focus

Wage theft and labor law violations have emerged as one of the highest-value litigation categories, largely because they affect millions of workers in similar ways. Unlike a data breach that might affect some employees and some customers, employment violations typically touch every worker in a particular role or location. The AT&T settlement ($1.8 million) and McDonald’s settlement ($3.55 million) represent the tip of an iceberg: companies across retail, food service, tech, and logistics are facing similar claims for failing to pay overtime, denying meal breaks, or classifying employees as exempt when they should be non-exempt. These cases matter because wage theft is not a data privacy issue at all—it’s a straightforward breach of labor law that has compounded over months or years.

What makes employment litigation particularly vulnerable to class actions is the documentary evidence: timesheets, payroll records, break schedules, and employment contracts all create a paper trail. Unlike data breaches, where damages are speculative and hard to quantify, wage violations have clear calculations: if a worker was owed 5 hours of overtime per week and didn’t receive it for 2 years, the math is determinable. However, if an employee is properly classified as exempt or if local labor laws differ from federal minimums, the claim may fail—which is why courts scrutinize each case’s specific facts carefully. The trend also reflects changing priorities in enforcement: the U.S. Department of Labor and state attorneys general have increased investigations into wage theft, signaling that labor violations are receiving resources and attention comparable to privacy breaches.
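The "determinable math" described above can be sketched as a simple back-wage calculation. The hourly rate, unpaid hours, and 1.5x overtime premium below are illustrative assumptions for a hypothetical worker, not figures from the AT&T or McDonald's settlements:

```python
# Hypothetical back-wage arithmetic for an unpaid-overtime claim.
# All inputs are illustrative assumptions, not settlement figures.

def owed_overtime(hourly_rate: float, unpaid_ot_hours_per_week: float,
                  weeks: int, ot_multiplier: float = 1.5) -> float:
    """Back wages owed for overtime that was worked but never paid."""
    return hourly_rate * ot_multiplier * unpaid_ot_hours_per_week * weeks

# A worker owed 5 hours of overtime per week for 2 years (104 weeks)
# at an assumed $20/hour base rate with the common 1.5x premium:
print(owed_overtime(20.0, 5, 104))  # 15600.0
```

Actual claims add statutory penalties and interest, and the multiplier and lookback period vary by jurisdiction, but the core exposure is exactly this kind of multiplication, which is why certification is straightforward once records exist.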

Class Action Settlements Growth & Expansion (2025): 13,000 total filings; 68% class certification rate; $70 billion top 10 settlement value; employment cases 25% of major cases; AI/tech cases 18% of major cases. (Source: Duane Morris Class Action Review 2026; Stinson LLP Litigation Updates 2026)

Consumer Fraud and Deceptive Pricing as a Rising Litigation Category

Tinder’s $60.5 million settlement in 2025 for age-based pricing discrimination stands as a landmark case that redefined consumer fraud litigation. The company charged users over 29 years old significantly higher prices for premium subscriptions—the same product, same features, same everything—simply based on age. This wasn’t a data breach or a privacy violation; it was transparent, systemic price discrimination that Tinder disclosed in its terms of service but that users didn’t fully understand applied to them. The settlement affected millions of users, making it one of the largest consumer fraud settlements ever reached. Similarly, Michael Kors faced a class action for outlet stores using misleading sale offers based on false reference prices—a practice known as anchor pricing, where the “original” price is inflated to make the sale seem more dramatic. Christian Dior settled a class action for a January 2025 data breach by offering $100 without proof of loss, recognizing that consumers suffered harm even if they couldn’t prove they’d experienced identity theft.

The expansion into pricing fraud represents a shift in what regulators and courts consider actionable deception. Historically, false advertising cases required a company to outright lie about a product’s features or quality. Modern deceptive pricing cases, by contrast, focus on whether a company’s presentation (not necessarily an outright lie) would mislead a reasonable consumer about value or fairness. If a retailer marks down a price from $200 to $100 but that $200 was an inflated reference price that few people actually paid, the $100 discount may be considered deceptive even if the math is technically correct. However, this doesn’t mean every price variation or promotional strategy opens a company to litigation—courts distinguish between permissible marketing psychology and actionable deception. Tinder’s case was particularly strong because it involved identical product at different prices, which is harder to justify than price variations based on product differences or distribution channels.
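The anchor-pricing problem above is easy to quantify: the advertised discount is measured against the inflated reference price, while the discount a consumer actually receives should be measured against the price people really paid. The $110 "prevailing price" below is a hypothetical figure chosen to illustrate the gap:

```python
# Sketch of the gap between an advertised discount (against an inflated
# reference price) and the real discount (against the prevailing price).
# The prevailing price here is an illustrative assumption.

def discounts(reference_price: float, sale_price: float,
              prevailing_price: float) -> tuple[float, float]:
    """Return (advertised discount, real discount) as fractions."""
    advertised = 1 - sale_price / reference_price
    real = 1 - sale_price / prevailing_price
    return advertised, real

# "Was $200, now $100" -- but shoppers typically paid about $110:
advertised, real = discounts(200.0, 100.0, 110.0)
print(advertised)  # 0.5   (the advertised "50% off")
print(real)        # ~0.09 (the discount consumers actually received)
```

The deceptive-pricing theory, roughly, is that the difference between those two numbers is what misleads a reasonable consumer, even though the arithmetic on the price tag is technically correct.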

AI Copyright Infringement and Unauthorized Training Data Use

Anthropic’s settlement of $1.5 billion to resolve claims the company used copyrighted books to train AI models without permission represents one of the largest intellectual property settlements in history and opens a new category of class actions: unauthorized use of copyrighted training data. The settlement affects authors and publishers whose works were published before 2024 and were used to train Claude AI models. This case is fundamentally different from privacy litigation; it’s about intellectual property rights and the legitimate use of creative work. As more companies develop generative AI tools, questions about which training data they can ethically and legally use have become central to litigation strategy. Some companies, like OpenAI, have settled with news organizations for past training practices while negotiating licenses for future data access.

The Anthropic settlement signals that courts and juries view unauthorized training data use as a legitimate harm deserving compensation, even if no individual user’s data was mishandled. This expands class action litigation into the AI space in ways that go beyond traditional privacy concerns. However, the full scope of AI copyright liability remains unclear: some legal scholars argue fair use may protect training practices, while others contend that commercial AI training without permission constitutes infringement. The settlements suggest that settlement is becoming the default path for many companies rather than waiting for court decisions to clarify the law. For users and content creators, this means that AI tools you use may have involved legal disputes about how training data was sourced—a factor that could influence both company policy and future model development.

Biometric Data Collection Without Consent and Privacy Control Deception

Clearview AI’s $51.75 million settlement in 2025 for automatically collecting facial biometric data online without consent represents enforcement of the Biometric Information Privacy Act (BIPA), a 2008 Illinois law that has become the basis for nationwide litigation. Clearview scraped billions of facial images from social media and other online sources without notifying users or obtaining explicit consent, then sold access to law enforcement agencies and private companies. Similarly, Meta/Facebook settled a $50 million case in December 2025 for deceiving users about privacy controls and allowing third-party apps improper access to personal information—a violation that affected millions of California users. Both cases target conduct that doesn’t involve theft or unauthorized access in the traditional sense, but rather systemic deception about what personal data companies are collecting and how they’re using it. Biometric data is legally treated differently from other personal information in many jurisdictions because it’s unique, permanent, and difficult for individuals to change or control once it’s in a company’s database.

If your credit card number is compromised, you can get a new one; if your face is captured in a company’s facial recognition database, you cannot. Courts have recognized this distinction, making BIPA violations among the highest-value settlements. However, the law applies unevenly: BIPA covers biometric data collected by companies, but not all states have equivalent laws, and not all uses of biometric data are prohibited. Law enforcement agencies, for example, may have different legal obligations than private companies. Additionally, if a user consents to biometric data collection—such as by enabling facial recognition on a smartphone—the company generally has legal cover. The key vulnerability is deception: if a company collects biometric data without clear, prominent disclosure, or if it uses biometric data for purposes beyond what users reasonably understood, litigation follows.

Antitrust and Price-Fixing Conspiracies Reach New Scope

Major beef producers agreed to an $87.5 million settlement for allegations of conspiring to fix beef prices, with consumers who purchased beef in the U.S. during the relevant period eligible to claim. This case exemplifies how antitrust law is being weaponized through class actions to reach consumer harm caused by price-fixing cartels. The beef settlement involves a straightforward economic harm: if producers conspired to keep prices artificially high, every consumer who bought beef paid more than they would have in a competitive market. Unlike employment cases (which affect employees) or product-based fraud cases (which affect customers of specific brands), antitrust settlements can reach broad swaths of consumers because nearly everyone buys food.

Price-fixing class actions have existed for decades, but their application has broadened significantly. Courts now scrutinize more industries for collusion and are willing to certify classes for indirect purchasers—consumers who bought products through intermediaries rather than directly from the defendant. The beef settlement is notable because it represents enforcement of antitrust law through consumer litigation rather than government investigation alone. However, antitrust claims face evidentiary hurdles: proving that producers intentionally conspired (rather than coincidentally raising prices in sync) requires either direct evidence of communication or sophisticated economic analysis. Individual consumers claiming beef purchases must also contend with challenges proving they bought during the affected period, which is why many antitrust settlements offer broad, relatively low-value claims without extensive proof requirements.

Record Litigation Growth and the Trajectory of Class Action Expansion

The sheer volume of class actions filed in 2025 reflects the broadening scope of litigation: 13,000+ lawsuits filed in federal courts alone (averaging 36 new filings per day) with the top 10 settlements exceeding $70 billion in combined value for the first time in history. The 68% class certification approval rate in 2025 signals that judges are increasingly willing to permit large-scale litigation. Driving much of this growth are AI-related lawsuits, including “AI washing” claims in securities fraud and algorithmic discrimination cases targeting hiring, lending, and housing decisions made by artificial intelligence systems. These emerging legal theories suggest that the next wave of class actions will target algorithmic bias and opaque decision-making processes.

Website tracking technologies—pixels, session replay, and analytics tools—are being challenged under the California Invasion of Privacy Act (CIPA) as wiretapping violations, a legal theory that could expose hundreds of websites to litigation. Similarly, AI call analysis platforms are being scrutinized for consent violations when they record and analyze customer service calls without explicit permission. These cases suggest that the expansion of class action litigation is far from complete: as technology evolves, new categories of corporate conduct are being examined through the lens of consumer protection law. The trend appears to be accelerating rather than slowing, meaning more companies across more industries should expect class action exposure for conduct they may not have previously considered litigation-prone. The expansion also reflects a shift in legal culture: courts and regulators increasingly view corporate misconduct through a consumer protection lens, regardless of whether the misconduct involves data, employment, pricing, intellectual property, or algorithmic decision-making.
