The trial of Meta and YouTube now underway in Los Angeles could fundamentally reshape how courts treat social media companies’ legal responsibility for user harm. For the first time, a jury is deciding whether algorithmic design choices—infinite scroll, auto-play features, personalized recommendations—constitute actionable company conduct rather than protected editorial decisions. If the plaintiffs win, social media platforms may face liability akin to that for defective physical products, opening the door to hundreds more lawsuits currently waiting in the wings. The case centers on a 20-year-old plaintiff identified as “Kaley,” who began using YouTube at age 6 and Instagram at age 9 and who claims the platforms’ design features exacerbated her depression and led to self-harm.
This article explains what’s at stake in the trial, why legal experts call it a bellwether, and how a verdict could reshape social media litigation across the country. The trial began on January 27, 2026, in Los Angeles, and jury deliberations entered their fifth day on March 23, 2026, after four weeks of testimony. This isn’t a one-off case—it’s the first major trial to emerge from an MDL (multidistrict litigation) encompassing over 1,700 cases and 2,407 total claims filed since 2022. TikTok and Snapchat have already settled before trial, signaling how seriously those platforms took the evidence. Now Meta and YouTube face the jury alone; Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri were among the witnesses expected to testify about internal company decisions.
Table of Contents
- Why This Trial Sets a Legal Precedent for Social Media Accountability
- The Plaintiff’s Story and the Evidence of Platform Design
- Why TikTok and Snapchat Settled Before Trial
- The Alleged Addictive Features at the Center of the Trial
- What Meta and YouTube Face If They Lose
- Why This Bellwether Trial Matters for the Broader Litigation Landscape
- What Comes Next and How This Shapes the Future of Social Media Regulation
Why This Trial Sets a Legal Precedent for Social Media Accountability
The breakthrough legal theory at the heart of this case comes from Judge Carolyn Kuhl’s November 5, 2025, ruling, which established a “conduct-versus-content” distinction that could bypass Section 230 of the Communications Decency Act—the law that has shielded social media companies from liability for user-generated content for decades. The ruling says that a platform’s algorithmic design choices, notification systems, and engagement-maximizing features count as company conduct, not protected speech. In practical terms, Meta’s decision to implement infinite scroll or YouTube’s choice to auto-play the next recommended video is now treated more like a car manufacturer’s design choice than a newspaper’s editorial decision. This matters because Section 230 has long offered nearly impenetrable protection: for over 25 years, courts dismissed lawsuits against platforms on the grounds that users, not the platforms, created the harmful content.
But the conduct-versus-content distinction creates a new pathway: plaintiffs can argue that while the platform didn’t create a post about self-harm, it engineered features specifically to keep users engaged with such content longer. The distinction is narrow but powerful. Judge Kuhl’s ruling suggests that notification timing, the presence or absence of parental controls, and algorithmic recommendations designed to maximize engagement time are company conduct, not content-related issues. However, if the platforms prevail on appeal, Section 230 protections could snap back into place, making this distinction temporary. Several social media companies have signaled they’ll challenge unfavorable verdicts in appellate courts, where Section 230’s scope will likely be litigated for years.

The Plaintiff’s Story and the Evidence of Platform Design
The plaintiff, identified only as “Kaley” or K.G.M., started using YouTube at age 6 and began posting on Instagram at age 9—an age when many children don’t fully understand the psychological impacts of social comparison, algorithmic amplification, or infinite-scroll design. Now 20 years old, she claims that the platforms’ engagement-focused features led her to spend excessive time consuming content that worsened her depression, leading to self-harm, including cutting. Her testimony represents thousands of similar claims filed by individual users, schools, and state attorneys general. The trial revealed internal company documents showing how Meta and Google designed their algorithms and notification systems explicitly to maximize daily active users and time spent on platform. Witnesses testified about infinite scroll, auto-play videos, and algorithmic recommendations all working in concert to create what experts call a “supernormal stimulus”—design patterns that tap into basic human psychological vulnerabilities in much the same way that products like cigarettes or gambling machines do.
The difference, plaintiffs contend, is that the social media companies had neuroscience research showing these effects on young users’ developing brains. One critical limitation in her case: causation is difficult to prove in mental health claims. Kaley’s depression could stem from multiple factors—family history, school stress, peer relationships—not solely platform use. However, the trial focused on whether the platforms’ design negligently exacerbated an existing vulnerability, not whether they solely caused her condition. This is a lower legal bar than proving direct causation, which improves plaintiffs’ odds.
Why TikTok and Snapchat Settled Before Trial
In January 2026, before the Los Angeles trial started, TikTok settled the claims against it on undisclosed terms, as did Snapchat. Both companies evidently concluded that the risk of an adverse jury verdict outweighed the cost of settling. TikTok’s pre-trial settlement is particularly significant because the platform is already the subject of ongoing regulatory scrutiny and potential legislation. A high-profile loss in this case could have amplified calls for a federal ban or strict regulation.
Snapchat’s settlement similarly suggests the company weighed the reputational and financial risks of a jury verdict against settling quietly. Neither platform revealed settlement amounts, which means future plaintiffs can’t use those numbers as benchmarks—a tactic that keeps the settlement figures from becoming anchors for larger awards against non-settling defendants. Meta and YouTube, by contrast, chose to fight. Both companies likely believe they have stronger legal grounds to challenge the conduct-versus-content distinction on appeal, or they calculated that even an adverse jury verdict could be overturned on appeal. This high-stakes gamble means that if they lose, plaintiffs’ attorneys will have a jury-validated roadmap for the remaining 1,700 cases in the MDL.

The Alleged Addictive Features at the Center of the Trial
Four specific design features figure prominently in plaintiffs’ claims: infinite scroll, auto-play video, frequent notifications, and recommendation algorithms optimized for engagement rather than user well-being. Each of these, plaintiffs argue, reflects deliberate choices by Meta and Google engineers to maximize the time users spend on platform, knowing that this increases exposure to content—including self-harm content, pro-eating disorder material, and other harmful categories that platforms struggle to moderate effectively at scale. Infinite scroll is the most obvious culprit. Without a natural stopping point (like the bottom of a page or a “next” button), users lose track of how long they’ve been scrolling and consume more content than they intend. Auto-play shifts the default in the same direction: rather than choosing to watch the next video, users must actively choose to stop watching.
Recommendation algorithms trained on engagement metrics will prioritize extreme, inflammatory, or emotionally triggering content because such content generates clicks, comments, and shares. For vulnerable users like Kaley, algorithmic recommendations can surface self-harm communities and pro-eating disorder content, creating a feedback loop. However, these features also serve legitimate purposes. Infinite scroll provides a seamless user experience; auto-play reduces friction for consumers who want continuous video; recommendation algorithms can surface valuable, helpful content. The legal question isn’t whether these features exist, but whether their implementation, given what the platforms knew about their psychological effects on developing brains, was negligent or reckless.
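To make that trade-off concrete, here is a minimal, hypothetical sketch in Python of the two ranking philosophies at issue: one that scores items purely on predicted engagement, and one that also discounts an estimated harm risk. None of the names, signals, or weights below come from trial evidence or any platform’s actual code; they are illustrative assumptions only.

```python
from dataclasses import dataclass

# Hypothetical candidate item with model-predicted signals; field names and
# weights are illustrative only, not drawn from any platform's real system.
@dataclass
class Candidate:
    item_id: str
    predicted_watch_minutes: float  # expected minutes the user will watch
    predicted_share_prob: float     # expected probability of a share
    predicted_harm_risk: float      # 0..1 estimate that the item harms this user

def engagement_score(c: Candidate) -> float:
    """Score on expected engagement alone -- the design choice plaintiffs challenge."""
    return 0.7 * c.predicted_watch_minutes + 0.3 * c.predicted_share_prob

def wellbeing_adjusted_score(c: Candidate, harm_penalty: float = 5.0) -> float:
    """Same engagement signal, but discounted by the estimated harm risk."""
    return engagement_score(c) - harm_penalty * c.predicted_harm_risk

def rank(candidates: list[Candidate], scorer) -> list[Candidate]:
    """Order candidates best-first under whichever scoring rule is supplied."""
    return sorted(candidates, key=scorer, reverse=True)

if __name__ == "__main__":
    feed = [
        Candidate("calm-tutorial", 4.0, 0.05, 0.01),
        Candidate("triggering-clip", 6.0, 0.20, 0.90),
    ]
    # Pure engagement ranking surfaces the high-risk clip first;
    # the well-being-adjusted ranking demotes it.
    print([c.item_id for c in rank(feed, engagement_score)])
    print([c.item_id for c in rank(feed, wellbeing_adjusted_score)])
```

Under the pure-engagement rule the high-risk clip ranks first; under the adjusted rule it is demoted. In simplified form, that is the design choice the jury is being asked to evaluate.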
What Meta and YouTube Face If They Lose
Meta disclosed in financial filings that potential damages in this litigation could reach the “high tens of billions of dollars” if the company is found liable. This isn’t speculation—it’s a risk assessment by Meta’s finance team. That number reflects the scale of the MDL: more than 1,700 cases, some brought by individual plaintiffs and others brought as class actions on behalf of school districts and families across the country. The trial also involves thousands of pages of internal company documents.
Emails and internal communications showing that Meta and Google executives understood the mental health risks of their platforms, yet continued to optimize for engagement over user safety, amount to what lawyers call evidence of “consciousness of guilt.” If the jury sees evidence that executives knew their products were psychologically harmful and chose engagement metrics over safeguards, punitive damages become a possibility—potentially doubling or tripling compensatory damages. One important limitation: even if Meta and YouTube lose this trial, the case will almost certainly be appealed. Appellate courts often reverse jury verdicts, especially on novel legal theories. A loss in the LA case doesn’t guarantee losses in parallel trials scheduled throughout 2026; each jury will make its own determination.

Why This Bellwether Trial Matters for the Broader Litigation Landscape
A bellwether trial is an early, representative test case selected from a large MDL. Its outcome influences how other cases settle, how defendants value their risk, and whether remaining plaintiffs’ attorneys push for trial or negotiate settlements. With over 1,000 individual plaintiffs, hundreds of school districts, and dozens of state attorneys general in the queue, a plaintiff victory in the LA case would likely trigger a wave of settlements.
Conversely, a defense victory would embolden Meta and YouTube to fight more cases, potentially bankrupting smaller plaintiff firms that can’t afford multi-year trials. State attorneys general cases are particularly significant because they carry the threat of injunctive relief—not just damages, but court orders forcing platforms to redesign features or submit to regulatory oversight. Several states have filed suit under consumer protection statutes, not just on behalf of injured individuals but on behalf of the broader consumer population. A platform loss in the LA case could accelerate those state-level litigation strategies.
What Comes Next and How This Shapes the Future of Social Media Regulation
Regardless of the verdict, this trial has already shifted the legal and regulatory landscape. Judge Kuhl’s conduct-versus-content ruling has been cited in multiple other social media cases, creating a template for plaintiffs’ attorneys nationwide. Even if Meta and YouTube prevail on appeal, the ruling has demonstrated that Section 230 is not absolute—that judges can distinguish between user-generated content and platform design.
Parallel to this litigation, Congress and state legislatures are drafting bills to regulate social media. The trial’s evidence—internal emails, expert testimony about addictive design, data on mental health impacts—will likely inform those legislative debates. Lawmakers can point to trial evidence as justification for new regulations on algorithmic transparency, default settings, and parental controls. The trial, in other words, is shaping not just civil liability but potential statutory requirements.
