Jury deliberations are underway in a landmark social media addiction case in Los Angeles, where a 20-year-old woman identified as Kaley GM is suing Meta and YouTube for allegedly designing addictive features that harmed her mental health. The jury began deliberating on March 13, 2026, after approximately one month of testimony that examined how platforms like Instagram and YouTube may have contributed to the plaintiff’s depression, self-harm, and suicidal thoughts. This test case could reshape how courts hold social media companies accountable for the psychological effects of their platforms, potentially opening the door to thousands of similar lawsuits from other users who claim they were harmed by algorithmic features designed to maximize engagement.
The jury’s recent questions about the plaintiff’s family circumstances, her actual usage patterns as a child, and how to calculate damages suggest that jurors are seriously considering whether Meta and YouTube bear legal responsibility. Because only nine of the 12 jurors must agree to return a civil verdict, the threshold for reaching a decision is lower than in criminal cases, making a verdict more likely.
Table of Contents
- What Is the Jury Evaluating in This High-Profile Social Media Addiction Case?
- What Evidence Did Jurors Hear During the Month-Long Trial?
- What Do the Jury’s Recent Questions Reveal About Their Thinking?
- Why Is This Case Considered a “Test Case” for Social Media Addiction Litigation?
- What Are the Strongest Arguments on Each Side?
- What Happens After the Jury Reaches a Verdict?
- What Does This Case Mean for the Future of Social Media Regulation?
What Is the Jury Evaluating in This High-Profile Social Media Addiction Case?
The case centers on whether Meta and Google-owned YouTube deliberately designed their platforms with addictive features that caused documented harm to a specific user. The plaintiff started using YouTube at age six and Instagram at age nine—platforms the lawsuit alleges deployed infinite scroll, autoplay functionality, and beauty filters specifically engineered to keep young users engaged regardless of mental health consequences. The complaint does not argue that social media use alone caused the plaintiff’s depression and suicidal ideation, but rather that the platforms’ architectural choices were a “substantial factor” in her harm, even though other life stressors and family circumstances may have contributed.
This legal standard—requiring only that the platforms be a substantial factor, not the sole cause—significantly lowers the evidentiary bar compared to traditional personal injury cases. The jury must weigh testimony about how these features function, industry documents about their intended effects, and expert opinions on addiction mechanics against the defendants’ arguments that parental supervision, individual resilience, and broader mental health factors were more decisive. The distinction matters enormously: if jurors find the platforms liable under this standard, it suggests that knowingly deploying addiction-inducing design is enough to hold platforms accountable, regardless of how many other contributing factors existed in the plaintiff’s life.

What Evidence Did Jurors Hear During the Month-Long Trial?
Over approximately four weeks of testimony in a Los Angeles courtroom, both sides presented expert witnesses, industry insiders, and, potentially, the plaintiff herself to establish what the platforms knew about their features’ addictive potential and what effects those features had on young users. The plaintiff’s legal team likely presented evidence of Meta and YouTube’s own internal research, design documents, and communications showing that engineers understood that infinite scroll, autoplay, and algorithmic recommendation systems were built to maximize user engagement—sometimes explicitly at odds with user wellbeing. Expert witnesses probably testified about neurological responses to variable rewards, the dopamine-reinforcement loops created by notification systems and social validation metrics (likes, comments, shares), and how these mechanisms are particularly powerful during adolescence, when the brain’s reward system is still developing.
However, Meta and YouTube could credibly argue that their features serve legitimate purposes beyond driving addiction: infinite scroll improves user experience, autoplay simplifies content discovery, and algorithmic recommendations surface content users actually want to watch. The companies likely presented their own experts arguing that depression and self-harm are multifactorial conditions involving genetics, family dynamics, peer relationships, and other life events—and that a 20-year-old with family troubles and other stressors cannot credibly attribute her mental health decline solely to how a platform’s feed is engineered. The jury heard evidence on both sides and must now decide where the causal weight lies.
What Do the Jury’s Recent Questions Reveal About Their Thinking?
The questions the jury submitted during the week of March 17-21, 2026, offer important clues about which arguments resonated and where jurors are focusing their deliberations. They asked about the plaintiff’s family troubles and circumstances—a signal that jurors are seriously evaluating whether factors outside of social media use played a decisive role in her mental health decline. They also requested clarification on how much Instagram the plaintiff actually used as a child, suggesting they are testing the strength of causation claims by examining usage intensity.
Most significantly, jurors asked about damages calculation, which typically indicates that a jury has already made a preliminary finding on liability and is now moving toward determining how much compensation is appropriate. This sequence of questions suggests jurors may be leaning toward finding at least partial liability; otherwise, they would likely focus on threshold liability questions rather than damages methodology. The fact that they’re inquiring into specifics of the plaintiff’s usage and family context also suggests they are not accepting a straightforward narrative that social media alone caused harm—instead, they are conducting a detailed assessment of competing factors. This careful approach could cut either way: it might lead to a verdict holding the platforms partially liable while acknowledging other contributing factors, or it might conclude that those other factors were more determinative than the platforms’ design choices.

Why Is This Case Considered a “Test Case” for Social Media Addiction Litigation?
This lawsuit is not an isolated dispute but rather the leading edge of what could become thousands of similar claims against Meta, YouTube, TikTok, Snapchat, and other platforms. Courts have not yet clearly established whether social media companies can be held liable for mental health harms caused by addictive design—there is no settled precedent on whether engagement-maximizing algorithms constitute a form of negligence or product liability. A verdict in favor of the plaintiff would signal to other potential claimants that they have a viable legal theory and that juries are willing to hold platforms accountable even when multiple factors contributed to harm. Conversely, a defense verdict would discourage similar litigation by establishing that addiction-inducing design alone is not enough to trigger legal liability.
The case is significant precisely because both outcomes carry enormous consequences. A plaintiff verdict could accelerate class action lawsuits, regulatory pressure, and calls for new legislation specifically addressing social media’s mental health effects on minors. A defense verdict would likely be cited for years as evidence that platforms are not legally accountable for how their features affect vulnerable users, potentially foreclosing this avenue of legal redress. Media attention to the jury deliberations reflects the broader cultural anxiety about social media’s role in rising rates of depression, anxiety, and self-harm among young people—a concern that has motivated multiple legislative proposals and investigations but has rarely been tested in court with actual jury deliberation.
What Are the Strongest Arguments on Each Side?
The plaintiff’s strongest argument is that Meta and YouTube are in the business of maximizing user engagement, and they have invested billions in making that pursuit more sophisticated and targeted. The platforms employ thousands of engineers whose job is to keep users scrolling, watching, and returning—engineering akin to what makes slot machines addictive. Internal documents and whistleblower testimony have repeatedly revealed that these companies understand the mental health risks, particularly for adolescents, but have chosen engagement growth over user wellbeing. If jurors accept that this knowing choice amounts to legal responsibility, finding liability becomes fairly straightforward.
The defendants’ strongest counter-argument is that social media is one of many environmental factors in young people’s lives, and that attributing specific mental health outcomes to platform design requires ignoring the role of family dynamics, peer relationships, school stress, genetic predisposition to depression, and access to mental health care. Meta and YouTube did not force anyone to use their platforms, did not prevent the plaintiff from setting usage limits or seeking help, and did not prevent parents from monitoring their child’s online activity. The platforms can also argue that their features are used by billions of people without causing self-harm, which suggests that design choices are not themselves sufficient to cause serious mental illness. However, if other contributing factors made this plaintiff particularly vulnerable, and if the jury finds that the platforms’ design choices knowingly exploited that vulnerability, the defendants’ causation argument becomes much weaker.

What Happens After the Jury Reaches a Verdict?
Once the jury returns a verdict—which could occur within days or stretch over weeks depending on the complexity of deliberations—either side can appeal if they believe legal errors occurred during the trial. For the plaintiff, a victory would likely attract extensive media coverage and trigger a flood of new lawsuits from other social media users alleging similar harms. For the defendants, a loss could prompt immediate settlement discussions in pending similar cases and likely trigger appeals that could eventually lead higher courts to reconsider the governing legal standards. A defense verdict, by contrast, would likely stand and be treated as a signal that social media companies are not liable under current law for harms allegedly caused by addictive design.
Regardless of the outcome, this case will reshape how courts and regulators think about platform responsibility. A plaintiff victory would support calls for new legislation requiring social media companies to redesign their platforms to reduce addictive features, similar to how tobacco regulation followed litigation. A defense verdict might redirect that energy toward regulatory bodies rather than courts—toward state attorneys general, the Federal Trade Commission, and Congress to establish clearer rules about what design practices are permissible. The jury’s decision will likely influence pending litigation in California and other jurisdictions where similar cases have been filed.
What Does This Case Mean for the Future of Social Media Regulation?
The social media addiction litigation landscape is still in its earliest stages, with this trial serving as a high-profile test of whether courts can hold platforms accountable for psychological harms. Unlike tobacco litigation, which took decades and involved hundreds of cases before the industry was forced to change, social media litigation is compressed into a much shorter timeframe—platforms have existed for only 15-20 years, and the mental health crisis among young people has been documented and publicized extensively. This compression means that a single verdict could have outsized influence on how the industry, regulators, and legislators respond.
Pending legislation in California and federally would establish stricter requirements for how platforms can target minors and what algorithmic recommendations they can deploy. A plaintiff verdict would validate the legal theory behind these proposals and likely accelerate their passage. Conversely, successful defense arguments would support the industry’s position that existing consumer protection and product liability law already provides sufficient safeguards and that new regulation is unnecessary. The jury’s decision thus carries implications far beyond the individual award or judgment in this case.
