The evidence emerging from social media lawsuits could fundamentally reshape how courts hold platforms accountable for harm to children. A New Mexico jury recently determined that Meta violated the state’s Unfair Practices Act by knowingly hiding what it understood about child sexual exploitation risks and mental health impacts—making it one of the first major jury verdicts finding the company directly liable. This single verdict, combined with internal documents showing Meta researchers calling Instagram “a drug” where they’re “basically pushers,” detailed reports from TikTok acknowledging that “minors do not have executive mental function to control their screen time,” and a major legal ruling that algorithmic design choices are the platform’s own conduct—not protected speech—represents a new class of evidence that could determine the winners and losers in hundreds of pending cases.
These cases are no longer abstract debates. Nearly 800 school districts, 40+ state attorneys general, and 1,745 federal plaintiffs in multidistrict litigation are all moving forward simultaneously. The evidence is specific: internal chats, clinical research, expert testimony, and now jury verdicts. What changes the outcome of future lawsuits is not just the strength of the claims—it’s the kind of proof that’s allowed into the courtroom and how juries interpret it.
Table of Contents
- How Internal Company Documents Are Becoming Powerful Trial Evidence
- Expert Clinical Evidence Reshaping What Courts Accept as Proof of Harm
- Jury Verdicts Breaking New Ground in Social Media Liability
- The Judge’s Ruling That Changed What Evidence Counts as “Conduct”
- The Sheer Scale of Litigation Creating Momentum for Future Settlements
- Early Settlements Signaling Risk Acceptance
- What the Evidence Tells Us About the Future of Social Media Litigation
How Internal Company Documents Are Becoming Powerful Trial Evidence
Meta, TikTok, and YouTube researchers wrote things they likely never expected a jury to see. An internal Meta chat showed researchers stating plainly: “IG (Instagram) is a drug … we’re basically pushers.” That’s not a lawyer’s interpretation; it’s the company’s own words. TikTok’s internal report documented that “minors do not have executive mental function to control their screen time”—a clinical observation that contradicts any defense claiming parents should simply monitor usage. YouTube staff commented that “[d]riving more frequent daily usage [was] not well-aligned with … efforts to improve digital wellbeing,” admitting the tension between engagement metrics and known harms.
What makes these documents so powerful is that they establish knowledge and intent. In civil litigation, especially claims about deceptive practices or design choices that prioritize engagement over safety, internal communications showing that the company understood the problem—and continued the behavior anyway—often become the centerpiece of a damages verdict. Attorneys discovered that Meta delayed producing 73,841 documents to plaintiffs, a delay so significant that sanctions were requested. When a company appears to hide evidence, juries tend to interpret missing or delayed documents as evidence of guilty knowledge. However, if a company can show it took corrective action after these internal discussions—implementing parental controls, adjusting algorithms, or funding safety research—courts may weigh that differently. That distinction matters for determining liability in future cases: the evidence of harm doesn’t automatically prove ongoing recklessness if remedial measures exist.

Expert Clinical Evidence Reshaping What Courts Accept as Proof of Harm
Courts in social media cases are now allowing clinicians to testify directly about what they observe in their patients. A clinician survey revealed that 81% of clinicians report social media exacerbates anxiety disorders, 78% report it worsens depressive disorders, and 85% agree social media can be addictive. These aren’t abstract statistics; they’re data from practicing therapists, psychiatrists, and counselors who treat children every day. Expert testimony on how platform design features contribute to addiction is gaining acceptance as well.
Witnesses are explaining how notification timing, endless scroll feeds, algorithmic content selection, and absent parental controls all work together to create patterns of compulsive use. This is different from arguing that “screens are bad” or “social media causes depression.” Instead, experts are pointing to specific engineering choices—push notifications at times designed to maximize re-engagement, feeds that show increasingly extreme content to keep users scrolling—and explaining how those features create the same behavioral loops as gambling or other addictive behaviors. The limitation here is important: courts distinguish between correlation and causation. A clinician can testify that anxiety worsened after a patient started using Instagram, but a strong defense may argue that social anxiety disorder has many causes and that this particular patient’s symptoms correlate with other life events. The evidence becomes more persuasive when multiple clinicians describe the same pattern across hundreds of patients, and when expert testimony links specific platform features to documented psychological mechanisms of addiction.
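For readers who want the mechanism made concrete, here is a minimal Python sketch of the two design patterns experts keep pointing to: timing notifications to maximize re-engagement, and delivering rewarding content on an unpredictable, slot-machine-like schedule. Every function, curve, and probability below is a hypothetical assumption made up for this illustration, not any platform’s actual code.

```python
import random

# Illustrative sketch only: toy versions of two design patterns expert
# witnesses describe. Every model and number below is an assumption
# invented for this example, not any platform's actual code.

def predicted_reopen_probability(hours_since_last_session: float) -> float:
    """Toy model: re-engagement peaks a few hours after the last session
    (assumed peak at 3 hours), then decays toward zero."""
    peak_hours = 3.0
    return max(0.0, 1.0 - abs(hours_since_last_session - peak_hours) / 12.0)

def choose_send_time(candidate_hours: list[float]) -> float:
    # The engineering choice at issue: fire the push notification at
    # whichever time maximizes predicted re-engagement, rather than at a
    # time the user picked or in a fixed daily digest.
    return max(candidate_hours, key=predicted_reopen_probability)

def rewarding_positions(feed_length: int, hit_rate: float = 0.15) -> list[int]:
    # Variable-ratio reinforcement: rewarding posts land at unpredictable
    # positions in the feed, the same reward schedule slot machines use.
    return [i for i in range(feed_length) if random.random() < hit_rate]

if __name__ == "__main__":
    random.seed(0)  # deterministic demo output
    print("notification sent at hour:",
          choose_send_time([0.5, 2.0, 3.5, 8.0, 16.0]))
    print("rewarding posts at positions:", rewarding_positions(50))
```

The point of the sketch is not accuracy to any codebase; it is that each behavior, when the notification fires and how often the feed “pays off,” is a tunable parameter someone chose, which is exactly the framing expert witnesses use.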
Jury Verdicts Breaking New Ground in Social Media Liability
The New Mexico jury verdict against Meta represents a watershed moment. The jury found that Meta violated the state’s Unfair Practices Act, concluding that the company’s platforms harm children’s mental health and that it knowingly hid what it understood about the risks. This wasn’t a settlement or a nuisance payout; it was a jury of ordinary citizens concluding that Meta’s conduct crossed a legal line. That verdict becomes a powerful reference point in future cases. Plaintiff attorneys in other states can point to the New Mexico jury’s reasoning and say: “Other citizens have already decided this.” A Los Angeles trial involving a 20-year-old woman claiming Instagram and YouTube caused compulsive use and mental health struggles since childhood represents the next evolution.
This case is particularly significant because the plaintiff is documenting her harm timeline from childhood forward—showing a documented history of worsening mental health symptoms as platform features became more sophisticated. If the jury finds in her favor, it establishes a template for causation that future plaintiffs can mirror. However, jury verdicts are also unpredictable. A verdict in New Mexico or Los Angeles doesn’t guarantee an outcome in Texas, Florida, or Massachusetts. Juries in different regions may weigh parental responsibility differently, or may be more skeptical of claims linking social media use to mental health outcomes. The significance of these early verdicts is that they prove juries will listen to this evidence and can be persuaded—but they don’t eliminate variation in future outcomes.

The Judge’s Ruling That Changed What Evidence Counts as “Conduct”
Judge Carolyn Kuhl issued a ruling that could be decisive in multiple pending cases. She determined that algorithmic design choices qualify as the platform’s own conduct, not protected speech or publishing activity immunized by Section 230. This distinction is crucial. Publishing content written by users is protected. But the choices engineers make about when to notify someone, which posts to show, how long the feed scrolls, and what comes next in the algorithm are the platform’s own decisions. They’re not speech; they’re conduct. This ruling means that platforms cannot hide behind speech protections when defending their engineering choices.
A plaintiff attorney can argue: “Meta chose to use an algorithm that prioritizes content that generates the strongest emotional reaction, including anxiety-inducing content. That’s not publishing; that’s engineering a feature.” The company must defend that choice on its merits, not by claiming algorithms are protected speech. In litigation, this distinction shifts the burden: the platform must now explain why its design choices were reasonable, not why they’re constitutionally protected. The limitation is that the ruling is not yet binding nationwide. Other judges, in state and federal courts alike, may interpret Section 230 differently, particularly in more conservative districts. But as more judges adopt Kuhl’s reasoning, the legal landscape becomes more consistent, and platforms face greater exposure across multiple jurisdictions.
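To see why courts can treat ranking as conduct, consider a minimal Python sketch of an engagement-weighted ranker. Nothing here is drawn from any platform’s actual code; the field names, weights, and scoring function are illustrative assumptions, meant only to show that ordering a feed is an engineering decision separate from the user-written content being ordered.

```python
from dataclasses import dataclass

# Hypothetical sketch of an engagement-weighted ranker. The fields and
# weights are invented for illustration, not any platform's real model.

@dataclass
class Post:
    text: str                 # user-written content (the "speech")
    predicted_likes: float    # model-estimated chance of a like
    predicted_outrage: float  # model-estimated emotional-reaction score

def engagement_score(post: Post) -> float:
    # The design decision a court can treat as conduct: weighting
    # high-arousal reactions more heavily because they drive engagement.
    return 1.0 * post.predicted_likes + 3.0 * post.predicted_outrage

def rank_feed(posts: list[Post]) -> list[Post]:
    # Ordering the feed is the platform's own act, distinct from the
    # user-written text inside each post.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("calm news recap", predicted_likes=0.6, predicted_outrage=0.1),
        Post("anxiety-inducing rumor", predicted_likes=0.3, predicted_outrage=0.7),
    ])
    print([p.text for p in feed])  # the rumor ranks first under these weights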
The Sheer Scale of Litigation Creating Momentum for Future Settlements
The volume of pending cases is itself evidence that something is wrong. As of April 1, 2025, there are 1,745 lawsuits in federal multidistrict litigation against social media companies. The adolescent mental health MDL alone involves 2,172 actions. Forty-plus state attorneys general have filed their own lawsuits. Nearly 800 school districts have joined nationwide litigation against Meta, TikTok, and Snapchat. This isn’t a handful of outlier cases; it’s a broad consensus across individual plaintiffs, school systems, and state governments that these platforms have caused measurable harm. This scale matters because platforms typically settle to manage liability risk. When 1,745 federal cases are pending, the potential cost of defending all of them in court becomes unsustainable.
Settlement pressure increases. Early settlements from TikTok and Snapchat, while undisclosed in terms, signal that major platforms are finding it more economical to settle than to litigate. Future plaintiffs benefit from this pressure: once two major platforms have settled, the perception that settlement reflects an acceptance of risk becomes harder for the remaining defendants to shake. However, a warning: the volume of litigation also increases defense sophistication. Platforms are consolidating their legal strategies, sharing expert witnesses, and developing coordinated responses to evidence. The 1,745 cases won’t all result in favorable verdicts for plaintiffs. Some will be dismissed on jurisdictional grounds; others will be defeated on the merits. The size of the litigation landscape is an advantage for plaintiffs in aggregate, but not a guarantee in any individual case.

Early Settlements Signaling Risk Acceptance
Snapchat and TikTok have both settled their social media harm litigation, though the specific terms remain undisclosed. Undisclosed settlements often indicate that both parties wanted to avoid public jury verdicts—a strategic choice that typically means the defendant’s liability exposure was significant enough to motivate settlement, but the plaintiff’s proof wasn’t airtight enough to guarantee a massive jury award. When terms are withheld from public view, it’s often because the settlement amount or scope would have been embarrassing for one side to reveal.
The fact that platforms are settling at all is meaningful evidence in itself. Companies don’t settle cases they expect to win cleanly. These early settlements suggest that the companies’ own internal analyses concluded juries are likely to find liability if similar evidence comes to trial. For plaintiffs in pending cases, these settlements become reference points: “Major platforms have already determined the risk is too high to litigate.”
What the Evidence Tells Us About the Future of Social Media Litigation
The combination of internal documents, expert clinical evidence, jury verdicts, and new legal theories about algorithmic conduct is unlikely to disappear. If anything, future cases will benefit from the evidence already introduced in New Mexico, Los Angeles, and federal court. Plaintiff attorneys will have transcripts showing how expert witnesses explained notification timing, engagement loops, and addictive design. They’ll have internal company communications. They’ll have the New Mexico jury’s reasoning to cite.

Technology itself will evolve, and so will the evidence. As platforms implement parental controls, usage limits, or algorithm adjustments (changes often made in response to litigation pressure), future cases will examine whether those changes were genuine or cosmetic. Did the platform wait until litigation forced action, or did it implement safeguards proactively? The evidence of company responsiveness (or delay) will become part of new cases.
