Jury Still Considering Evidence in Landmark Meta and YouTube Trial

A jury in Los Angeles resumed deliberations on Tuesday in a landmark social media addiction trial against Meta and YouTube, after signaling difficulty reaching consensus on one defendant just days earlier. The jury, which began deliberations on March 13, 2026, has moved past determining liability and is now focused on calculating financial damages for K.G.M., a 20-year-old plaintiff from Chico, California, who alleges that addictive features intentionally designed by both companies caused years of anxiety, depression, and body dysmorphia during her adolescence. This case has drawn intense scrutiny not only because of what’s at stake for the defendants, but because it could establish critical legal precedent for how tech companies are held accountable for the psychological harms caused by social media algorithms and design choices.

Judge Carolyn B. Kuhl has warned that if the jury deadlocks, the case will require a partial retrial—a costly and time-consuming outcome that underscores the complexity of the claims. The case coincides with a major victory for state regulators: on March 24, 2026, a New Mexico jury found Meta liable on all counts in a separate child safety trial and ordered the company to pay $375 million in damages. That verdict, reached after a six-week trial, focused on Meta’s failure to protect children from sexual predators and its “unfair and deceptive” business practices. Together, these two high-profile cases represent unprecedented challenges to Meta and YouTube’s business models and have created a critical moment for consumers harmed by social media platforms.

What is the Los Angeles Social Media Addiction Trial?

The Los Angeles case centers on claims that Meta and YouTube deliberately engineered addictive features—including algorithmic recommendation systems, infinite scroll, push notifications, and engagement-driven content sorting—knowing they would harm young users. The plaintiff, identified only as K.G.M. to protect her privacy, filed suit alleging that both companies targeted her with predatory design practices during critical developmental years beginning in adolescence.

Unlike traditional product liability cases involving physical defects, this trial asks a fundamental question: Can social media platforms be held legally responsible for psychological and emotional injuries caused by features designed to maximize user engagement and advertising revenue? The trial is significant because it challenges the narrative that users simply “choose” to spend excessive time on social media. Instead, the plaintiff’s legal team argued that Meta and YouTube deployed sophisticated behavioral psychology techniques—the same methods used in gambling and addiction treatment research—to exploit vulnerabilities in adolescent brains. The defendants deny these allegations, contending that users voluntarily choose to use their platforms and that any negative mental health effects are not their legal responsibility. Judge Kuhl’s courtroom has become a focal point for tech accountability advocates and industry defenders alike, with the outcome potentially influencing how dozens of similar lawsuits proceed across the country.

The Plaintiff’s Testimony and Emotional Evidence

During trial, K.G.M. provided detailed testimony about her experience, describing how her initial social media use in adolescence quickly spiraled into a cycle of compulsive checking, anxiety over notifications, and comparison-driven depression when viewing peers’ curated content. Her testimony reportedly moved at least one juror to tears, suggesting the human impact of her allegations resonated powerfully in the courtroom.

The plaintiff’s mental health experts presented evidence that she developed symptoms consistent with behavioral addiction, characterized by withdrawal anxiety when separated from her devices and continued excessive use despite recognizing negative consequences—a pattern that parallels substance addiction in clinical terms. The key hurdle in these cases, however, is proving causation: that Meta and YouTube’s specific design choices directly caused the plaintiff’s mental health injuries, rather than other factors like peer pressure, family stress, or pre-existing conditions. The defense presented its own experts, who argued that the plaintiff’s symptoms could have multiple causes and that many adolescents use social media without developing serious mental health problems. This causation battle—separating what the companies did from what other factors may have contributed—is why the jury’s deliberations have been difficult and why reaching consensus on both liability and damages is proving such a challenge.

Meta Liability Findings: New Mexico Child Safety Verdict

Total Damages Award: $375 million
Unfair Practices Finding: liable on 100% of counts
Deceptive Practices Finding: liable on 100% of counts
Unconscionable Practices Finding: liable on 100% of counts
Child Safety Violations: liable on 100% of counts

Source: New Mexico Jury Verdict, March 24, 2026

The New Mexico Child Safety Verdict Changes the Landscape

While the Los Angeles jury deliberates on addiction and damages, Meta received a major blow on March 24, 2026, when a New Mexico jury found the company liable on all counts in a separate child safety trial and awarded $375 million in damages. Unlike the addiction case, the New Mexico verdict focused on whether Meta engaged in “unfair and deceptive” and “unconscionable” trade practices by failing to warn users and protect children from sexual predators. New Mexico prosecutors presented evidence that Meta knew its platforms were being exploited by predators to target minors, yet failed to implement adequate safeguards or be transparent about the risks.

Critical to the New Mexico verdict was testimony from Arturo Bejar, an ex-Meta engineering director, who described warning company executives after his own 14-year-old daughter received sexual solicitations on Instagram. Bejar discussed how Meta’s personalized algorithms—the same recommendation systems that maximize engagement—inadvertently benefit predators by helping them find and target vulnerable children more efficiently. The jury’s unanimous verdict suggests that evidence of known harms and an inadequate response proved persuasive. This verdict sets a precedent for the Los Angeles case by demonstrating that juries are willing to hold Meta accountable for platform harms, even when the company claims it did not intend those harms.

Internal Communications and the End-to-End Encryption Controversy

One of the most damaging pieces of evidence in the New Mexico trial came from Meta’s own internal communications. Prosecutors revealed that executives discussed the implications of CEO Mark Zuckerberg’s 2019 announcement about end-to-end encryption on Instagram and other Meta platforms. Internal messages showed that company leadership understood the policy would prevent approximately 7.5 million reports of child sexual abuse material (CSAM) from reaching law enforcement annually—a staggering figure that internal staff flagged as a significant child safety trade-off.

This evidence presents a critical dilemma for Meta: the company prioritized user privacy (through encryption) over the ability to detect and report child exploitation to authorities. While encryption itself is a legitimate privacy tool, the internal communications suggested Meta was aware of the specific quantity of harm prevention it was sacrificing. This trade-off between privacy and safety proved devastating in the New Mexico courtroom and could influence how jurors in Los Angeles weigh Meta and YouTube’s competing interests. The disclosure raises questions about whether companies must transparently communicate such trade-offs to users and regulators before implementation.

How the Undercover Investigation Revealed Meta’s Vulnerability Response

To build its case, New Mexico prosecutors used an innovative investigation method: they created undercover social media accounts posing as children and documented how quickly sexual predators approached them on Meta’s platforms. This undercover approach produced concrete evidence of the problem Meta allegedly knew about but failed to adequately address. Agents documented not just that solicitations occurred, but how Meta’s algorithmic recommendations—designed to find users with similar interests—could be weaponized by predators.

However, even this compelling evidence does not necessarily mean Meta acted negligently or unlawfully in how it responded to individual reports or complaints. The case hinged on the company’s overall business practices and whether it concealed known risks from consumers. The jury concluded it did, but future litigation will likely require defendants to show that they took reasonable precautions even if predatory exploitation still occurred. This distinction matters because it frames the liability question not as “did any harm happen” but rather “did the company’s practices or lack of transparency increase the risk of harm?”

The Damages Phase and Why Jury Consensus Is Proving Difficult

In the Los Angeles trial, the jury has now moved past the liability phase (determining whether Meta and YouTube are legally responsible) and entered the damages phase (determining how much they must pay if found liable). This transition is significant because damages calculations require a different type of analysis: jurors must assess the plaintiff’s economic losses (medical expenses, lost wages) and non-economic damages (pain and suffering, emotional distress). The damages phase is where jury agreement becomes especially elusive, as different jurors may assign vastly different values to the same injury.

Judge Kuhl’s warning that the case will require a partial retrial if the jury deadlocks reflects the reality that damages disagreements are common in complex civil cases. Unlike liability questions, which often have clearer right and wrong answers, damages are inherently subjective. One juror might believe $10 million adequately compensates the plaintiff for years of anxiety and depression, while another juror thinks the figure should be $50 million or $5 million. The jury’s struggle to reach consensus—announced after just days of deliberations on a case involving months of trial testimony—suggests the damages range is particularly wide.

What a Verdict Could Mean for Future Class Actions and Tech Regulation

If the Los Angeles jury eventually returns a verdict finding Meta and YouTube liable and awards substantial damages, it could catalyze a wave of similar lawsuits from other individuals who claim their mental health was harmed by social media addiction. Plaintiffs’ attorneys have been waiting for a successful precedent like this to expand litigation, and a verdict could transform how courts evaluate causation in technology harm cases. Conversely, if the jury finds for the defendants or deadlocks, it may signal that proving addiction causation in court remains exceptionally difficult, potentially slowing new litigation.

The combination of the New Mexico child safety verdict and the ongoing Los Angeles addiction trial also puts pressure on Congress and state legislatures to clarify the legal standards by which social media companies should be evaluated. Meta and YouTube have long relied on Section 230 of the Communications Decency Act, a federal law that traditionally shields platforms from liability for user-generated content. However, recent verdicts suggest courts are distinguishing between what users post and what companies design into their platforms—a distinction that could reshape tech regulation in the coming years.
