On March 24, 2026, a New Mexico jury handed Meta a historic defeat, ordering the company to pay $375 million for violating consumer protection laws and creating what prosecutors described as a “breeding ground” for child predators. This is the first time a state has prevailed at trial against a major technology company for child safety violations—a verdict that could reshape how courts and regulators evaluate social media platforms’ responsibility for harm to minors. New Mexico Attorney General Raúl Torrez brought the case in 2023, accusing Meta of knowingly designing Facebook and Instagram in ways that enabled child exploitation while deliberately ignoring internal warnings about the danger.
The case centered on a damning revelation: Meta’s own employees warned company leadership that its platforms were being weaponized to solicit and distribute child sexual abuse material, yet executives disregarded those concerns and made decisions that worsened the problem. During a nearly seven-week trial, evidence showed that internal messages from Meta staff raised alarms that CEO Mark Zuckerberg’s plan to encrypt Facebook Messenger would effectively hide approximately 7.5 million reports of child sexual abuse material from law enforcement. The jury found Meta engaged in “unconscionable” trade practices that exploited children’s vulnerabilities and harmed their mental health and safety.
Table of Contents
- How Did Meta’s Internal Safety Warnings Become the Foundation of the Verdict?
- What Did the Jury Find Meta Did Wrong?
- How Did Investigators Uncover Evidence of Child Exploitation on Meta’s Platforms?
- What Does This Verdict Mean for Child Safety on Social Media Platforms?
- Why Does the $375 Million Penalty Represent Meta’s Worst-Case Scenario?
- Why Are 40+ States Now Suing Meta Over Similar Claims?
- What Happens in Phase Two, and Will Meta’s Appeal Succeed?
How Did Meta’s Internal Safety Warnings Become the Foundation of the Verdict?
The trial exposed a pattern: Meta’s own employees repeatedly warned leadership about child exploitation risks, yet the company proceeded with decisions that ignored those warnings or made the underlying risks worse. Court documents revealed internal messages from Meta staff discussing the consequences of Zuckerberg’s 2019 announcement that end-to-end encryption would become the default on Facebook Messenger. The employees understood that end-to-end encryption, while protecting user privacy, would prevent Meta’s automated systems from detecting child sexual abuse material (CSAM) shared on the platform. When they calculated the impact, they estimated that approximately 7.5 million CSAM reports then being generated by Meta’s detection systems would no longer reach law enforcement, effectively removing a critical safeguard against child exploitation. These weren’t obscure technical discussions buried in a company Slack channel. They were explicit warnings, raised through official channels, that child protection on Meta’s platforms would deteriorate.
Yet the encryption rollout proceeded anyway. The jury saw these internal communications as evidence that Meta’s leadership was aware of the harm its decisions would cause to children and chose to move forward regardless. This wasn’t a situation where executives could claim ignorance of the risks; the company’s own safety teams had spelled them out in detail. The power of this evidence lay in its directness. Rather than asking a jury to infer what Meta knew or intended, prosecutors could point to contemporaneous documents showing exactly what employees had warned and when. The jury didn’t have to guess whether Meta understood the consequences of its actions; the internal messages proved it.
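For readers wondering why default encryption and abuse detection are in tension at all: platforms typically detect known abuse imagery by comparing content passing through their servers against databases of digital fingerprints of previously identified material. The Python sketch below is a deliberately simplified illustration of that mechanism and of why end-to-end encryption defeats it. It is not Meta’s actual pipeline; production systems generally use perceptual hashes (tools like PhotoDNA) rather than the cryptographic hash shown here, and every name and value in the sketch is hypothetical.

```python
import hashlib

# Hypothetical database of digests of previously identified abusive
# images, of the kind clearinghouses distribute to platforms.
# The value below is a placeholder, not a real digest.
KNOWN_ABUSE_HASHES = {"placeholder-digest-of-a-previously-identified-image"}

def server_side_scan(plaintext_attachment: bytes) -> bool:
    """Flag an attachment whose digest matches known abusive material.

    This works only because the server can read the plaintext bytes.
    (Real systems use perceptual hashes that survive resizing and
    re-encoding; SHA-256 here is purely illustrative.)
    """
    return hashlib.sha256(plaintext_attachment).hexdigest() in KNOWN_ABUSE_HASHES

def server_side_scan_e2ee(ciphertext: bytes) -> bool:
    """The same scan once end-to-end encryption is the default.

    The server now relays only ciphertext, and encryption is designed
    so that ciphertext reveals nothing about the plaintext. A digest of
    ciphertext will never match a database of plaintext-image digests,
    so the scan is effectively blind regardless of what was sent.
    """
    return hashlib.sha256(ciphertext).hexdigest() in KNOWN_ABUSE_HASHES
```

The internal warning amounted to this: making encryption the default would move every Messenger attachment from the first function’s world into the second’s.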

What Did the Jury Find Meta Did Wrong?
The jury found Meta liable for violating New Mexico’s consumer protection laws, concluding that Meta engaged in “unconscionable” trade practices that unfairly took advantage of children’s vulnerabilities and inexperience. The verdict also included a specific finding that Meta harmed children’s mental health and safety. These aren’t abstract legal conclusions; they translate into concrete findings that Meta designed its platforms knowing they would be used to exploit minors, and that the company prioritized other objectives (like encryption and user engagement) over protecting its most vulnerable users. “Unconscionable” is a legal term with specific meaning in consumer protection law. It describes practices so unjust, oppressive, or shocking to the conscience that the law will not tolerate them.
The jury’s finding on this standard suggests it viewed Meta’s conduct as egregious: not merely negligent or reckless, but deliberately indifferent to the suffering of children. The finding also matters because it can influence other courts and juries in the 40-plus state cases currently pending against Meta on similar grounds. However, the verdict doesn’t resolve every question about how to balance privacy and safety. Encryption serves legitimate purposes, including protecting people from government surveillance and criminal activity, and the jury didn’t rule that encryption is inherently harmful. Rather, the verdict reflects a judgment that Meta should have implemented alternative safeguards before rolling out encryption across its most popular messaging platform, or at least been transparent about the tradeoffs involved rather than presenting encryption as purely protective.
How Did Investigators Uncover Evidence of Child Exploitation on Meta’s Platforms?
One of the most striking parts of the trial involved the investigative methods the New Mexico Attorney General’s office used to document the problem. Investigators created multiple fake Facebook and Instagram profiles posing as children. These undercover accounts quickly encountered sexually suggestive content and, more disturbingly, direct requests for pornographic content. The ease and speed with which predators targeted these fake child accounts became powerful courtroom evidence. It wasn’t an abstract argument about algorithm design; it was a demonstration of how Meta’s platforms work in practice. Former Meta employees testified about specific design features that enabled predators to target and exploit children on the platforms.
Law enforcement officials described how Facebook and Instagram have become primary hunting grounds for individuals seeking child sexual abuse material, and New Mexico educators testified about students who had been exposed to exploitation on these platforms. This convergence of evidence from different sources created a picture that was difficult for Meta to counter: the platforms weren’t facilitating child exploitation by accident; they were designed in ways that made it easy. At the same time, the reliance on undercover accounts points to a limitation in what this kind of evidence can prove. The operation demonstrated that the platforms *could* be used to solicit child exploitation; it doesn’t by itself establish the frequency or scale of the problem in the real world. The jury, however, apparently found the combined evidence sufficient to conclude that Meta knowingly created conditions conducive to exploitation.

What Does This Verdict Mean for Child Safety on Social Media Platforms?
This verdict signals that juries are willing to hold tech companies financially accountable for how their platform design choices affect child safety, and that state attorneys general can successfully pursue such cases. The $375 million penalty is substantial, but the legal precedent matters more. Meta cannot credibly argue to other judges and juries that it didn’t know its platforms could be weaponized to exploit children; the New Mexico verdict rests on exactly the opposite finding. The verdict may also influence Meta’s future design decisions: any changes to its encryption timeline, content moderation policies, or recommendation algorithms will now be made in the shadow of this verdict.
Meta’s leadership now understands that decisions affecting child safety can result in direct financial liability. This creates an incentive to prioritize child protection in a way that mere regulatory guidance might not have done. However, the verdict doesn’t automatically mean all social media platforms will change their practices, nor does it establish clear legal standards for what design choices are acceptable. The case turned on New Mexico’s laws and the particular facts of Meta’s platforms and decisions. Different states have different consumer protection statutes, and what constitutes “unconscionable” conduct might be interpreted differently in another jurisdiction. Other platforms might avoid similar liability by being more transparent about risks or by implementing different safeguards, even if they maintain similar features.
Why Does the $375 Million Penalty Represent Meta’s Worst-Case Scenario?
The $375 million verdict reflects the maximum penalty available under New Mexico law, $5,000 per violation, with the jury finding that each violation justified the full statutory penalty. This isn’t a compromise verdict or a partial victory for either side; it’s the jury imposing the highest sanction the law allows. For context, Meta reported more than $160 billion in annual revenue in 2024, making this penalty significant but not a threat to the company’s viability. However, the verdict’s importance lies not in its absolute size but in what it signals about future liability. More consequential than the New Mexico verdict alone is what it portends for the ongoing litigation wave. More than 40 state attorneys general have filed lawsuits against Meta based on similar theories: that the company designed addictive features targeting young users and failed to protect them against exploitation and mental health harms.
If even a fraction of those cases reach trial and produce similar verdicts, the cumulative liability could reach billions of dollars. Compare this to the typical regulatory settlement: Meta’s 2019 FTC settlement over privacy violations was $5 billion, which the company could absorb as a cost of doing business. Multiple nine-figure jury verdicts would signal a different order of accountability. A critical limitation here is that one jury verdict doesn’t guarantee other juries will reach the same conclusion. Meta will appeal the New Mexico decision, and appellate courts might overturn parts of the verdict or reduce the penalty. Courts in other states might interpret their consumer protection laws differently. The precedent is important, but it’s not binding on courts outside New Mexico.
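As back-of-the-envelope arithmetic (these figures are inferred by combining the numbers reported above, not taken from any court filing), the implied scale looks like this:

$$
\frac{\$375{,}000{,}000}{\$5{,}000\ \text{per violation}} = 75{,}000\ \text{violations},
\qquad
40\ \text{states} \times \$375\ \text{million} \approx \$15\ \text{billion}
$$

The second figure is an upper-bound illustration, not a prediction: it assumes every pending state case produced an identical award, which, as noted above, one verdict in New Mexico cannot guarantee.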

Why Are 40+ States Now Suing Meta Over Similar Claims?
The New Mexico case emerged from a broader movement among state attorneys general to hold tech companies accountable for harms to children and young adults. More than 40 states have filed lawsuits accusing Meta of designing Instagram and Facebook specifically to be addictive and psychologically manipulative, particularly targeting teenagers. These cases allege that Meta knowingly caused mental health harms—including anxiety, depression, and eating disorders—and failed to disclose the risks to parents.
The pattern across these state cases suggests a coordinated legal strategy. States are challenging Meta’s design choices from multiple angles: sexual exploitation of children (the New Mexico case), addictive design features harming youth mental health, and competitive practices that have eliminated alternative social networks. The New Mexico verdict provides both a template and a legal advantage for other states: they can now point to a jury’s finding that Meta engaged in “unconscionable” practices, which considerably strengthens their own cases.
What Happens in Phase Two, and Will Meta’s Appeal Succeed?
The New Mexico trial’s verdict on consumer protection violations is only the first phase. On May 4, 2026, the judge—without a jury—will determine whether Meta created a “public nuisance” and, if so, whether the company should be required to fund public programs to address the alleged harms. This second phase could result in additional requirements beyond the monetary penalty, such as mandatory changes to Meta’s moderation systems, investment in child safety features, or educational programs about online exploitation. Meta has indicated it will appeal the verdict.
Appellate courts typically defer to jury verdicts unless there is a clear error of law or the verdict is unsupported by the evidence. Given the substantial evidence presented at trial (internal documents, expert testimony, undercover investigations), an appellate court might be reluctant to overturn the verdict entirely. However, Meta might persuade the court to reduce the penalty or to set aside findings it can show were not properly supported. The appeals process could take years, during which the final scope of Meta’s liability will remain uncertain.
