State AG Claims New Federal Agreement Restricts Social Media Platform Censorship

Yes. In late March 2026, the U.S. Surgeon General’s office, the Centers for Disease Control and Prevention (CDC), and the Cybersecurity and Infrastructure Security Agency (CISA) agreed to a landmark 10-year consent decree that restricts how federal agencies can pressure social media platforms to remove content. The settlement, stemming from the Missouri v. Biden case brought by the attorneys general of Missouri and Louisiana, represents the first operational restraint on federal pressure over platforms’ moderation decisions. Under the agreement, these agencies are prohibited from threatening social media companies with legal, regulatory, or economic punishment to compel removal of protected speech—conduct the lawsuit alleged had been used to suppress content about COVID-19, the 2020 election, and Hunter Biden’s laptop. This article examines what the agreement actually restricts, which agencies are bound by it, how the case unfolded, what conduct preceded the settlement, and what the restrictions mean in practice. The settlement reflects a growing tension between free speech principles and content moderation, with implications for how the government communicates with platforms going forward.

What Are the Core Restrictions on Federal Agencies?

The 10-year consent decree imposes three primary restrictions on the bound agencies. First, federal agencies are prohibited from threatening social media companies with legal, regulatory, or economic punishment to compel them to remove or suppress content. Second, agencies cannot “unilaterally direct or veto” content moderation decisions made by platforms. Third, agencies may still publicly state that specific posts are inaccurate or misleading—but they cannot pair those statements with implied or explicit threats that create pressure for removal.

The distinction matters because it allows agencies to maintain a public voice while preventing coercion. A government health official could say, “This claim about a medication is incorrect based on our research,” but cannot follow that statement with language suggesting the platform faces regulatory consequences for hosting the post. The restriction targets the pressure mechanism itself, not the government’s ability to speak about false or misleading health information. However, one limitation is that the decree does not address behind-the-scenes conversations that do not involve explicit threats—agencies could theoretically still request removal informally, so long as no consequences are mentioned.

What Conduct Led to This Settlement?

The Missouri v. Biden case, later captioned Murthy v. Missouri at the Supreme Court, alleged that federal agencies had systematically pressured social media platforms to suppress protected speech across multiple contentious topics. The lawsuit claimed the government pressured platforms over COVID-19 posts—including content questioning vaccines or discussing treatment options that differed from official health guidance.

The case also alleged suppression of content about the 2020 presidential election and censorship of reports about Hunter Biden’s laptop. The scope of that alleged conduct raised concerns about government overreach in the digital era, where social media serves as a primary public square. Unlike direct government censorship, which violates the First Amendment, the alleged conduct operated through intermediaries—platforms making moderation decisions under pressure. The settlement addresses this middle ground by drawing a line: platforms can moderate content freely, but federal agencies cannot weaponize regulatory authority to force that outcome. A key limitation of the remedy is that it applies only to the named agencies (the Surgeon General’s office, CDC, and CISA) and only for the 10-year decree period—other agencies and state governments are not bound by this specific agreement, and the restraints lapse once the decree expires.

[Chart: Key Agencies Bound by Federal Censorship Restriction Decree — U.S. Surgeon General’s Office (1), Centers for Disease Control and Prevention (1), Cybersecurity and Infrastructure Security Agency (1); non-bound federal agencies (10); state governments (50). Source: Missouri v. Biden Settlement Agreement, March 2026.]

Which Federal Agencies Are Bound by the Agreement?

Three federal agencies are specifically bound by the consent decree: the office of the U.S. Surgeon General, the Centers for Disease Control and Prevention (CDC), and the Cybersecurity and Infrastructure Security Agency (CISA). These agencies were identified in the lawsuit as key actors in pressuring platforms on health, election, and cybersecurity-related content. The Surgeon General’s office, responsible for public health messaging, was allegedly involved in pressure regarding COVID-19 content.

The CDC, as the lead federal health agency, faced similar allegations. CISA, which focuses on infrastructure security and election integrity, was allegedly involved in pressure regarding election-related content and narratives about election security. The restriction to these three agencies reflects the specific conduct alleged in the case, but it also reveals a limitation: other federal agencies—the FBI, State Department, Treasury, Department of Justice, and others—are not directly bound by this decree. Each would require separate litigation or agreement to impose similar restrictions. Additionally, the decree does not address pressure from the White House, presidential staff, or the Executive Office of the President, which the lawsuit had also implicated but which fall outside the specific agencies named in the settlement.

What Can Federal Agencies Still Do Under the Agreement?

The consent decree is designed to preserve legitimate government communication while eliminating coercive pressure. Federal agencies retain the right to communicate directly with platforms about content—they can request information about policy decisions, ask questions about moderation practices, and explain why certain posts are inaccurate. They can publish statements, fact-checks, and public health guidance. Agencies can point out that posts contain false information, and they can urge platforms to consider their official guidance in making moderation decisions.

The key difference is the absence of implied or explicit consequences. An agency can say, “This post contradicts CDC guidance on vaccines,” but not, “If you don’t remove this post, we may investigate your data practices.” This distinction creates a tradeoff: agencies lose leverage to ensure compliance with their preferred moderation outcomes, but platforms retain independence and legal protection. A practical example: the CDC could issue a public statement saying that a popular post contains medical misinformation, but under the decree, it cannot threaten to report the platform to Congress or suggest regulatory retaliation for non-compliance. The limitation here is defining what counts as an “implied threat”—some conversations could fall into a gray area where the intent and effect are ambiguous.

What About Threats, Pressure, and Regulatory Consequences?

The decree explicitly prohibits “threatening” social media companies with legal, regulatory, or economic punishment to compel content removal. This covers direct threats (explicit warnings of investigation or enforcement) and indirect threats (statements that would reasonably be understood as threatening regulatory action). It also applies to economic pressure—for instance, an agency cannot suggest that a platform’s federal contracts are at risk based on its moderation decisions. However, the agreement contains an important limitation: it does not prevent agencies from pursuing legitimate regulatory or legal action against platforms for other reasons, such as genuine violations of data protection laws, antitrust concerns, or consumer protection statutes.

The key distinction is causation. An agency cannot punish or threaten to punish a platform because of content moderation choices, but it can enforce laws that exist for independent reasons. For example, if the FTC pursues a case against a platform for deceptive privacy practices, that is distinct from pressuring the platform to suppress speech. The challenge in enforcement will be proving the causal connection—establishing that a threat was made specifically to compel content removal, not for some other regulatory reason.

What Are the Enforcement Mechanisms and Violations?

The consent decree is enforceable through federal court. If the bound agencies violate the terms, the parties can return to court to enforce compliance, seek monetary damages, or request injunctive relief. The agreement includes reporting requirements, allowing monitoring of whether agencies are adhering to the restrictions. However, a practical limitation is that proving a violation often requires access to internal government communications—emails, memos, and recorded calls between agency officials and platform representatives.

Without whistleblowers or discovery in related litigation, violations could occur without detection. The decree applies for 10 years from the date of settlement, which means the restrictions will remain in place through 2036. After that period expires, the restraints end unless renewed or extended. This creates a potential loophole: agencies could anticipate the expiration and resume pressure campaigns before the decade concludes, or new administrations might challenge the decree’s legitimacy. Additionally, the agreement does not prevent Congress from pressuring platforms or subpoenaing company records—it applies to executive branch agencies only, leaving the legislative branch’s influence on platforms largely unaddressed.

What Do State Leaders Say About the Settlement?

Louisiana Attorney General Liz Murrill, whose state brought the case alongside Missouri, stated that the settlement addresses the concerns raised during the litigation. Missouri Senator Eric Schmitt, who initiated the case as Missouri Attorney General, called the settlement “the first real, operational restraint on the federal censorship machine.” Schmitt’s language suggests the case was framed not merely as a legal dispute over agency authority but as part of a broader political debate about government power in the digital age.

The support from state-level officials underscores the settlement’s significance as a victory for those who argued federal agencies had overstepped their authority. However, critics might argue the decree does not address the deeper question of whether federal agencies should engage in platform communication at all, or whether some degree of government-platform coordination is necessary for public health and security purposes. The settlement represents a negotiated compromise rather than a complete ban on agency-platform contact.

What Happens Next for Free Speech and Government Communication?

The settlement establishes a precedent for restraining government pressure on platforms, but it applies only to the three named agencies and a limited timeframe. Future litigation will likely test the boundaries of what constitutes an illegal threat, how to prove causation between government statements and platform moderation, and whether other agencies should face similar restrictions. The case may inspire additional lawsuits against federal agencies not covered by this decree or encourage Congress to pass legislation codifying the principles.

Looking forward, the settlement highlights an ongoing tension in the digital age: how governments balance public health and security messaging with preserving free speech protections. As social media becomes more central to public discourse, disputes over government-platform relationships are likely to intensify, making this Missouri v. Biden settlement a foundation for future regulatory or legislative action.

Conclusion

The 10-year consent decree settling Missouri v. Biden represents a significant constraint on federal agency power to pressure social media platforms into content removal. The U.S. Surgeon General’s office, CDC, and CISA are now prohibited from threatening platforms with legal, regulatory, or economic punishment to compel moderation decisions, though they retain the ability to publicly comment on content accuracy and work with platforms on policy matters.

The settlement emerged from allegations that these agencies had pressured platforms to suppress posts about COVID-19, the 2020 election, and Hunter Biden’s laptop—conduct that raised free speech concerns about government overreach in the digital public square. For citizens and consumers watching these debates, the settlement affirms that government actors operate under legal constraints when they attempt to direct platforms’ moderation decisions. However, the decree’s limitation to three specific agencies and a 10-year window means that much government-platform interaction remains unaddressed. Ongoing litigation and congressional action will likely shape the broader framework governing how federal agencies engage with social media companies.

