On March 24, 2026, the Trump Administration agreed to a landmark federal consent decree resolving Missouri v. Biden, a case challenging government pressure on social media platforms to remove constitutionally protected speech. The settlement prohibits the U.S. Surgeon General, Centers for Disease Control and Prevention (CDC), and Cybersecurity and Infrastructure Security Agency (CISA) from pressuring Facebook, Instagram, X, LinkedIn, and YouTube to censor content—a major win for free speech advocates who argued that agencies had systematically pushed platforms to remove content about controversial topics ranging from election integrity to vaccine safety concerns.
The 10-year decree, once approved by the court, represents the first major settlement of its kind, establishing a binding legal framework that prevents federal agencies from exercising influence over private platform moderation decisions. The case itself was brought by the states of Missouri and Louisiana, represented by the New Civil Liberties Alliance (NCLA), a nonprofit legal organization focused on constitutional rights. The lawsuit alleged that federal agencies had crossed a constitutional line by pressuring platforms to remove or suppress content the government deemed “misinformation”—a claim that challenged government power in the digital age and raised fundamental First Amendment questions about who can be held accountable for censorship when private companies act in response to government pressure.
Table of Contents
- What Legal Arguments Made the Missouri v. Biden Case Significant?
- Core Restrictions: What Federal Agencies Can and Cannot Do
- Which Federal Agencies Are Bound and Why They Matter
- How the Consent Decree Changes the Relationship Between Government and Social Media Platforms
- What Exactly Counts as Prohibited “Pressure” Under the Decree?
- The 10-Year Duration and Court Approval Process
- Precedent and Implications for Government Regulation of Digital Speech
What Legal Arguments Made the Missouri v. Biden Case Significant?
The Missouri v. Biden case centered on a novel legal argument: that when federal government agencies pressure private social media companies to censor speech, it may constitute state action violating the First Amendment, even though the platforms themselves are private entities. Traditionally, the First Amendment restricts government censorship directly, not the editorial decisions of private companies. However, the plaintiffs argued that if government officials sufficiently pressure private platforms to remove content, the resulting censorship becomes effectively a government action, and therefore unconstitutional.
This argument pushed the boundaries of how courts think about free speech protection in the digital age, where government influence over private platforms could have effects nearly identical to direct government censorship. The consent decree represents a practical acknowledgment of this concern: rather than litigate the constitutional question to a final judgment, the Trump Administration agreed to stop the challenged conduct. The settlement doesn’t technically resolve whether the government’s past actions violated the Constitution—instead, it simply binds the specified agencies to refrain from pressuring the specified platforms going forward. This approach allowed both sides to avoid a Supreme Court battle that could have created precedent affecting government agencies across the board.

Core Restrictions: What Federal Agencies Can and Cannot Do
Under the consent decree, the U.S. Surgeon General, CDC, and CISA are now legally prohibited from pressuring social media platforms—Facebook, Instagram, X, LinkedIn, and YouTube—to censor, suppress, demonetize, or remove constitutionally protected speech. The decree forbids agencies from threatening platforms with regulatory action, offering regulatory benefits in exchange for content removal, or publicly criticizing platforms in ways designed to coerce content moderation decisions. In practice, this means a CDC official cannot contact Facebook’s head of policy requesting that false vaccine information be removed, nor can CISA pressure X to take down content about election security that the agency disagrees with, even if officials believe the content is inaccurate or harmful.
However, a critical limitation applies: the decree only covers the five named platforms and only binds the three specified federal agencies. It does not apply to other federal agencies (the FBI, Department of Justice, Department of Homeland Security, or any others), nor does it create restrictions for state governments or local law enforcement. Additionally, the decree does not apply to other social media platforms like TikTok, Snapchat, Telegram, or smaller platforms—only Facebook, Instagram, X, LinkedIn, and YouTube are covered. For example, if the FBI believes that Telegram is being used to coordinate extremist activity, the decree does not restrict the FBI’s ability to pressure Telegram about that content. This narrow scope is important to understand: the consent decree is a settlement between specific parties, not a blanket prohibition on all government-platform interactions.
Which Federal Agencies Are Bound and Why They Matter
Three federal agencies are specifically named in the consent decree: the U.S. Surgeon General, the CDC, and CISA. The Surgeon General is the nation’s chief health advocate and has significant platform influence in public health messaging, particularly around disease, vaccines, and health misinformation. The CDC, as the federal agency responsible for disease control and prevention, became a central focus of the litigation because it had reportedly flagged content about COVID-19 treatments and vaccines to social media platforms for removal.
CISA, the federal cybersecurity agency, was included because it had engaged with platforms about election security misinformation and content it believed posed threats to critical infrastructure. These three agencies were not randomly selected—they were the ones whose documented communications with social media companies formed the factual basis of the lawsuit. Other federal agencies with significant public-facing communications—such as the National Institutes of Health, the Food and Drug Administration, or the State Department—are not bound by this particular decree, meaning they retain more flexibility in how they communicate with platforms about content concerns. The practical impact is that the CDC cannot email Facebook about a viral claim regarding a new disease outbreak and request removal, but the State Department could theoretically still contact platforms about foreign disinformation campaigns targeting U.S. infrastructure, depending on how broadly or narrowly courts might interpret the decree in future disputes.

How the Consent Decree Changes the Relationship Between Government and Social Media Platforms
For social media platforms, the consent decree provides legal protection against certain forms of government pressure that had become routine. Before the settlement, platforms faced pressure from multiple directions: they could receive requests from federal agencies to remove content, requests from members of Congress in public hearings or private meetings, and requests from state attorneys general. The consent decree does not eliminate all government pressure, but it does create a clear legal boundary for three agencies and five platforms. If the CDC tries to pressure Facebook to remove content about a treatment the agency opposes, Facebook can now point to the consent decree as a legal shield against that pressure.
However, the practical effect depends on implementation and enforcement. The decree will require court oversight to ensure compliance—either through the plaintiffs filing enforcement motions if they believe violations occur, or through regular reporting by the agencies to the court. The decree’s effectiveness will depend on what counts as “pressure.” Does it include public statements? Private conversations? Social media posts? Agencies may argue that general public messaging about health concerns is distinct from “pressuring” platforms, while the plaintiffs may interpret any government agency communication to a platform about content removal as prohibited. For example, if the Surgeon General tweets that a particular claim about vaccines is false, is that pressure on the platform to act, or just protected speech by the government official? The decree’s language will likely be tested in court if disputes arise.
What Exactly Counts as Prohibited “Pressure” Under the Decree?
The consent decree language prohibits agencies from “pressuring” platforms—but “pressure” can take many forms, and the decree will inevitably be interpreted through future disputes. Clear violations would include explicit demands: a CDC official telling Facebook’s leadership “remove this post or we will recommend Congress defund you” is unambiguous censorship pressure. Similarly, conditional offers—“we will help you with regulatory approval if you remove this content”—would clearly violate the decree. Public statements designed to shame platforms into action might also cross the line; if a government official holds a press conference specifically to demand that platforms remove particular content, that could constitute prohibited pressure.
Gray areas will almost certainly emerge. What if a government scientist publishes an article in a medical journal explaining why a particular claim is scientifically false, and the platform sees the publication and adjusts its content policies? That’s not direct pressure, but it’s influence. What if an agency holds a meeting with a platform to discuss misinformation generally, without naming specific content? Is that pressure? What if an agency simply informs a platform about content it believes violates the platform’s own terms of service—is that pressure, or helpful information sharing? These questions will likely require future court decisions, and the decree’s actual impact on government-platform relations will depend on how strictly or generously courts interpret the “pressure” language. The safer course for agencies is to cease direct communications with platforms about specific content, but some government officials may test the boundaries.

The 10-Year Duration and Court Approval Process
The consent decree is set to remain in effect for 10 years once approved by the court—a substantial timeframe that reflects the seriousness of the settlement from both sides. Court approval is not automatic; the judge must determine that the settlement is fair and reasonable before it becomes enforceable. The timeline for court approval was not specified in the public announcement as of March 24, 2026, but the process typically takes weeks to months. Once approved, the decree becomes binding law, and violations could result in contempt proceedings against agency officials or other court-ordered enforcement.
During those 10 years, the landscape of social media, federal regulation, and public health crises may change significantly. A new federal administration could attempt to withdraw from the settlement, though doing so would likely require court approval and would face legal challenge from Missouri and Louisiana. The decree also does not address what happens after the 10 years expire—it could be extended, modified, or allowed to lapse depending on the political and legal context at that time. This means the consent decree creates a temporary framework rather than a permanent change to the legal relationship between government and social media platforms.
Precedent and Implications for Government Regulation of Digital Speech
The Missouri v. Biden settlement is significant because it is the first major federal consent decree explicitly addressing government-platform coordination on content moderation. It signals that courts may recognize limits to government pressure on private platforms, even when the government acts with good intentions (such as fighting misinformation about public health). The settlement occurred under the Trump Administration, which had been critical of perceived censorship of conservative content, but the legal principle—that government pressure on private companies to censor speech may violate the First Amendment—could apply regardless of which party controls government or which content is at issue.
Looking forward, the decree may inspire similar lawsuits against other federal agencies (the FBI, Homeland Security, the State Department) or against state and local law enforcement that pressure platforms. It may also shift how federal agencies approach public health communication, moving them toward direct public messaging rather than behind-the-scenes requests to platforms. However, the decree does not resolve the underlying tension: federal agencies genuinely believe that false health information can cause harm, and platforms face pressure from multiple directions to remove content. The settlement establishes a legal boundary, but the cultural and political debate about government, tech platforms, and free speech will likely continue.
