Grammarly Faces Lawsuit Over Alleged Unauthorized Use of Writer Data

Grammarly is facing a federal class action lawsuit filed on March 11, 2026, in the U.S. District Court for the Southern District of New York for allegedly using the names and identities of famous writers—including Stephen King—without their consent in a commercial AI product.

Journalist Julia Angwin initiated the case after discovering that Grammarly’s “Expert Review” feature, a $12-per-month tier launched in August 2025, simulated writing feedback from well-known journalists, authors, and editors as if they were personally providing editorial guidance, when in fact the feedback was AI-generated. According to Angwin’s legal team, approximately 40 to 50 people have already come forward reporting they were featured in the tool without permission, making this a significant privacy and publicity rights violation. This article examines the lawsuit’s details, the legal violations at stake, who is affected, and what this means for consumers who rely on AI writing tools.

What Is Grammarly’s Expert Review Feature and What’s the Lawsuit Alleging?

The “Expert Review” feature was designed to offer users premium writing feedback by framing AI-generated corrections as coming from named, real-world writing professionals. When a subscriber paid for this $12-per-month tier, they would receive suggestions that appeared to be authored by specific journalists, novelists, editors, and other literary figures—prominent names that lent credibility and perceived expertise to the AI feedback. However, Grammarly never obtained consent from any of these individuals to use their names or create a commercial product featuring their identities. The lawsuit alleges that this constitutes unauthorized appropriation of the writers’ names for commercial purposes, a direct violation of their publicity and privacy rights under both New York and California law.

The core legal claim centers on identity misappropriation. When a company uses someone’s name, likeness, or identity to sell a product without permission, it violates state publicity rights laws that protect individuals from having their reputation and celebrity value exploited commercially. In this case, Stephen King, Julia Angwin, and other featured writers had no say in whether their names were attached to an AI product, and they received no compensation for the commercial use of their identities. Grammarly’s approach effectively created “AI doppelgängers”—AI-generated personas masquerading as real experts—which is distinct from the company simply using AI to provide writing feedback. The distinction matters legally: using AI for feedback is acceptable; falsely attributing that feedback to real, named individuals without consent is not.

What Privacy and Publicity Rights Are at Stake?

Privacy and publicity rights laws protect individuals from having their identities, names, and likenesses commercialized without permission. New York and California both have strong statutes that prohibit this kind of appropriation. These laws exist because a person’s name and reputation have commercial value; if a company can freely use your name to sell products, it effectively steals that value from you. The plaintiff’s legal team argues that Grammarly violated these protections by creating a commercial feature built entirely around named individuals’ identities—simulating their editorial voices, expertise, and reputations—without paying them or even asking permission.

However, there’s an important caveat: these laws sometimes permit use of someone’s name in news reporting, commentary, or educational contexts without consent. The lawsuit’s strength depends partly on whether a court views the “Expert Review” feature as a purely commercial product (where consent is required) or something closer to endorsement-style content (where different rules may apply). Grammarly’s position that the feature was merely using AI and the writers’ names were incidental will likely face significant scrutiny, especially given that the feature prominently marketed real names as a selling point. The company’s rapid decision to disable the feature entirely within two days of the lawsuit filing suggests even Grammarly’s legal team recognized the vulnerability of their position.

Timeline: Expert Review feature launched (August 2025) → lawsuit filed (March 11, 2026) → feature disabled (March 13, 2026) → legal proceedings begin. Source: public court filings and company announcements.

Who Is Affected and What Happened to Them?

Julia Angwin, an acclaimed journalist known for her reporting on privacy and surveillance, discovered her name was being used in Grammarly’s Expert Review feature without her knowledge or consent. Upon learning this, she filed the class action lawsuit, seeking to represent not just herself but all other writers whose identities were similarly misused. Her legal team has already been contacted by 40 to 50 individuals who report they were featured in the tool without permission—a significant cohort that suggests this wasn’t an isolated mistake but a systematic practice across Grammarly’s product development.

The class action structure is important because it allows individuals who were harmed by the same practice to pool their claims and pursue collective relief. If the lawsuit succeeds, affected writers could potentially recover compensation not just for the unauthorized use of their names, but potentially for lost licensing fees, reputational harm, and other damages. Grammarly’s swift action—disabling the Expert Review feature on March 13, 2026, just two days after the lawsuit was filed—indicates the company recognized the problem was serious enough to warrant immediate remediation. However, disabling the feature doesn’t undo the damage to the writers whose names were already used commercially for months.

What Did Grammarly Do in Response to the Lawsuit?

Immediately after Julia Angwin filed the class action lawsuit on March 11, 2026, Grammarly disabled the “Expert Review” feature on March 13, 2026. This rapid response was likely a legal strategy to minimize ongoing harm and demonstrate to the court that the company was taking the concerns seriously. By removing the feature, Grammarly halted further commercial use of the writers’ names and identities, which could be viewed favorably in settlement negotiations or trial.

However, disabling the feature is a reactive measure, not a solution to the core problem: Grammarly had already been commercializing these writers’ identities without consent for approximately seven months, from the feature’s August 2025 launch through March 2026. The damage to the affected writers’ reputations and the commercial benefit Grammarly derived from exploiting their names during that period remains unresolved. The lawsuit will likely determine whether Grammarly must pay compensation for that past unauthorized use and whether additional injunctions are needed to prevent similar practices in the future. The company has not yet publicly announced plans for how it will compensate the affected writers or what safeguards it will implement to prevent this type of violation from happening again.

What Are the Broader Privacy Implications for AI-Generated Content?

This lawsuit raises critical questions about how AI companies can ethically use real people’s names and identities in their products. As AI tools become more sophisticated and commercially valuable, the temptation to leverage the reputations of famous individuals grows. Grammarly’s case illustrates how a company might rationalize using a writer’s name without consent—framing it as mere attribution or stylistic branding—when legally and ethically, it constitutes unauthorized commercial appropriation. The distinction is crucial: AI companies can describe the function of their tools without attaching specific real people’s names to them.

An important limitation to understand: this lawsuit doesn’t mean companies can never use writers’ names in connection with AI tools. If Grammarly had licensed the Expert Review feature through agreements with the named writers—paying them, crediting them, and obtaining explicit consent—the product would likely be legal. Some AI companies are moving toward this model, securing partnerships and agreements before using anyone’s name commercially. Grammarly’s unilateral approach of using names without consent or compensation, however, is precisely the kind of appropriation that publicity rights laws prohibit. This case will likely establish precedent that AI companies must obtain consent before using individuals’ identities in commercial products, fundamentally changing how the industry approaches branding and feature development.

What Could Affected Writers Receive if the Class Action Succeeds?

If the lawsuit is successful, affected writers and class members could receive compensation for the unauthorized use of their names and identities. In privacy and publicity rights cases, damages typically include actual monetary losses (such as licensing fees the writers could have charged had they been asked), punitive damages (to punish the company’s wrongful conduct), and attorney’s fees. Additionally, courts can issue injunctions requiring the company to take specific actions to prevent future violations, such as implementing consent procedures before using anyone’s name in commercial products.

The class action structure means the eventual class could extend well beyond the 40 to 50 individuals already known to have complained, since it could potentially include every writer featured in the Expert Review feature from August 2025 through March 2026. Class action settlements often distribute per-person compensation amounts among all class members, though individuals who suffered specific harm can sometimes opt out and pursue larger damages through individual claims. The process will likely involve a settlement period in which Grammarly and the plaintiffs’ legal team negotiate a resolution, potentially without a full trial.

What Does This Mean for the AI Industry and Consumer Protections?

The Grammarly lawsuit is part of a broader reckoning in the AI industry over unauthorized use of copyrighted material, personal data, and identities in training and commercial products. Writers and creators have increasingly challenged AI companies that use their work without permission or compensation, arguing that their intellectual property and publicity rights are being violated. This case specifically targets misuse of identity and publicity rights—distinct from copyright claims—but it signals that regulators, courts, and society are becoming less tolerant of AI companies operating under an assumption of forgiveness-over-permission.

Looking forward, this lawsuit will likely accelerate industry adoption of consent-based practices for using real people’s names and identities in AI products. Companies that previously operated in gray areas—using names without explicit agreements—will face increased legal risk and reputational pressure to secure proper licensing and consent. For consumers, this case illustrates the importance of understanding what data and personal information companies are using and how they’re deploying it, particularly when a product’s appeal is built on associations with real, named individuals.

Conclusion

Julia Angwin’s class action lawsuit against Grammarly represents a significant legal challenge to the AI industry’s practices around using writers’ and public figures’ names without consent. The lawsuit alleges that Grammarly’s “Expert Review” feature violated privacy and publicity rights laws by commercializing the identities of dozens of journalists, authors, and editors—including Stephen King—through an AI product that simulated their editorial expertise without permission or compensation. Grammarly’s decision to disable the feature within two days of the lawsuit being filed demonstrates the company recognized the vulnerability of its position and the strength of the plaintiffs’ legal claims.

If you were featured in Grammarly’s Expert Review feature without your knowledge or consent, you may be eligible to join this class action lawsuit and potentially recover compensation. Affected writers should monitor the case’s progress and consider reaching out to the legal team handling the claim if they have documentation of how the unauthorized use affected them. For Grammarly users and the broader public, this case underscores the importance of demanding transparency from AI companies about how they use real people’s identities and data, and insisting on consent-based practices rather than assuming broad commercial rights to anyone’s name or likeness.
