Lawsuit Claims Grammarly Used Writers' Work and Identities to Train AI Without Consent

Grammarly is facing a class action lawsuit alleging that the company used the names, identities, and work of real writers—including Stephen King, Neil deGrasse Tyson, and New York Times reporter Kashmir Hill—to train its artificial intelligence without their consent or knowledge. Journalist Julia Angwin filed the federal lawsuit in Manhattan on March 12, 2026, against Superhuman Platform, Inc. (Grammarly’s parent company), claiming that Grammarly’s “Expert Review” feature, launched in August 2025, attributed AI-generated writing feedback to real experts who never agreed to have their names or likenesses used. The lawsuit invokes New York and California right-of-publicity laws, which protect individuals from having their names and identities commercialized without permission.

The case centers on a feature Grammarly marketed to customers willing to pay $12 per month for purportedly expert writing feedback. Instead of genuine reviews from the named experts, users were receiving AI-generated suggestions falsely attributed to well-known authors, scientists, journalists, and public figures. In response to the legal challenge and public outcry, Grammarly CEO Shishir Mehrotra announced on March 11, 2026, that the company would disable the Expert Review feature.

What Happened When Grammarly Launched the “Expert Review” Feature?

Grammarly introduced “Expert Review” in August 2025 as a premium service claiming to provide writing feedback from real industry experts. For $12 monthly, users could submit their work and receive comments ostensibly from well-known writers, journalists, scientists, and thought leaders. In reality, the reviews were generated by artificial intelligence and falsely attributed to these real people without their knowledge or consent.

The feature essentially allowed Grammarly to use the reputations and names of celebrities and accomplished professionals to market an AI product, implying that these individuals had endorsed or were directly involved in providing the service. The named experts affected included author Stephen King, astrophysicist Neil deGrasse Tyson, Kashmir Hill (a technology reporter for The New York Times who writes about privacy and AI), and Julie Brill (a former Federal Trade Commission commissioner who specializes in privacy matters). None of these individuals granted permission for their names, likenesses, or professional identities to be associated with the Expert Review feature. The lawsuit characterizes this as a straightforward misappropriation of identity for commercial purposes—using famous names to drive subscription revenue for a premium service without those people’s knowledge or agreement.

What Legal Grounds Does the Lawsuit Invoke?

The lawsuit filed by Julia Angwin and her legal team at PRF Law invokes right-of-publicity statutes in New York and California, which are among the strongest privacy protection laws in the country. These laws prohibit the use of a person’s name, likeness, or identity for commercial purposes without consent. In essence, a company cannot use someone’s reputation to sell products or services without that person’s permission—regardless of whether the use is directly for advertising or whether it’s embedded in a product feature.

Grammarly’s situation presents a clear violation of these principles: the company derived commercial benefit from using real experts’ identities to market and sell a premium service, without ever obtaining permission. The lawsuit argues that even though Grammarly used AI to generate the actual feedback (rather than truly contacting the experts), the misappropriation of their names and professional standing for commercial gain remains illegal. This is particularly significant because it suggests that AI-generated content does not exempt companies from identity and publicity rights—a critical distinction as AI becomes more prevalent in business models.

[Chart: Timeline of Grammarly Expert Review lawsuit events, from the feature’s August 2025 launch through its disabling in March 2026. Source: public statements from Grammarly’s CEO and lawsuit filing documents.]

Who Were the Experts Named Without Consent, and Why Does It Matter?

The affected individuals include some of the most recognizable names in their respective fields. Stephen King is one of the most successful and celebrated authors in American history, known for horror and suspense fiction. Neil deGrasse Tyson is the renowned director of the Hayden Planetarium and one of the country’s most prominent science communicators. Kashmir Hill earned recognition for her investigative reporting on privacy issues and technology companies, covering topics directly relevant to this very lawsuit. Julie Brill served as an FTC commissioner and is known for her work advocating consumer privacy protection.

These weren’t random names plucked from a database—they were carefully selected because their reputations carry weight and credibility. A writer considering Expert Review might feel reassured submitting their work to someone like Stephen King for feedback, or a tech-oriented person might value suggestions attributed to Neil deGrasse Tyson. Grammarly essentially traded on the professional standing and trust these individuals have built over decades. The fact that these are people whose life work involves creative writing, scientific communication, and privacy advocacy makes the misappropriation particularly egregious. They built their reputations on honesty and expertise; Grammarly used those reputations to sell an AI product they had nothing to do with.

How Did Grammarly’s Training and AI Development Fit Into the Complaint?

The lawsuit alleges that beyond the Expert Review feature itself, Grammarly may have used writers’ work and data to train its underlying AI systems. Training large language models requires feeding them vast amounts of text data, and while some of this comes from public sources, using a writer’s published work or personal writing samples to improve an AI tool—especially without permission—raises serious questions about intellectual property and consent. The complaint suggests that Grammarly collected and utilized writers’ work not just for the Expert Review feature but as part of its broader AI model development.

This is particularly relevant for professional writers who may have uploaded their work to Grammarly for editing purposes, never imagining it would be used to train AI systems. Unlike hiring an editor or using a traditional grammar tool, writers had no clear notification that their work might become part of a training dataset. The distinction matters significantly: paying for a tool to improve your writing is very different from having that tool incorporate your work into AI that could eventually compete with you or replace you. The lawsuit’s implications extend beyond just the Expert Review feature to question how Grammarly collects, stores, and uses user-provided content.

What Were the Warning Signs That This Feature Was Problematic?

The Expert Review feature represented a significant red flag in how companies build and market AI products. For months, Grammarly advertised and sold this premium feature to customers who believed they were getting real expert feedback. The company continued taking payments and enrolling new subscribers despite the fact that the underlying premise—that real experts were providing the reviews—was false. This pattern of continuing to market and profit from a misrepresentation, even when the company likely knew it would face scrutiny, suggests a calculated decision to monetize the feature before consequences caught up.

Another concerning aspect: Grammarly’s business model appears designed to exploit the credibility gap between AI-generated content and human expertise. By attaching famous names to AI output, the company attempted to charge a premium price ($12/month) and differentiate its product in a crowded market. However, if a user is paying for expert feedback, they deserve actual expert feedback, not AI passing itself off as expert guidance. The warning here extends beyond Grammarly: as companies embed AI into more products, consumers should be deeply skeptical of claims of human authority or expertise, whether conveyed through attributed names, professional titles, or implied endorsements.

When Did Grammarly Disable the Feature, and Why?

CEO Shishir Mehrotra announced on March 11, 2026—just one day before the lawsuit was filed—that Grammarly would disable the Expert Review feature due to “recent feedback and scrutiny.” The timing of this announcement, combined with Julia Angwin’s lawsuit filing on March 12, 2026, suggests that Grammarly was aware of the legal vulnerability before the case was officially filed. By disabling the feature quickly, Grammarly attempted to limit ongoing harm and potentially reduce its liability exposure. However, disabling a feature after profiting from it for months does not erase the prior violations of right-of-publicity laws or necessarily shield the company from damages owed to affected users.

Grammarly’s quick response also did not prevent the class action lawsuit from moving forward, which indicates that the legal team at PRF Law believes the company’s exposure is substantial. The suit seeks compensation not just for the unauthorized use of the experts’ names but also on behalf of all customers who purchased Expert Review under false pretenses. For users who paid for feedback they believed was coming from real experts, the discovery that they received AI-generated content attributed to famous names without those individuals’ consent could constitute fraud or breach of warranty.

What Does This Case Mean for the Future of AI and Personal Identity?

The Grammarly lawsuit represents a landmark moment in establishing that companies cannot use artificial intelligence as a shield against liability for misappropriating personal identity. As more companies develop AI products, they will be tempted to attach real people’s names, faces, or professional reputations to AI systems—whether for credibility, marketing, or brand recognition. This case signals that such practices violate existing law and expose companies to significant legal risk.

The application of right-of-publicity statutes to AI-driven identity misuse sets a precedent that will likely influence how other companies approach AI products. Going forward, expect to see increased scrutiny of AI features that claim to emulate specific experts, celebrities, or public figures. Companies will need to obtain explicit consent from any real person whose name, likeness, or professional identity is associated with AI-generated content. This case also underscores the importance of transparency in AI: if a company is using AI to generate content, users deserve to know that clearly and upfront, rather than discovering it after the fact or being misled by false attribution.
