Grammarly, the popular writing assistant, is facing a class-action lawsuit after rolling out a controversial feature that simulated editorial feedback using the names and voices of real writers, critics, and experts without their consent. The feature, dubbed “Expert Review,” allowed paying subscribers to receive critiques supposedly from figures like Stephen King, Carl Sagan, and tech journalist Kara Swisher.
The Core of the Dispute: Unauthorized Use of Likenesses
The lawsuit, filed by journalist Julia Angwin, argues that Grammarly’s parent company, Superhuman, violated the privacy and publicity rights of the individuals it impersonated. Angwin, who has spent years investigating tech companies’ privacy practices, said she was “distressed to discover that a tech company is selling an imposter version of my hard-earned expertise.” Because the suit is structured as a class action, other affected writers can join it.
The Feature’s Flaws: Generic Feedback and Questionable Value
The “Expert Review” feature — which cost users $144 per year — was widely criticized for delivering uninspired, generic feedback. Tech newsletter founder Casey Newton tested the feature by submitting one of his own articles; he received “advice” from an AI simulation of Kara Swisher that read: “Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?” Newton shared the exchange with the real Kara Swisher, who responded with a blunt threat to Grammarly.
Grammarly’s Response and Backlash
Following the uproar, Grammarly disabled the “Expert Review” feature. Superhuman CEO Shishir Mehrotra issued an apology while simultaneously defending the feature’s underlying concept, suggesting it could allow experts to “build that same ubiquitous bond with users” as Grammarly itself.
The case highlights a growing tension between AI-driven personalization and the rights of the individuals whose likenesses are exploited in the process. It remains to be seen how the courts will rule, but the incident has already sparked a wider debate about consent, ownership, and the ethical boundaries of AI-driven technologies — questions that will only become more urgent as these tools grow more sophisticated.
