Grammarly, now rebranded as Superhuman after its 2025 acquisition of the email productivity app of that name, is facing serious backlash over a feature that creates AI-generated editorial feedback under real people's names without their knowledge or consent.

The feature, called "Expert Review," launched quietly in August 2025. It presents AI-written suggestions as if they came from "leading professionals, authors, and subject-matter experts." The actual experts listed? They were never asked.

Who Got Cloned

Wired first exposed the practice, finding that Grammarly's AI clone list includes Stephen King, Neil deGrasse Tyson, and Carl Sagan, who has been dead since 1996. Tech journalists weren't spared either: Verge editor Nilay Patel, Platformer's Casey Newton, Bloomberg's Mark Gurman, Wired's Lauren Goode, and dozens of others discovered their names attached to AI outputs they had no role in producing.

A fine-print disclaimer buries the key admission: the feature doesn't actually involve any of those people.

'Sloppelganger'

The internet responded with a new word. Bluesky user @lifewinning.com coined sloppelganger: an AI doppelganger of a real person, made without permission and optimized for plausibility over accuracy. The term stuck fast.

Grammarly's Response

Rather than pulling the feature, Grammarly offered an opt-out email address, expertoptout@superhuman.com. No apology, no opt-in process. Critics noted the obvious flaw: most people have no idea their names are being used in the first place, so they would never know to opt out.

Grammarly has 40 million daily active users and charges $144 per year, reselling tokens from OpenAI and other LLM providers at a markup. The business logic of attaching credible human names to that AI output is clear. Whether it's legal, or ethical, is less so.