Will the Grammarly Lawsuit Show Us Yet Another Area Where Existing Law is Enough? (We Think So)


In early March, investigative journalist Julia Angwin filed a class action lawsuit against Grammarly’s parent company, Superhuman Platform Inc., over an ill-considered feature of its writing service: an “AI tool” called Expert Review “that enabled Grammarly users to receive feedback on their writing from well-known journalists like Ms. Angwin, and even famous authors like Stephen King.” (Complaint at 1). The filing of the suit drew a flurry of attention. Now that the dust has settled, we offer this blog post as a deeper dive into Angwin’s claims, some context for how they relate to historical claims regarding reputation and rights of publicity, and how broader efforts to create new laws (such as the NO FAKES Act) factor in.

In the complaint, Angwin takes aim at “Grammarly’s misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors.” In a nutshell, Grammarly offered a service that provided writing advice and associated that advice with the names of high-profile writers, including Angwin, Stephen King, Neil deGrasse Tyson, Casey Newton, and many others, despite no relationship existing between these writers and Grammarly. Most, if not all, of the writers were initially completely unaware that the feature existed.

Casey Newton’s essay, “Grammarly turned me into an AI editor against my will and I hate it,” does an excellent job of conveying what it was like to use the (now discontinued) Expert Review tool:

“After clicking the button, a box opened up to explain to me what was happening. A series of expert names appeared before me — each seemingly less likely to have ever agreed to this than the one before it. 

There, hovering near the top of the draft, was John Carreyrou, the investigative journalist and bestselling author who took down Theranos. I’d pay good money for advice from the real Carreyrou, whose dogged pursuit of the truth behind Elizabeth Holmes’ company in the face of overwhelming legal threats is the stuff of legend. Alas, the fake Carreyrou conjured by Grammarly offered only the most anodyne of advice.”

A few days after the complaint was filed, the New York Times published an essay by Angwin, Why I’m Suing Grammarly (paywall), in which she observed the tool’s propensity for offering advice untethered from the writer associated with it: 

“Pop a piece of prose into its service and little editing bubbles would emerge on the page from “Julia Angwin,” suggesting things like “Lead with personal stakes to boost immediacy.” That sentence about Meta was something Grammarly apparently thought I would suggest.”

In the essay, Angwin explains the significant harm she felt. Not only was Grammarly profiting from the use of her name, but the bland and sometimes terrible advice it was attributing to her risked causing her long-term reputational harm.

“Replacing a factual sentence with an imagined story about a person who doesn’t exist is not only bad editing. It’s a deception that could end my career as a journalist (or the career of any journalist who took that terrible advice).”

With these harms in mind, Angwin sued.  

Something can be done, right? 

The injury that Angwin and others have experienced here is not new. Laws addressing this type of harm have been on the books for over a century. Roberson v. Rochester Folding Box Co. (1902) offers an early example: Abigail Roberson sued Franklin Mills and the Rochester Folding Box Company after they used her portrait on advertising posters without her consent. Although Roberson ultimately lost her case, it and others like it spurred the enactment of a range of laws meant to protect people from having their likenesses used, without consent, for commercial purposes.

In Angwin’s complaint, there are four claims for relief, all grounded in well-established law: (1) California’s Common Law Right of Publicity/Misappropriation of Likeness; (2) California Civil Code § 3344; (3) New York Civil Rights Law §§ 50 & 51; and (4) Unjust Enrichment (“A plaintiff has a claim for unjust enrichment when the defendant was enriched at the plaintiff’s expense, and it is against equity and good conscience to permit the defendant to retain what is sought to be recovered.”).

Drilling down on the first of these claims, California’s common law right of publicity requires four elements: (1) the defendant’s use of the plaintiff’s identity; (2) appropriation to the defendant’s advantage, commercially or otherwise; (3) lack of consent; and (4) resulting injury.

Angwin seems to have a good chance of success here: (1) while the common law protection of “identity” is already fairly broad (in one instance, White v. Samsung Electronics, a robot dressed to look like Vanna White was enough), Grammarly used Angwin’s actual name, biographical details, and professional reputation; (2) Expert Review was a feature of Grammarly’s paid subscription product, generating revenue for a company earning $700 million annually with 40 million daily users, a thoroughly commercial service that traded heavily on the expertise of authors like Angwin; (3) most, if not all, of the authors did not consent to being used for this feature; and (4) the injury could be the harm of having one’s reputation attached to AI-generated content of questionable quality and/or the economic harm of having one’s endorsement value exploited without compensation.

Ultimately, this lawsuit is about a fairly clear harm, where the defendant probably lacks the First Amendment protections and other defenses that can arise in cases like these. And Grammarly appears to have a sense of how poorly its Expert Review feature was executed – it quickly discontinued the feature and issued an apology (“After careful consideration, we have decided to disable Expert Review while we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented — or not represented at all.”).

Where do we go from here? 

To start, we’re glad that Julia Angwin and similarly situated authors are highly likely to have viable legal remedies for the harms that occurred here. But if you read Angwin’s essay in the New York Times, it is clear that she wants the law to be an even more formidable sword and shield than it already is. After acknowledging her existing legal recourse, she makes a series of moves – she calls for a federal right of publicity and she nods to Denmark’s efforts to use copyright law to protect against deepfakes. (“I’d be happy to copyright myself — as copyright seems to be the only law that is regularly enforced on the internet these days.”) 

Here, we think it would be wise to tread carefully. Developing a clear-eyed view of the current landscape would be a valuable first step, and we’d recommend Professor Jennifer Rothman’s Reframing Deepfakes for a better understanding of the law and some of the possibilities. Rothman acknowledges that some categories of “deepfake” could serve socially beneficial purposes (“The same technology can help those who have lost or are losing the ability to speak to communicate or allow people to use their own voice to speak in foreign languages.”). Where new interventions are necessary, Rothman urges that they not exacerbate the confusion caused by overlapping laws already in existence (“Ideally, any new federal laws would try to thin out the ‘identity thicket’ rather than worsen it.”).

In our rush to correct one harm, we often overcorrect and produce new ones. Rothman’s analysis is helpful here too: her Reintroduced No FAKES Act Still Needs Revision notes provisions that would extend rights of publicity beyond death in potentially problematic ways (“As currently drafted, the proposed postmortem right would incentivize and, in some instances, force the commercialization of the dead against their wishes and their families’ desires.”). We at Authors Alliance have also voiced concerns about NO FAKES, as have many other organizations, such as EFF and Public Knowledge (The NO FAKES Act Has Changed – and It’s So Much Worse; Letter on Concerns with NO FAKES).

The Angwin lawsuit has a long way to go. But on the claims made, it seems to us a good illustration of how harms accelerated by new technology are not always so different in kind as to need entirely new legal structures to address them. Here an author suffered real and identifiable harm, and she has clear legal avenues to seek redress. The existing state right-of-publicity and privacy laws, imperfect though they may be, currently appear more than adequate to the task. This should give us some greater confidence as we seek to navigate a rapidly changing technology.  

Perhaps it will also help us practice some patience. The impulse to legislate in response to AI-related harms and perceived harms is understandable, but the risk of overcorrection is one we need to guard against. Broad new laws, whether they take the form of expansive publicity rights, fair use carve-outs, or blanket training-data restrictions, can create collateral damage that may be far harder to undo than the original harm. Federal law preempting state law, while offering uniformity, may risk privileging the interests of well-connected stakeholders, a tradeoff that could end up underserving those who currently benefit from existing state laws shaped over the last century. Without careful process, these possibilities may ultimately disproportionately affect the authors and researchers that such laws are purportedly designed to protect.

Our position continues to be that we should not need to choose between protecting authors and preserving the public’s interest in the beneficial uses of AI. But getting that balance right requires resisting the urge to reach for, or create, the biggest available hammer when a more precise tool, or at least an imperfect but sufficient one, is already at hand. We hope that this Grammarly case will ultimately serve as a reminder and an example that sometimes the law we need is the one we already have.

[A final note/plug here: The Authors Alliance guide to Writing About Real People is a comprehensive legal resource designed to help authors, particularly those writing non-fiction, memoirs, and biographies, navigate legal issues related to portraying real individuals. It is also useful for understanding the issues present in this case.] 

