NO FAKES 2025: Another Bill Sacrificing Authors’ Free Expression for Industry Control

Over the last two years, concerns about deepfakes have grown steadily. Recently, the first federal legislation addressing these concerns, the TAKE IT DOWN Act, was signed into law, and the push for more federal legislation shows no sign of stopping there. With broad industry support, the NO FAKES Act has made a comeback. Unfortunately, the new version fails to meaningfully address the key concerns raised by the creative and innovation community since the Act was first introduced last year. On July 8, a group of civil society organizations submitted a public letter cautioning against the threat the NO FAKES Act poses to “non-commercial, First Amendment-protected speech” and fair use; we agree with the message in the letter. In this blog post, we explain why the NO FAKES Act is problematic for authors.

NO FAKES 2025 is framed as a response to potential harms posed by AI technologies—specifically, it purports to ban the unauthorized use of an individual’s voice or visual likeness. Such a proposition may sound appealing at first, at least until you read the actual bill language and realize its significant ramifications for free expression and public discourse.

Introduced by Senators Coons, Blackburn, Klobuchar, and Tillis, and endorsed by entertainment industry unions and dominant tech companies such as YouTube and OpenAI, the NO FAKES Act establishes a new federal right—the “digital replication right”—that functions in many ways like a sui generis intellectual property regime. It grants private parties extensive control over digital simulations of their identity, and it does so using legal concepts that are often loosely defined, leaving wide interpretive gaps. The Act incorporates vague standards and obligations yet lacks dependable procedural safeguards, raising serious concerns for writers, artists, documentary filmmakers, and other creators—especially those working in online and remix communities.

The central feature of the Act is the creation of a “digital replication right” allowing individuals to control and license the use of their digital replicas (§2(b)(1)). A digital replica is defined expansively (§2(a)(2)) as a “highly realistic electronic representation that is readily identifiable” as an individual, even if the person does not actually appear in the depicted content. The focus is on whether a simulated voice or image resembles someone well enough to be “identifiable,” elevating claims of resemblance over the context and meaning of the underlying expression. Because human voices and appearances are both fluid and widely shared across individuals, the law’s expansive and undefined scope makes it unclear who—or what—counts as a protected likeness, rendering the subject matter of the right both overbroad and indeterminate. This creates legal uncertainty for anyone working with images or voices that resemble a human’s, and it is especially risky for authors who use an image or voice similar to that of a public figure, whether in fiction or commentary, even when there is no intent to deceive or profit.

This broad reach is worrisome because, although the proposed bill claims to have exceptions built in for creators, the language in fact gives endless discretion to platforms and other decision makers. For example, the most important exception for authors is likely the one allowing commentary and scholarship: “the applicable digital replica is produced or used consistent with the public interest in bona fide commentary, criticism, scholarship, satire, or parody.” (As an aside: other exceptions are narrower than this one purports to be, yet somehow manage to be equally vague. Take the exception for documentary use: “the applicable digital replica is a representation of the applicable individual as the individual in a documentary or in a historical or biographical manner, including some degree of fictionalization, unless (1) the production or use of that digital replica creates the false impression that the work is an authentic sound recording, image, transmission, or audiovisual work in which the individual participated.” Identifying the undefined, vague terms in that sentence alone could serve as a new party game!)

Who gets to determine what is aligned with the “public interest” or what is “bona fide”? Is making fun of one’s country’s supreme leader a “bona fide” use that is “consistent with the public interest”? Many totalitarian regimes around the world would disagree, and we can only hope the US never joins their ranks. As the bill stands now, selective enforcement and over-enforcement by decision makers will be easy. Enforcement decisions are inevitably outsourced to companies like YouTube, which have a strong track record of making uninformed takedown decisions. There is no requirement for a court order or proof—a formal notice from a rights holder or their representative suffices to kickstart the private censorship machine. Worse still, works may be taken down automatically based on resemblance, while authors receive no notice of the takedown and have no way to appeal a platform’s decision. Unlike the DMCA, NO FAKES has no built-in counter-notice process. To restore their content, authors would need to go to court—and if they fail to do so within 14 days of realizing their work has been wrongfully removed, there is no formal way to challenge the takedown, no matter how false or deceitful the takedown request or the removal decision may be!

Under the NO FAKES Act, online creators will inevitably be vulnerable to dubious takedown threats, lawsuits, or subpoena-based unmasking. As we have seen only too often, for creators without big legal teams, even the risk of a claim may be enough to silence a project. Ambiguity, not just restriction, becomes a tool for suppression. (See how platforms have censored speech under the guise of copyright to get a sense of how an even more sweeping law would empower unregulated censorship.) It is only natural for a massive platform like YouTube to support this bill, because it forces emerging platforms to adopt similar takedown processes and implement a repeat infringer policy—requirements that are bad for creators and prohibitively costly for new platforms. Yet platforms will have no choice but to err on the side of removal to avoid penalties of up to $750,000 per work (§2(e)(4)).

For the average person, it is also problematic that the bill frames this digital replica right not as a privacy or reputational interest but as a “property right” (§2(b)(2)(A)(i)). The right is transferable upon death and renewable in 5-year increments (§2(b)(2)(A)(iv)) for up to 70 years post mortem. Rightsholders need only show that the likeness was “publicly used” during the prior period and file a renewal notice with the Copyright Office (§2(b)(2)(D)). This is clearly drafted with public figures and Hollywood celebrities in mind—it is not the kind of digital replica protection the average citizen wants or needs. It effectively creates long-term commercial control over public figures’ voices and images—whether by family members, estates, or big businesses. The proposed bill language also permits music labels and exclusive licensees to sue on behalf of deceased artists (§2(e)(1)(C)). For creators making documentary films, working on cultural history projects, or even authoring speculative fiction, this threatens a future in which depictions of public figures—living or dead—must be licensed from estates or risk litigation.

This inevitably reinforces the dominance of those already well-positioned to manage rights portfolios—platforms, publishers, studios, and estates—while limiting participation by less-resourced individuals and communities. The big tech companies supporting this bill (such as YouTube and OpenAI) will further entrench their dominant market positions by discouraging new market entrants, and the media companies supporting it (such as Warner, Disney, Sony, and Universal) stand to benefit financially. But for independent filmmakers, YouTubers, podcasters, and digital artists, the transaction costs and legal ambiguity may be enough to deter entire categories of storytelling. The bill represents yet another step away from a participatory, open cultural landscape—toward one in which expression is tightly managed by rightsholders with the resources to navigate or weaponize the system.

What makes the NO FAKES Act especially troubling is its rhetorical posture: it presents itself as a law that protects ordinary individuals from privacy violations and deepfake abuse, yet in practice it establishes a burdensome enforcement regime that functions less as protection for individuals than as a tool for entrenching existing industry market power, at the expense of free expression.

A truly “bona fide” approach that is “consistent with the public interest” would instead focus narrowly on actual deception, impersonation, and harm, rather than resemblance alone, and would include clear statutory limits, meaningful procedural safeguards, and protections for transformative, critical, and non-commercial speech.



