Category Archives: Law and Policy

Coalition Letter to Congress on Copyright and AI

Posted September 11, 2023

Photo by Chris Grafton on Unsplash

Earlier today Authors Alliance joined a broad coalition of public interest organizations, creators, academics, and others in a letter to members of Congress urging caution when considering proposals to revamp copyright law to address concerns about artificial intelligence. As we explained previously, we believe that copyright law already has the tools it needs to both protect creators and encourage innovation.

As the letter states, the signatories share a common interest in ensuring that artificial intelligence meets its potential to enrich the American economy, empower creatives, accelerate the progress of science and useful arts, and expand humanity’s overall welfare. Many creators are already using AI to conduct innovative new research, address long-standing questions, and produce new creative works. Some, such as these artists, have used this technology for many years.

So our message is simple: existing copyright doctrine has evolved and adapted to accommodate many revolutionary technologies, and is well equipped to address the legitimate concerns of creators. Our courts are the proper forum to apply those doctrines to the myriad fact patterns that AI will present over the coming years and decades. 

You can read the full letter here. 

Current copyright law isn’t perfect, and we certainly believe creativity and innovation would benefit from some changes. However, we should be careful about reactionary, alarmist politics. It seldom makes for good law. Unfortunately, that’s what we’re seeing right now with AI, and we hope that Congress has the wisdom to see through it. 

We encourage our members to reach out to their Congressional representatives to express the need to tread carefully and, if applicable, to explain how they are using AI in their work. We’d also be very happy to hear from you as we develop our own further policy communications to Congress and to agencies such as the U.S. Copyright Office. 

Authors Alliance and Allies Petition to Renew and Expand Text Data Mining Exemption

Posted September 6, 2023
Photo by Alina Grubnyak on Unsplash

Authors Alliance is pleased to announce that in recent weeks, we have submitted petitions to the Copyright Office requesting that it recommend renewing and expanding the existing text and data mining (TDM) exemptions to DMCA liability. The proposed expansion would make the current legal carve-out more flexible, allowing researchers to share their corpora of works with other researchers who want to conduct their own TDM research. On each of these petitions, we were joined by two co-petitioners, the American Association of University Professors and the Library Copyright Alliance. These were short filings—requesting changes and providing brief explanations—and will be the first of many in our efforts to obtain a renewal and expansion of the existing TDM exemptions. 


The Digital Millennium Copyright Act (DMCA) includes a provision that forbids people from bypassing technical protection measures on copyrighted works. But it also implements a triennial rulemaking process whereby organizations and individuals can petition for temporary exemptions to this rule. The Office recommends an exemption when its proponents show that they, or those they represent, are “adversely affected in their ability to make noninfringing [fair] uses due to the prohibition on circumventing access controls.” Every three years, petitioners must ask the Office to renew existing exemptions in order for them to continue to apply. Petitioners can also ask the Office to recommend expanding an existing exemption, which requires the same filings and procedure as petitioning for a new exemption. 

Back in 2020, during the eighth of these triennial rulemakings, Authors Alliance—along with the Library Copyright Alliance and the American Association of University Professors—petitioned the Copyright Office to create an exemption to DMCA liability that would enable researchers to conduct text and data mining. Text and data mining is a fair use, and the DMCA prohibitions on bypassing DRM and similar technical protection measures made it difficult or even impossible for researchers to conduct text and data mining on in-copyright e-books and films. After a long process which included filing a comment in support of the exemption and an ex parte meeting with the Copyright Office, the Office ultimately recommended that the Librarian of Congress grant our proposed exemption (which she did). The Office also recommended that the exemption be split into two parts, with one exemption addressing literary works distributed electronically, and the other addressing films. 
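For readers unfamiliar with the technique, text and data mining extracts non-expressive information—patterns, frequencies, metadata—from a corpus of works rather than reproducing their expressive content. A toy sketch of the idea (the two-“book” corpus here is invented purely for illustration and bears no relation to any real research project):

```python
from collections import Counter
import re

def word_frequencies(corpus):
    """Tally word occurrences across a corpus of texts: a minimal
    example of extracting non-expressive data (counts) from works."""
    counts = Counter()
    for text in corpus:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

# Invented miniature corpus, for illustration only.
corpus = [
    "The whale swam. The sea was calm.",
    "Call me Ishmael. The whale rose.",
]
freqs = word_frequencies(corpus)
print(freqs.most_common(2))  # → [('the', 3), ('whale', 2)]
```

The output of such an analysis—word counts—contains none of the original expression, which is the intuition behind treating these uses as non-expressive, and it is why researchers need to assemble (and, as discussed below, share) corpora of the underlying works in the first place.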

While the ninth triennial rulemaking does not technically happen until 2024, petitions for renewals, expansions, and new exemptions have already been filed. 

Our Petitions

Back in early July, we made our first filings with the Copyright Office in the form of renewal petitions for both exemptions. For this step, proponents of current exemptions simply ask the Copyright Office to renew them for another three-year cycle, accompanied by a short explanation of whether and how the exemption is being used and a statement that neither law nor technology has changed such that the exemption is no longer warranted. Other parties are then given an opportunity to respond to or oppose renewal petitions. The Office recommends that exemption proponents who want to expand a current exemption also petition for its renewal—which is just what we did. In our renewal petitions, we explained how researchers are using the exemptions and how neither recent case law nor the continued availability of licensed TDM databases represents a change in the law or technology, making renewal of the TDM exemptions proper and justified. Renewal petitions follow a streamlined process and are generally granted unless the Office finds “meaningful opposition” articulating a change in the law or facts. You can find our renewal petition for the literary works TDM exemption here, and our renewal petition for the film TDM exemption here.

But we also sought to expand the current exemptions, in two petitions submitted a few weeks back. In our expansion petitions, we proposed a simple change to the current DMCA exemptions for text data mining. In their current form, the exemptions allow academic researchers to bypass technical protection measures to assemble a corpus on which to conduct TDM research, but researchers can only share that corpus with other researchers for purposes of “collaboration and verification.” We asked the Office to permit researchers to share their corpora with other researchers who want to conduct their own TDM research but are not direct collaborators. This second group of researchers would still have to comply with the exemption’s other requirements, such as its security measures. Essentially, we seek to expand the sharing provision of the current exemption while leaving the other provisions intact. This change is largely based on feedback we have received from those using the exemption and our understanding of how the regulation can be improved so that their desired noninfringing uses are no longer adversely affected by this limitation. You can find our expansion petition for the literary works TDM exemption here, and our expansion petition for the film TDM exemption here.

What’s Next?

The next step in the triennial rulemaking process is the Copyright Office issuing a notice of proposed rulemaking, where it will lay out its plan of action. While we do not have a set timeline for the notice of proposed rulemaking, during the last rulemaking cycle, it happened in mid-October—meaning it is reasonable to expect the Office to issue this notice in the next two months or so. Then, there will be several rounds of comments in support of or in opposition to the proposals. Finally, the Office will issue a final recommendation, and the Librarian of Congress will issue a final rule. While the Librarian of Congress is not legally obligated to adopt the Copyright Office’s recommendations, they traditionally do. Based on last year’s cycle, we can expect a final rule to be issued around October 2024. So we are in for a long wait and a lot of work! We will keep our readers updated as the rulemaking moves forward.

Copyright and Generative AI: Our Views Today

Posted August 30, 2023
“Large copyright sign made of jigsaw puzzle pieces” by Horia Varlan is licensed under CC BY 2.0.

Authors Alliance readers will surely have noticed that we have been writing a lot about generative AI and copyright lately. Since the Copyright Office issued its decision letter on copyright registration in a graphic novel that included AI-generated images a few months back, many in the copyright community and beyond have struggled with the open questions around generative AI and copyright.

The Copyright Office has launched an initiative to study generative AI and copyright, and today issued a notice of inquiry to solicit input on the issues involved. The Senate Judiciary Committee has also held multiple hearings on IP rights in AI-generated works, including one last month focused on copyright. And of course there are numerous lawsuits pending over the legality of generative AI, based on theories ranging from copyright infringement to privacy to defamation. It’s also clear that there is little agreement on a one-size-fits-all rule for AI-generated works that applies across industries. 

At Authors Alliance, we care deeply about access to knowledge because it supports free inquiry and learning, and we are enthusiastic about ways that generative AI can meaningfully further those ideals. In addition to all the mundane but important efficiency gains generative AI can assist with, we’ve already seen authors incorporate generative AI into their creative processes to produce new works. We’ve also seen researchers use these tools to help make new discoveries. There are clear concerns, too: generative AI tools can make it easier to engage in fraud and deception, and to perpetuate disinformation. There have been many calls for legal regulation of generative AI technologies in recent months, and we wanted to share our views on the copyright questions generative AI poses, recognizing that this is a still-evolving set of questions.  

Copyright and AI

Copyright is at its core an economic regulation meant to provide incentives for creators to produce and disseminate new expressive works. Ultimately, its goal is to benefit the public by promoting the “progress of science,” as the U.S. Constitution puts it. Because of this, we think new technology should typically be judged by what it accomplishes with respect to those goals, and not by the incidental mechanical or technological means that it uses to achieve its ends. 

Within that context, we see generative AI as raising three separate and distinct legal questions. The first and perhaps most contentious is whether fair use should permit use of copyrighted works as training data for generative AI models. The second is how to treat generative AI outputs that are substantially similar to existing copyrighted works used as inputs for training data—in other words, how to navigate claims that generative AI outputs infringe copyright in existing works. The third question is whether copyright protection should apply to new outputs created by generative AI systems. It is important to consider these questions separately, and avoid the temptation to collapse them into a single inquiry, as different copyright principles are involved. In our view, existing law and precedent give us good answers to all three questions, though we know those answers may be unpalatable to different segments of a variety of content industries. 

Training Data and Fair Use

The first area of difficulty concerns the input stage of generative AI. Is the use of training data which includes copyrighted works a fair use, or does it infringe on a copyright owner’s exclusive rights in her work? The generative AI models used by companies like OpenAI and Stability AI (maker of Stable Diffusion) are based on massive sets of training data. Much of the controversy around intellectual property and generative AI concerns the fact that these companies often do not seek permission from rights holders before training their models on works controlled by these rights holders (although some companies, like Adobe, are building generative AI models based on their own stock images, openly licensed images, and public domain content). Furthermore, due to the size of the data sets and the nature of their collection (often obtained via scraping websites), the companies that deploy these models do not make clear what works make up the training data. This question is controversial and hotly debated in the context of written works, images, and songs. Some creators and creator communities in these areas have made calls for “consent, credit, and compensation” when their works are included in training data. The obstacle to that point of view is that, if the use of training data is a fair use, none of this is required, at least not by copyright.  

We believe that the use of copyrighted works as training data for generative AI tools should generally be considered fair use. We base this view on our reading of numerous fair use precedents, including the Google Books and HathiTrust cases as well as others such as iParadigms. These and other cases support the idea that fair use allows copying for non-expressive uses—copying done as an “intermediate step” in producing non-infringing content, such as by extracting patterns, facts, and data in or about the work. The notion that non-expressive (also called “non-consumptive”) uses do not infringe copyrights is based in large part on a foundational principle in copyright law: copyright protection does not extend to facts or ideas. If it did, copyright law would run the risk of limiting free expression and inhibiting the progress of knowledge rather than furthering it. Using in-copyright works to create a tool or model with a new and different purpose from the works themselves, one that does not compete with those works in any meaningful way, is a prototypical fair use. Like the Google Books project (as well as text data mining), generative AI models use data (like copyrighted works) to produce information about the works they ingest, including abstractions and metadata, rather than replicating expressive text. 

In addition, treating the use of copyrighted works as training data for generative AI as fair use has several practical implications for the public utility of these tools. Without it, AI could be trained only on “safe materials,” like public domain works or materials specifically authorized for such use. Models already apply certain filters—often excluding hateful content or pornography from their training sets. However, a more general exclusion of copyrighted content—virtually all creative content published in the last one hundred years—would tend to amplify bias and the views of an unrepresentative set of creators. 

Generative AI Outputs and Copyright Infringement

The feature that most distinguishes generative AI from technology in copyright cases that preceded it, such as Google Books and HathiTrust, is that generative AI not only ingests copyrighted works to extract data for analysis or search functionality, but also uses that extracted data to produce new content. Can content produced by a generative AI tool infringe on existing copyrights?

Some have argued that the use of training data in this context is not a fair use, and is not truly a “non-expressive use,” because generative AI tools produce new works based on data from originals and because these new works could in theory serve as market competitors for the works they are trained on. While it is a fair point that generative AI is markedly different from those earlier technologies because of these outputs, this argument conflates the question of inputs with the question of outputs. In our view, using copyrighted works as inputs to develop a generative AI tool is generally not infringement, but this does not mean that the tool’s outputs can’t infringe existing copyrights. 

We believe that while using copyrighted works as training data is largely justifiable as fair use, it is entirely possible that certain outputs may cross the line into infringement. In some cases, a generative AI tool can fall into the trap of memorizing inputs such that it produces outputs that are essentially identical to a given input. While evidence to date indicates that memorization is rare, it does exist.

So how should copyright law address outputs that are essentially memorized copies of inputs? We think the law already has the tools it needs to address this. Where fair use does not apply, copyright’s “substantial similarity” doctrine is equipped to handle the question of whether a given output is similar enough to an input to be infringing. The substantial similarity doctrine is appropriately focused on protection of creative expression while also providing room for creative new uses that draw on unprotectable facts or ideas. Substantial similarity is nothing new: it has been a part of copyright infringement analysis for decades, and is used by federal courts across the country. And it may well be that standards, such as a set of  “Best Practices for Copyright Safety for Generative AI” proposed by law professor Matthew Sag, will become an important measure of assessing whether companies offering generative AI have done enough to guard against the risk of their tools producing infringing outputs.

Copyright Protection of AI Outputs

A third major question is, what exactly is the copyright status of the outputs of generative AI programs: are they protected by copyright at all, and if so, who owns those copyrights? Under the Copyright Office’s recent registration guidance, the answer seems to be that there is no copyright protection in the outputs. This does not sit well with some generative AI companies or many creators who rely on generative AI programs in their own creative work. 

We generally agree with the Copyright Office’s recent guidance concerning the copyright status of AI-generated works, and believe that they are unprotected by copyright. This is based on the simple but enduring “human authorship” requirement in copyright law, which dates back to the late 19th century. In order to be protected by copyright, a work must be the product of a human author and contain a modicum of human creativity. Purely mechanical processes that occur without meaningful human creative input cannot generate copyrightable works. The Office has categorized generative AI models as this kind of mechanical tool: the output responds to the human prompt, but the human making the prompt does not have sufficient control over how the model works to make them an “author” of the output for the purposes of copyright law. The U.S. District Court for the District of Columbia recently issued a decision agreeing with this take in Thaler v. Perlmutter, a case that challenged the human authorship requirement in the context of generative AI. 

It’s interesting to note here that in the Copyright Office listening session on text-based works, participants nearly universally agreed that outputs should not be protected by copyright, in line with the Copyright Office’s guidance. The other listening sessions, however, showed a greater diversity of views. In particular, participants in the listening sessions on audiovisual works and sound recordings were concerned about this issue. In industries like music and film, where earlier iterations of generative AI tools have long been popular (or are even industry norms), the prospect of being denied copyright protection in songs or films, simply due to the tools used, can understandably be terrifying for creators who want to profit from their works. On this front, we’re sympathetic. Creators who rely on their copyrights to defend and monetize their works should be permitted to use generative AI as a creative tool without losing that protection. While we believe that the human authorship requirement is sound, it would be helpful to have more clarity on the status of works that incorporate generative AI content. How much additional human creativity is needed to render an AI-generated work a work of human authorship, and how much can a creator use a generative AI tool as part of their creative process without forgoing copyright protection in the work they produce? The Copyright Office seems to be grappling with these questions as well and seeking to provide additional guidance, such as in a recent webinar with more in-depth registration guidance for creators relying on generative AI tools in their creative efforts.

Other Information Policy Issues Affecting Authors

Generative AI has generated questions in other areas of information policy beyond the copyright questions we discuss above. Fraudulent content or disinformation, the harm caused by deep fakes and soundalikes, defamation, and privacy violations are serious problems that ought to be addressed. Those uses do nothing to further learning, and actually pollute public discourse rather than enhance it. They can also cause real monetary and reputational harm to authors. 

In some cases, these issues can be addressed by information policy doctrines outside of copyright, and in others, they can be best handled by regulations or technical standards addressing development and use of generative AI models. A sound application of state laws such as defamation law, right of publicity laws, and various privacy torts could go a long way towards mitigating these harms. Some have proposed that the U.S. implement new legislation to enact a federal right of publicity. This would represent a major change in law and the details of such a proposal would be important. Right now, we are not convinced that this would serve creators better than the existing state laws governing the right of publicity. While it may take some time for courts to figure out how to adapt legal regimes outside of copyright to questions around generative AI, adapting the law to new technologies is nothing new. Other proposals call for regulations like labeling AI-generated content, which could also be reasonable as a tool to combat disinformation and fraudulent content. 

In other cases, creators’ interests could be protected through direct regulation of the development and use of generative AI models. For example, certain creators’ desire for consent, credit, and compensation when their works are included in training data sets for generative AI programs is an issue that could perhaps be addressed through regulation of AI models. As for consent, some have called for an opt-out system where creators could have their works removed from the training data, or the deployment of a “do not train” tag similar to the robots.txt “do not crawl” tag. As we explain above, under the view that training data is generally a fair use, this is not required by copyright law. But the view, which many hold, that using copyrighted training data without some sort of recognition of the original creator is unfair may support arguments for other regulatory or technical approaches that would encourage attribution and pathways for distributing new revenue streams to creators. 
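To make the robots.txt analogy concrete: site owners can already exclude specific crawlers by user agent, and some AI companies have begun honoring such exclusions for their training crawlers (OpenAI announced this for its GPTBot crawler in August 2023). A “do not train” tag would extend the same idea; note that the second directive below is hypothetical and not part of any standard:

```text
# Existing opt-out: block OpenAI's GPTBot crawler site-wide,
# keeping the site's content out of future training data.
User-agent: GPTBot
Disallow: /

# A hypothetical "do not train" directive of the kind proposals
# envision; no such standard directive exists today.
# User-agent: *
# DisallowTraining: /
```

One limitation of this approach is that it only reaches crawlers that choose to honor it, which is part of why some advocates want the signal backed by regulation rather than voluntary compliance.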

Similarly, some have called for collective licensing legislation for copyrighted content used to train generative AI models, potentially as an amendment to the Copyright Act itself. We believe that this would not serve the creators it is designed to protect and we strongly oppose it. In addition to conflicting with the fundamental principles of fair use and copyright policy that have made the U.S. a leader in innovation and creativity, collective licensing at this scale would be logistically infeasible and ripe for abuse, and would tend to enrich established, mostly large rights holders while leaving out newer entrants. Similar efforts several years ago were proposed and rejected in the context of mass digitization based on similar concerns.  

Generative AI and Copyright Going Forward

What is clear is that the copyright framework for AI-generated works is still evolving, and just about everyone can agree on that. Like many individuals and organizations, our views may well shift as we learn more about the real-world impacts of generative AI on creative communities and industries. As these policy discussions move forward and policymakers, advocacy groups, and the public grapple with the open questions involved, the answers will continue to develop. Changes in generative AI technology and the models involved may also influence these conversations. Today, the Copyright Office issued a notice of inquiry on the topic of copyright in AI-generated works. We plan to submit a comment sharing our perspective, and are eager to learn about the diversity of views on this important issue.

Copyright Protection in AI-Generated Works Update: Decision in Thaler v. Perlmutter

Posted August 24, 2023
Photo by Google DeepMind on Unsplash

Last week, the District Court for the District of Columbia announced a decision in Thaler v. Perlmutter, a case challenging copyright’s human authorship requirement in the context of a work produced by a generative AI program. This case is one of many lawsuits surrounding copyright issues in generative AI, and surely will not be the last we hear about the copyrightability of AI-generated works, and how this interacts with copyright’s human authorship requirement. In today’s post, we’ll provide a quick summary of the case and offer our thoughts about what this means for authors and other creators.


Back in 2018 (before the current public debate about copyright and generative AI had reached the fever pitch we see today), Dr. Stephen Thaler applied for copyright registration in a work of visual art produced by a generative AI system he created, called the Creativity Machine. Thaler sought to register his work as a computer-generated “work-made-for-hire,” since he created the machine, which “autonomously” produced the work. After a lot of back and forth, the Copyright Office maintained its denial of the application, explaining that the human authorship requirement in copyright law foreclosed protection for the AI-generated work, since it was not the product of a human’s creativity.

Thaler then sued Shira Perlmutter, the Register of Copyrights, in the D.C. district court, asking the court to decide “whether a work autonomously generated by an AI system is copyrightable.” Judge Beryl A. Howell upheld the Copyright Office’s decision, explaining that under the plain language of the Copyright Act, “an original work of authorship” required that the author be a human, “based on centuries of settled understanding” and a dictionary definition of “author.” She also cited the U.S. Constitution’s IP clause, which similarly mentions “authors and inventors,” and over a century of Supreme Court precedent to support this principle.

Thaler’s attorney has indicated that he will appeal the ruling to the U.S. Court of Appeals for the D.C. Circuit, and it remains to be seen whether that court will affirm. 

Implications for copyright law

The headline takeaway from this ruling is that AI-generated art is not copyrightable because human authorship remains a requirement in copyright law. However, the ruling is actually more nuanced and contains a few subtle points worth highlighting. 

For one, this case tested not just the human authorship requirement but also the application of the work-for-hire doctrine in the context of generative AI. On one view of the issues, if Thaler created a machine capable of creating a work that would be copyrightable were it created by a human, there is a certain appeal in framing the work as one commissioned by Thaler. On this point, the court explained that since there was no copyright in the work in the first instance based on its failure to meet the human authorship requirement, this theory also did not hold water. In other words, a work-made-for-hire requires that the “hired” creator also be a human. 

It’s important to keep in mind that Thaler was in a sense testing the reach of the limited or “thin” copyright that can be granted in compilations of AI-generated work, or in AI-generated work that a human has altered, thus endowing it with at least the modicum of human creativity that copyright requires. Thaler made no changes to the image produced by his Creativity Machine, and in fact described the process to the Copyright Office as fully autonomous rather than responding to a prompt (as is generally the case with generative AI). Thaler was not trying to get a copyright in the work in order to monetize it for his own livelihood, but—presumably—to explore the contours of copyright in computer-generated works. In other words, the case has some philosophical underpinnings (and in fact, Thaler has said in interviews that he believes his AI inventions to be sentient, a view that many of us tend to reject). But for creators using generative AI who seek to register copyrights in order to benefit from copyright protection, things are unlikely to be quite so clear-cut. And while she found the outcome to be fairly clear-cut in this case, Judge Howell observed:

“The increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions regarding how much human input is necessary to qualify the user of an AI system as an ‘author’ of a generated work, the scope of the protection obtained over the resultant image, how to assess the originality of AI-generated works where the systems may have been trained on unknown pre-existing works, how copyright might best be used to incentivize creative works involving AI, and more.”

What does this all mean for authors? 

For authors who want to incorporate AI-generated text or images into their own work, the situation is a bit murkier than it was for Thaler. The case itself provides little in the way of information for human authors who use generative AI tools as part of their own creative processes. But while the Copyright Office’s registration guidance tells creators what they need to do to register their copyrights, this decision provides some insight about what will hold up in court. Courts can and do overturn agency actions in some cases (in this case, the judge could have overturned the Copyright Office’s denial of Thaler’s registration application had she found it to be “arbitrary and capricious”). So the Thaler case in many ways affirms what the Copyright Office has said so far about registrability of AI-generated works, indicating that the Office is on the right track as far as their approach to copyright in AI-generated works, at least for now. 

The Copyright Office has attempted to provide more detailed guidance on copyright in “AI-assisted” works, but a lot of confusion remains. One guideline the Office promulgated in a recent webinar on copyright registration in works containing AI-generated material is for would-be registrants to disclose the contribution of an AI system when its contribution is more than “de minimis,” i.e., when the AI-generated creation would be entitled to copyright protection if it were created by a human. This means that using an AI tool to sharpen an image doesn’t require disclosure, but using an AI tool to generate one part of an image does. An author will then receive copyright protection in only their contributions to the work and the changes they made to the AI-generated portions. As Thaler shows, an author must make some changes to an AI-generated work in order to receive any copyright protection at all in that work.

All of this means, broadly speaking, that the more an author changes an AI-generated work—such as by using tools like Photoshop to alter an image or by editing AI-generated text—the more likely it is that the work will be copyrightable, and, by the same token, the less "thin" any copyright protection in the work will be. While there are open questions about how much creativity is required from a human in order to transform an AI-generated work into a copyrightable work of authorship, this case has underscored that at least some creativity is required—and using an AI tool that you yourself developed to create the work does not cut it. 

The way Thaler framed his Creativity Machine as the creator of the work in question also shows that it is important to avoid anthropomorphizing AI systems—just as the court rejected the notion of an AI-generated work being a work-made-for-hire, a creative work with both generative AI and human contributions probably could not be registered as a "co-authored" work. Humans are predisposed to attribute human characteristics to non-humans, like our pets or even our cars, a phenomenon which we have seen repeatedly in the context of chatbots. Regardless, it's important to remember that a generative AI program is a tool based on a model. And thinking of generative AI programs as creators rather than tools can distract us from the established and undisturbed principle in copyright law that only a human can be considered an author, and only a human can hold a copyright. 

Update: Consent Judgment in Hachette v. Internet Archive

Posted August 11, 2023
Photo by Markus Winkler on Unsplash

UPDATE: On Monday, August 14th, Judge Koeltl issued an order on the proposed judgment, which you can read here, and which this blog post has been updated to reflect. In his order, the judge adopted the definition of "Covered Book" suggested by the Internet Archive, limiting the permanent injunction, which remains subject to an appeal, to only those books published by the four publisher plaintiffs that are available in ebook form.

After months of deadline extensions, there is finally news in Hachette Books v. Internet Archive, the case about whether Controlled Digital Lending is a fair use, which we have been covering since its inception over two years ago, and in which Authors Alliance filed an amicus brief in support of Internet Archive and CDL. On Friday, August 11th, attorneys for the Internet Archive and a group of publishers filed documents in federal court proposing "an appropriate procedure to determine the judgment to be entered in this case," as Judge John G. Koeltl of the Southern District of New York had requested.

In a letter to the court, both parties indicated that they had agreed to a permanent injunction, subject to an appeal by IA, “enjoining the Internet Archive [] from distributing the ‘Covered Books’ in, from or to the United States electronically.” This means that the Internet Archive has agreed to stop distributing within the U.S. the books in its CDL collection which are published by the plaintiff publishers in the case (Hachette Book Group, HarperCollins, Wiley, and Penguin Random House), and are currently available as ebooks from those publishers. The publishers must also send IA a catalog “identifying such commercially available titles (including any updates thereto in the Plaintiffs’ discretion), or other similar form of notification,” and “once 14 days have elapsed since the receipt of such notice[,]” IA will cease distributing CDL versions of these works under the proposed judgment.

Open Questions

Last week's proposed judgment did leave an open question, which Judge Koeltl was asked to decide before issuing a final judgment: should IA be enjoined from distributing CDL versions of books published by the four publishers where those books are available in any form, or only where those books are available as ebooks? The difference may seem subtle, but it is quite meaningful. 

The publishers asked for a broader definition, whereby any of their published works that remain in print in any form are off the table when it comes to CDL. The publishers explain in a separate letter to the court that they believe that it would be consistent with the judgment to ban the IA from loaning out CDL versions of any of the commercially available books they publish, whatever the format. They argue that it should be up to the publishers whether or not to issue an ebook edition of the work, and that even when they decide not to do so (based on an author’s wishes or other considerations), IA’s digitization and distribution of CDL scans is still infringement. 

On the other hand, the Internet Archive asked the judge to confine the injunction to books published by the four publishers that are available as ebooks, leaving it free to distribute CDL scans of the publishers' books that are in print, but only available as print and/or audio versions. It argues that to forbid it from lending out CDL versions of books with no ebook edition available would go beyond the matters at issue in the case—the judge did not decide whether it would be a fair use to loan out CDL versions of books only available in print, because none of the works that the publishers based the suit upon were available only as print editions. Furthermore, IA explains that other courts have found that the lack of availability of a competing substitute (in this case, an ebook edition) weighs in favor of fair use under the fourth factor, which considers market competition and market harm.

It seems to me that the latter position is much more sensible. Beyond the fact that CDL scans of books available only in print were not at issue in the case, the fair use argument for this type of lending is quite different. One of the main thrusts of the judge's decision was his argument that CDL scans compete with ebooks, since they are similar products, but this logic does not extend to competition between CDL scans and print books, because the markets for digital and print versions of books are quite different. Some readers strongly prefer print versions of books, while others turn to electronic editions for reasons of disability, physical distance from libraries or bookstores, or simple preference. While we believe that IA's CDL program is a fair use, its case is even stronger when it comes to CDL loans of books that are not available electronically. 

Then on Monday, August 14th, Judge Koeltl issued an order and final judgment in the case, agreeing with the Internet Archive and limiting the injunction to books published by the four publishers which are available in ebook form. Again, this may seem minor, but I actually see it as a substantial win, at least for now. While even the more limited injunction is a serious blow to IA's controlled digital lending program, it does allow IA to continue to fill a gap in available electronic editions of works. The judge's primary reasoning was that books not available as ebooks were beyond the scope of what was at issue in the case, but he also mentioned that the factor four analysis could have been different had there been no ebook edition available.

Limitations of the Proposed Judgment

Importantly, the parties also stipulated that this injunction is subject to an appeal by the Internet Archive. This means that if the Internet Archive appeals the judgment (which it has indicated that it plans to do), and the appeals court overturns Judge Koeltl’s decision, for example by finding that its CDL program is a fair use, IA may be able to resume lending out those CDL versions of books published by the plaintiffs which are also available as ebooks. The agreement also does not mean that IA has to end its CDL program entirely—neither books published by other publishers nor books published by the publisher plaintiffs that are not available as ebooks are covered under the judge’s order.  

What’s Next?

The filing represents the first step towards the Internet Archive appealing the court’s judgment. As we’ve said before, Authors Alliance plans to write another amicus brief in support of the Internet Archive’s argument that Controlled Digital Lending is a fair use. Now that the judge has issued his final judgment, IA has 30 days to file a “notice of appeal” with the district court. Then, the case will receive a new docket in the Second Circuit Court of Appeals, and the various calendar and filing processes will begin anew under the rules of that court. We will of course keep our readers apprised of further developments in this case.

The Anti-Ownership Ebook Economy

Posted July 25, 2023
The Anti-Ownership Ebook Economy

Earlier this month, the Engelberg Center on Innovation Law and Policy at NYU Law released a groundbreaking new report: The Anti-Ownership Ebook Economy: How Publishers and Platforms Have Reshaped the Way We Read in the Digital Age. The report traces the history of ebooks and, through a series of interviews with publishers, platforms, librarians, and others, explains how the law and the markets have converged to produce the dysfunction we see today in the ebook marketplace.

The report focuses especially on the role of platform companies, such as Amazon, Apple, and OverDrive, which now play an enormous role in controlling how readers interact with ebooks. "Just as platforms control our tweets, our updates, and the images that we upload, platforms can also control the books we buy, keeping tabs on how, when, and where we use them, and at times, modifying or even deleting their content at will." 

Claire Woodcock

Last Friday, I spoke with one of the authors, Claire Woodcock, to learn a little bit more about the project and its goals: 

Q: What was your motivation to work on this project? 

A: My co-authors, Michael Weinberg, Jason Schultz, and Sarah Lamdan had all been working on this for well over a year [before] I joined. I knew Sarah from another story I’d written about an ebook platform that was prioritizing the platforming of disinformation last year, and she had approached me about this project. When I hopped on a call with the three of them, I believe it was Michael who posed the core question of this project: “Why can we not own, but only license ebooks?” 

I’ve thought about that question ever since. So my role in joining the project was to help talk to as many people as we could – publishers, librarians, platforms, and other stakeholders to try to understand why not. It seems like a simple question but there are so many convoluted reasons and we wanted to try to distill this down. 

Q: Many different people were interviewed for this project. Tell me about how that went. 

A: There was actually some hesitation to talk; I think a reason why was almost extreme fear of retaliation. So, it took a while to crack into learning about some of the different areas, especially with some publishers and platforms. I wish there were more of a willingness to engage on the part of some publishers, who would flat out tell me things like they weren't authorized to talk about their company's internal practices, or from platforms like OverDrive, who we sent our list of questions over to and never heard from again (until I ran into Steve Potash at the American Library Association's Annual Conference). I'd have loved to hear more from them directly when I was actively conducting interviews.

Q: I noticed there weren’t many interviews with authors. Can you say why not? 

A: Authors weren’t as big of a focus because we realized, particularly in talking with several literary agents, that from a business and legal perspective authors don’t have much of a say in how their books are distributed. Contractually, they aren’t involved in downstream use. I think it would be really interesting to do a follow up with authors to get their perspective on how their books are licensed or sold online.

Q: The report contains a number of conclusions and recommendations. Which among them are your favorite? 

A: One of the most striking things I learned, and what stuck out to me the most when I went back and listened to the interviews, is the importance of market consolidation and lack of competition. OverDrive has roughly 95% of the ebook marketplace for libraries (and I know it's different for academic publishing, for sure). The lack of competition in our society, especially in this area, makes it hard to speak up and speak out when a certain stakeholder has issues with the dominant market players. Because of that, looking at each of the groups of stakeholder types we spoke with, each could point to other groups causing the problem (it reminds me of the Spider-Man meme), and there are platforms and other publishers, mostly smaller, who want to make this work, but the major players are not doing that. It also stuck out that almost everyone we talked to describes librarians as partners, but when we talk to the librarians, they say "they think we are partners, but we don't feel like we have a seat at the table; decisions that impact us are often made without consulting us in a way that is transparent." 

Q: If you could do a follow up study, what additional big questions would you focus on? 

A: Lots of people talked about audiobooks. We were focused on ebooks, but the audiobook market is even more concentrated, and lots of people raised the issue that ebooks are only part of the issue. There is a version of this that is happening with audiobooks as well. I also think that the intersections of this market with television, platform streaming, and even other consumer goods like toys and other parts of the market are really interesting. What we’re seeing here, it’s a version of what’s happening in other creative industries. 

I also think it would be worth learning more about how libraries and others are working around the current issues. For example, lots of libraries ask for perpetual licenses, since they're working within the current context and looking to contracts for assurances, for example, that if something happened to the publisher's platform or to the company itself, the license agreement could still be honored. But are those efforts actually effective? And, given the importance of licensing, it might also be interesting to explore how libraries are resourced to negotiate those agreements, for example, with training and staff to negotiate. I think if libraries were better funded they would probably be able to better handle these challenges. 

Ninth Circuit Issues Decision in Hunley v. Instagram

Posted July 19, 2023
Photo by Alexander Shatov on Unsplash

On Monday, the Ninth Circuit issued a decision in Hunley v. Instagram, a case about whether Instagram (and platforms like it) can be held liable for secondary infringement based on its embedding feature, whereby websites employ code to display an Instagram post on their sites within their own content. We are delighted to announce that the court ruled in favor of Instagram, reinforcing important copyright principles which allow authors and other creators to link to and embed third-party content, enriching their writing in the process. 

Our Brief

Authors Alliance signed on to an amicus brief in this case, arguing that Instagram should not be held liable for contributory infringement for its embedding feature. We explained that Instagram was not liable under a precedential legal test established in Perfect 10 v. Amazon, and moreover that a ruling to the contrary could place our ability to link to other online content (which is analogous to embedding in many ways) at risk for legal liability. 

Narrowing the Perfect 10 test—which establishes that a website does not infringe when it does not store a copy of the relevant work on its server—would have struck a blow to how we share and engage with online content. Linking allows authors to easily cite other sources without disrupting the flow of their writing. By the same token, it allows internet users to verify information and learn more about topics of interest, all with the click of a button. We are pleased that the court ruled in favor of Instagram, declining to revisit the Perfect 10 test and holding that it foreclosed relief for the photographers that had filed the lawsuit. In so doing, the court has helped maintain a vibrant internet where all can share and engage with knowledge and creative expression.

The Decision

The case concerned a group of photographers whose Instagram posts were embedded into content by several media outlets. The photographers then sued Instagram in the Northern District of California, on the theory that by offering the "embedding" feature, it was facilitating copyright infringement by others and was therefore liable. The district court found that Perfect 10 applied to the case, and therefore that Instagram was not liable for infringement for the outlets' display of the posts. 

The Ninth Circuit agreed, and furthermore declined to revisit or narrow the Perfect 10 case for a number of reasons—it rejected the argument that the search engines at issue in the Perfect 10 case itself were somehow different from social media platforms, and affirmed that Perfect 10 was consistent with more recent Supreme Court case law. The court also cited with approval our argument that embedding and in-line linking have paved the way for innovation and creativity online, though it did not adopt that justification, reasoning that it is not a court's job to serve as a policymaker. In applying the Perfect 10 test, the court explained that Instagram did not infringe the photographers' copyrights, and where there is no direct infringement, there cannot be related secondary infringement. Instagram displayed a copy of the relevant photographs on its platform, which users permit via a license they agree to by using the platform. But it did not facilitate the images' display elsewhere, because the computer code used by the media platforms that embedded the Instagram posts did not make a copy of the posts, but rather formatted and displayed them. 

Copyright Office Hosts Listening Session on Copyright in AI-Generated Audiovisual Works

Posted June 26, 2023
Photo by Jon Tyson on Unsplash

On May 17, the Copyright Office held a listening session on the topic of copyright issues in AI-generated audiovisual works. You may remember that we’ve covered the other listening sessions convened by the Office on visual arts, musical works, and textual works (in which we also participated). In today’s post, we’ll summarize and discuss the audiovisual works listening session and offer some insights on the conversation.

Participants in the audiovisual works listening session included AI developers in the audiovisual space such as Roblox and Hidden Door; trade groups and professional organizations including the Motion Picture Association, Writers Guild of America West, and National Association of Broadcasters; and individual filmmakers and game developers. 

Generative AI Tools in Films and Video Games

As was the case in the music listening session, multiple participants indicated that generative AI is already being used in film production. The representative from the Motion Picture Association (MPA) explained that “innovative studios” are already using generative AI in both the production and post-production processes. As with other creative industries, generative AI tools can support filmmakers by increasing the efficiency of various tasks that are part of the filmmaking process. For example, routine tasks like color correction and blurring or sharpening particular frames are made much simpler and quicker through the use of AI tools. Other participants discussed the ways in which generative AI can help with ideation, overcoming “creativity blocks,” eliminating some of the drudgery of filmmaking, enhancing visual effects, and lowering barriers to entry for would-be filmmakers without the resources of more established players. These examples are analogous to the various ways that generative AI can support authors, which Authors Alliance and others have discussed, like brainstorming, developing characters, and generating ideas for new works.

The representative from the MPA also emphasized the potential for AI tools to “enhance the viewer experience” by making visual effects more dramatic, and in the longer term, possibly enable much deeper experiences like having conversations with fictional characters from films. The representative from Hidden Door—a company that builds “online social role-playing games for groups of people to come together and tell stories together”—similarly spoke of new ways for audiences to engage with creators, such as by creating a sort of fan fiction world with the use of generative AI tools, with contributions from the author, the user, and the generative AI system. And in fact, this can create “new economic opportunities” for authors, who can monetize their content in new and innovative ways. 

Video games are similarly already incorporating generative AI. In fact, generative AI’s antecedents, such as “procedural content generation” and “rule-based systems” have been used in video games since their inception. 

Centering Human Creators

Throughout the listening session, participants emphasized the role of human filmmakers and game developers in creating works involving AI-generated elements, stating or implying that creators who use generative AI should own copyrights in the works they produce using these tools. The representative from Roblox, an online gaming platform that allows users to program games and play other users’ games, emphasized that AI-generated content is effective and engaging because of the human creativity inherent in “select[ing] the best output” and making other creative decisions. A representative from Inworld AI, a developer platform for AI characters, echoed this idea, explaining that these tools do not exist in isolation, but are productive only when a human uses them and makes creative choices about their use, akin to the use of a simpler tool like a camera or paintbrush. 

A concern expressed by several participants—including the Writers Guild of America West, National Association of Broadcasters, and Directors Guild—is that works created using generative AI tools could devalue works created by humans without such tools. The idea of markets being "oversaturated" with competing audiovisual works raises the possibility that individual creators could be crowded out. While this is far from certain, it reflects increasing concerns over threats to creators' economic livelihoods when AI-generated works compete alongside theirs. 

Training Data and Fair Use

On the question of whether using copyrighted training materials to train generative AI systems is a fair use, there was disagreement among participants. The representative from the Presentation Guild likened the use of copyrighted training data without permission to "entire works . . . being stolen outright." They further said that fair use does not allow this type of use due to the commercial nature of the generative AI companies, the creative nature of the works used to train the systems (though it is worth noting that factual works, and others entitled to only "thin" copyright protection, are also used to train these tools), and because by "wrest[ing] from the creator ownership and control of their own work[,]" the market value for those works is harmed. This is not, in my view, an accurate statement of how the market effects factor in fair use works, because unauthorized uses that are also fair always wrest some control from the author; this is part of copyright's balance between an author's rights and permitting onward fair uses. 

The representative from the Writers Guild of America (“WGA”) West—which is currently on strike over, among other things, the role of generative AI in television writing—had some interesting things to say about the use of copyrighted works as training data for generative AI systems. In contract negotiations, WGA had proposed a contract which “would . . . prohibit companies from using material written under the Guild’s agreement to train AI programs for the purpose of creating other derivative and potentially infringing works.” The companies refused to acquiesce, arguing that “the technology is new and they’re not inclined to limit their ability to use this new technology in the future.” The companies’ positions are somewhat similar to those expressed by us and many others—that while generative AI remains in its nascency, it is sensible to allow it to continue to develop before instituting new laws and regulations. But it does show the tension between this idea and creators who feel that their livelihoods may be threatened by generative AI’s potential to create works with less input from human authors. 

Other participants, such as the representative from Storyblock, a stock video licensing company, emphasized their belief that creators of the works used to train generative AI tools should be required to consent, and should receive compensation and credit for the use of their works to train these models. This so-called "three C's" idea has gained traction in recent months. In my view, the use of training data is a fair use, making these requirements unnecessary from a copyright perspective, but it represents an increasingly prevalent view among rightsholders and licensing groups (including the WGA, motivating its ongoing strike in some respects) when it comes to making the use of generative AI tools more ethical. 

Adequacy of Registration Guidance

Several participants expressed concerns about the Copyright Office’s recent registration guidance regarding works containing AI-generated materials, and specifically how to draw the line between human-authored and AI-generated works when generative AI tools are used as part of a human’s creative process. The MPA representative explained that the guidance does not account for the subtle ways in which generative AI tools are used as part of the filmmaking process, where it often works as a component of various editing and production tools. The MPA representative argued that using these kinds of tools shouldn’t make parts of films unprotected by copyright or trigger a need to disclose minor uses of such tools in copyright registration applications. The representative from Roblox echoed these concerns, noting that when a video game involves thousands of lines of code, it would be difficult for a developer to disclaim copyright in certain lines of code that were AI-generated. 

A game developer and professor expressed her view that in the realm of video games, we are approaching a reality where generative AI is "so integrated into a lot of consumer-grade tools that people are going to find it impossible to be able to disclose AI usage." If creators or users do not even realize they are using generative AI when they use various creative digital tools, the Office's requirement that registrants disclose their use of generative AI in copyright registration applications will be difficult if not impossible to follow.

The JCPA, Again

Posted June 15, 2023
Photo by AbsolutVision on Unsplash

For those of you following along, you've seen the numerous posts we've made about the Journalism Competition and Preservation Act, e.g., here, here, and here. The bill, which supports neither competition nor the preservation of journalism, does have a compelling story behind it. Its apparent goal is to bolster local newsrooms and journalists by making it easier for them to negotiate with companies like Google or Meta (which link to news content), adding revenue to help aid in their operations. 

Today's update is that the JCPA is a little closer to becoming law: the Senate Judiciary Committee voted 14-7 to move the bill forward. We again joined a group of more than two dozen civil society organizations in opposing the bill in this letter led by Public Knowledge. We also joined a large group of organizations opposing a very similar bill introduced earlier this year in California. 

While the bill has some wonderful goals, it seems destined to fail at achieving them, while doing real damage to the broader online information ecosystem. As we’ve detailed before, the JCPA seems to create a pseudo-copyright regime in which platforms would have to pay for linking to news, which is a radical change in how the internet functions. It also includes provisions that would effectively force social media platforms to carry certain news outlet coverage, even when a platform disagrees with the views that those news outlets express, thus undermining Section 230 protections for platforms that want to remove false or misleading content from their websites. 

As for the actual competition issues, the bill has been contorted so that its aims of competition and support for small news outlets have been co-opted by the biggest commercial publishers. For example, the bill's supporters say it doesn't benefit the biggest news outlets, but its cap of 1,500 employees would exclude a grand total of *3* of the largest newspapers in the US, while the JCPA's minimum threshold of $100,000 in revenue would leave out the smallest, most vulnerable newsrooms. Further, that numerical cap doesn't apply to broadcasters at all, which means the bill actually favors companies like News Corp., Sinclair, iHeartRadio, and NBCU. 

The Senate Judiciary Committee markup earlier today (you can watch the recording here) was relatively tame, but it was clear that there was very little agreement about what the bill would actually accomplish, or what its unintended consequences might be. The recurring theme throughout was that something must be done to protect and support journalism and that it is unfair that big tech companies are reaping incredible profits while small news publishers are getting very little of the financial pie and are struggling to survive. While we agree with both of these propositions, unfortunately, the JCPA seems uniquely ineffective at fixing the problem. 

Supreme Court Announces Decision in Jack Daniel’s v. VIP Products

Posted June 8, 2023
Photo by Chris Jarvis on Unsplash

Today, the Supreme Court handed down a decision in Jack Daniel's v. VIP Products, a trademark case about the right to parody popular brands, in which Authors Alliance submitted an amicus brief with the support of the Harvard Cyberlaw Clinic. In a unanimous decision, the Court vacated the Ninth Circuit's decision and remanded, asking the lower courts to re-hear the case under a new, albeit very narrow, principle announced by the Court: special First Amendment review is not appropriate in cases where one brand's trademark is used in another's product, even as a parody. In addition to the majority opinion delivered by Justice Kagan, there were two concurring opinions, by Justice Sotomayor and Justice Gorsuch, each joined by other justices. 

Case Background

The case concerns a dog toy that parodies Jack Daniel’s famous Tennessee Whiskey bottle, using some of the familiar features from the bottle, and bearing the label “Bad Spaniels.” After discovering the dog toy, Jack Daniel’s requested that VIP cease selling the toys. VIP Products refused, then proceeded to file a trademark suit, asking for a declaratory judgment that its toy “neither infringed nor diluted Jack Daniel’s trademarks.” Jack Daniel’s then countersued to enforce its trademark, arguing that the Bad Spaniels toy infringed its trademark and diluted its brand. We became interested in the case because of its implications for creators of all sorts (beyond companies making humorous parody products). 

As we explain in our amicus brief, authors rely on their ability to use popular brands in their works. For example, fiction authors might send their characters to real-life colleges and universities, set scenes where characters dine at real-life restaurant chains, and use other cultural touchstones to enrich their works and ultimately, to express themselves. While the case is about trademark, the First Amendment looms large in the background. A creator’s right to parody brands, stories, and other cultural objects is an important part of our First Amendment rights, and is particularly important for authors. 

Trademark law is about protecting consumers from being confused as to the source of the goods and services they purchase. But it is important that trademark law be enforced consistent with the First Amendment and its guarantees of free expression. And importantly, trademark litigation is hugely expensive, often involving costly consumer surveys and multiple rounds of hearings and appeals. We are concerned that even the threat of litigation could create a chilling effect on authors, who might sensibly decide not to use popular brands in their works based on the possibility of being sued. 

In our brief, we suggested that the Court implement a framework like the one established by the Second Circuit in Rogers v. Grimaldi, “a threshold test . . . designed to protect First Amendment interests in the trademark context.” Under Rogers, in cases of creative expressive works, trademark infringement should only come into play “where the public interest in avoiding consumer confusion outweighs the public interest in free expression.” It establishes that trademark law should only be applied where the use “has no artistic relevance to the underlying work whatsoever, or, if it has some artistic relevance, unless the [second work] explicitly misleads as to the source or the content of the work.”

The Supreme Court’s Decision

In today’s decision, the Court held that “[w]hen an alleged infringer uses a trademark as a designation of source for the infringer’s own goods, the Rogers test does not apply.” Without directly taking a position on the viability of the Rogers test, the Court found that the test was inapplicable in this circumstance, where it believed that VIP Products used Jack Daniel’s trademarks as “source identifiers.” It held that the Rogers test is not appropriate when the accused infringer has used a trademark to designate the source of its own goods—in other words, has used a trademark as a trademark. The fact that the dog toy had “expressive content” did not disturb this conclusion. 

Describing Rogers as a noncommercial exclusion, the Court said that VIP’s use was commercial, as it appeared on a dog toy available for sale (i.e., a commercial product). Further supporting this conclusion, the Court pointed to the fact that VIP Products had registered a trademark in “Bad Spaniels.” It found that the Ninth Circuit’s interpretation of the “noncommercial use exception” was overly broad, noting that the Rogers case itself concerned a film, an expressive work entitled to the highest First Amendment protection. On this basis, the Court vacated the lower court’s decision. 

The Court instead directed the lower court to consider a different inquiry: whether consumers will be confused as to whether Bad Spaniels is associated with Jack Daniel’s, rather than focusing on the expressive elements of the Bad Spaniels toy. But the Court also explained that “a trademark’s expressive message—particularly a parodic one, as VIP asserts—may properly figure in assessing the likelihood of confusion.” In other words, the fact that the Bad Spaniels toy is (at least in our view) a clear parody of Jack Daniel’s may make it more likely that consumers are not confused into thinking that Jack Daniel’s is associated with the toy. In her concurrence, Justice Sotomayor underscored this point by cautioning lower courts against relying too heavily on survey evidence when deciding whether consumers are confused “in the context of parodies and potentially other uses implicating First Amendment concerns.” In so doing, Justice Sotomayor emphasized the importance of parody as a form of First Amendment expression. 

The Court’s decision is quite narrow. It does not disapprove of the Rogers test in other contexts, such as when a trademark is used in an expressive work, and as such, it is unlikely to have a large impact on authors using brands and marks in their books and other creative expression. Lower courts across the country that do use the Rogers test may continue to do so under VIP Products. Justice Gorsuch’s concurrence does express some skepticism about the Rogers test and its applications, cautioning lower courts to handle the test with care. However, as a concurrence, this opinion has much less precedential effect than the majority’s. 

Remaining Questions

All of this being said, the Court does not explain why First Amendment scrutiny should not apply in this case, but merely reiterates that Rogers as a doctrine is and has always been “cabined,” with “the Rogers test [applying] only to cases involving ‘non-trademark uses[.]’” The Court relies on that history and precedent rather than explaining its reasoning. Nor does the Court discuss the relevance of the commercial/noncommercial use distinction when it comes to the role of the First Amendment in trademark law. In our view, the Bad Spaniels toy did contain some highly expressive elements and functioned as a parody, so this omission is significant. And it may create some confusion for closer cases—at one point, Justice Kagan explains that “the likelihood-of-confusion inquiry does enough work to account for the interest in free expression,” “except, potentially, in rare situations.” We are left to wonder what those are. She further notes that the Court’s narrow decision “does not decide how far the ‘noncommercial use exclusion’ goes.” This may leave lower courts without sufficient guidance as to the reach of the noncommercial use exclusion from trademark liability and as to what “rare situations” merit application of the Rogers test to commercial or quasi-commercial uses.