
Publishers’ reply brief in Hachette v. Internet Archive: First Impressions

Posted March 15, 2024

Dave Hansen and Kyle Courtney jointly authored this post. They are also the authors of a White Paper on Controlled Digital Lending of Library Books. We are not, as the Publishers claim in their brief on page 13, a “cadre of boosters.” We wrote the paper independently as part of our combined decades of work on libraries and access to knowledge.

Earlier today the publishers (Hachette, HarperCollins, John Wiley, and Penguin Random House) filed their reply brief on appeal in their long-running lawsuit against the Internet Archive, which challenges (among other things) the practice of controlled digital lending.

In the months since the decision, we have been observing all the hot takes, cheers, jeers, and awkward declarations about the case, the Internet Archive itself, and Controlled Digital Lending (CDL).

This post is not part of that fanfare. Here, we want to identify a few critical issues that the publishers focus on in their brief, including some questionable fair use analysis that they repeat from the district court below. Much of the brief is framed in heated rhetoric that may cause alarm, but much like publishers’ past announcements about interlibrary loan, e-reserves, or document delivery, we believe controlled digital lending is here to stay, regardless of the lower court’s poor copyright analysis and the publishers’ current brief.

Framing the Question

As is often the case, the parties disagree on what this case is actually about. For its part, Internet Archive says in its “Statement of the Issue on Appeal” that the question is “whether Internet Archive’s controlled digital lending is fair use.” The publishers, on the other hand, reframe the question more broadly, which, in combination with their arguments throughout the brief, seems intended not just to kill IA’s implementation of controlled digital lending, but to encourage the court to rule in a way that would call into question all other library applications of CDL. They say that the question is “whether IA’s infringement of the Publishers’ Works is fair use based on IA’s CDL theories and practices.”

This litigation, coordinated by the AAP,  seems to us an attempt to undermine what libraries have done for centuries: lend the books that they already lawfully own. Ironically, the opposition calls CDL a made-up theory created by a “cadre of boosters,” but in actuality, it’s the publishers’ licensing system that is a modern, made-up invention. The works themselves are unchanged, but the nature of digital delivery allows publishers to charge people in new ways. There is nothing in the Copyright Act that states ebook licensing is, or should be, the default way for libraries to acquire and lend books. 

Commercial vs. Non-Profit Use

One of the most criticized aspects of the decision below is the lower court’s conclusion that IA’s activities are commercial, as opposed to non-profit. The publishers’ brief enthusiastically embraces this conclusion, while also attempting to drive a wedge between IA’s lending and that of other libraries: “IA’s practices are distinctly commercial – especially in comparison to public and academic libraries.”

The district court concluded that IA’s activity was commercial because it “stands to profit” through its partnership with Better World Books on its website, and by “us[ing] its Website to attract new members, solicit donations, and bolster its standing in the library community” (p. 26).

As many amici pointed out earlier in the appeal, the use of a nonprofit’s website to solicit donations is routine; it would be chilling for sites like Wikipedia, Project Gutenberg, HathiTrust, and others (all of whom filed briefs in this case) to face heightened copyright liability just because they seek donations in combination with aspects of their sites that rely on a fair use assertion. The publishers attempt to distance themselves from this absurd result (“The concern that Judge Koeltl’s analysis ‘would render virtually all nonprofit uses commercial’ is wildly overblown”), but it is clear from the number and diversity of amici who filed to speak to just this issue that the concern is very real.

As for Better World Books (BWB): BWB  is an online bookstore and a Certified B Corporation, meaning that it achieves high standards of social and environmental performance, transparency, and accountability. B Corps are committed to using business as a force for good in the world. According to its website, BWB donates books to nonprofit organizations, including the Internet Archive. As of November 2019, IA and BWB have a partnership to digitize books for preservation purposes. 

The focus on the supposedly commercial relationship with Better World Books (a used book reseller) seems to us a stretch based on the facts. The publishers’ brief makes a big deal of Better World Books (referencing them over 20 times in the brief), and argues that IA’s use is commercial because a) IA encourages readers to purchase books through links on its site to Better World Books, and b) Better World Books donates some funds back to IA. The first point is perplexing – one would think the publishers would be pleased that readers are encouraged to purchase copies of their books, even if on the used market. But the latter point about Better World Books’ commercial influence on IA’s operation is just not rooted in the facts of the case. As IA laid out in its opening brief, it has only received $5,561.41 from Better World Books in the relevant time frame. That’s an infinitesimally small drop in the bucket compared to the costs that IA has borne to digitize and lend books for no monetary return from readers. It’s hard to see how such an amount could be construed to tilt IA’s entire operation into a commercial activity.

For anyone who has actually worked on such projects, it is clear that IA is not archiving or lending books for commercial purposes. The idea that there is money to be made in doing so is laughable. Instead, it is providing access to knowledge and cultural heritage. This fundamental point somehow got lost on the publishers on the road to enormous profits.

eBooks vs. Digitized Books

There are lots of nuances that got lost in the decision below, which we believe were helpfully addressed by amici filings earlier in this appeal (e.g., the privacy implications of licensed ebooks vs. CDL copies lent by libraries).  The publishers seem happy to gloss over the details again in this brief, particularly when it comes to the differences between licensed ebooks and those that are lent out with CDL. 

First, the publishers’ brief makes clear they really don’t like it when books are available for free. They use the word “free” 33 times (about once every other page of the brief)! Many of the references obscure what “free” really means, though – for example, asserting that “Two Publishers believe that 39-50% of American ebook consumers read their ebooks for free from libraries rather than paying for their own commercial ebooks” (emphasis added) while ignoring the exorbitant costs and other burdens placed on libraries and the public to fund that licensed access. This is a major part of why libraries have responded both by embracing CDL and by advocating for laws that would require fair licensing terms for ebooks.

Second, as far as market harm goes, the Publishers assert that “IA offered the Publishers’ library and consumer customers a free competing substitute to the authorized ebook editions,” essentially arguing that “you can’t compete with free.” But that is just not true. Examples are trivially easy to conjure up: free, open source software has long competed with Microsoft’s and Apple’s offerings, yet how often do you run into someone who uses LibreOffice or Ubuntu? And of course in creative industries, we’ve seen this kind of model take hold in numerous areas, including book publishing, with “freemium” models.

That’s because products that are free often offer a different user experience than those that aren’t. Usually when someone opts to pay, they’re paying for an enhanced experience. The same holds true of books scanned for CDL vs. licensed ebooks. CDL books are just that – they are digitized physical books. They don’t have the nice, crisp text of licensed ebooks, nor the interactive features. You can’t highlight, or change the font, or look up a word by touching it, or do any of the myriad of functions that you can with an ebook. 

That a library is loaning and controlling those copies is also a major distinguishing factor, because borrowing a book from a library (along with all the special privacy protections one receives) provides a vastly different reading environment than one in which vendors can scrape, process and sell data about your reading experience. Notably, the publishers did not engage with this argument. 

“IA refuses to pay the customary price and join the Publishers’ thriving market for authorized library ebooks…”

Good gravy! According to the publishers, libraries should be forced to pay over and over again for the same book, to join a market that there is no evidence they are harming.

The publishers devote a large portion of their brief – nearly 20 pages – to arguing about market harm. Most of it comes down to the assertion that the mere fact of the existence of a digital book market means that CDL must negatively impact the rightsholders’ profits (despite no empirical evidence of market harm). The lower court decision stated that IA has the “burden to show a lack of market harm” (p. 43), and concluded (without reference to meaningful evidence) that “harm here is evident” (p. 44), an assumption on which the publishers are happy to rest.

There is a genuinely important legal question raised here about which party needs to prove what when it comes to market harm. The publishers’ brief relies heavily on the idea that IA bears the burden on every point of its fair use defense, especially market harm. But as IA points out in its opening brief,

“Although the Supreme Court has stated fair use is an affirmative defense for which defendants bear the burden (Campbell, 510 U.S. at 1177), it has also suggested this burden may apply differently to noncommercial uses than commercial ones. Sony stated that noncommercial cases require “a showing by a preponderance of the evidence that some meaningful likelihood of future harm exists.” 464 U.S. at 417; see Princeton Univ. Press v. Mich. Document Servs., Inc., 99 F.3d 1381, 1385–86 (6th Cir. 1996) (“The burden of proof as to market effect rests with the copyright holder if the challenged use is of a ‘noncommercial’ nature.”).”

Conclusion

The brief is predictably hyperbolic, and continues to refuse to allow for any room for digital lending based on a misreading, in our view, of precedents such as Sony, TVEyes, and ReDigi. But, CDL is not some form of library-sanctioned piracy. CDL is based in copyright, fair use, and the public mission of libraries, while also broadening access to the books that library systems spend billions of dollars to collect and maintain for the public—including long-neglected, out-of-print books with enormous social and scholarly value and books for which commercial ebook licenses are not available.

During the pandemic, the importance of digital library access became strikingly apparent. It is unfortunate that the Publishers chose that moment of national emergency to sue a non-profit library for loaning books digitally. CDL simply seeks to preserve the library’s long-established and vital mission to collect and lend books in an increasingly licensed-access digital world.

Authors Alliance 10th Anniversary Event: Authorship in an Age of Monopoly and Moral Panics

Register here for this IN-PERSON event
hosted in San Francisco at the Internet Archive on May 17

Moral panics about technology are nothing new for creators. Copyright, in particular, has been a favorite tool to excite outrage. We were told that the motion picture industry would “bleed and bleed and hemorrhage” if the law didn’t prohibit VCRs. Because of the photocopier, industry experts warned that “the day may not be far off when no one need purchase books.” MP3 players, we were told, would leave us with no professional musicians, only amateurs.

Today, we are told that librarians lending books online will undo the publishing industry, and that AI will destroy entire creative industries as we know them.  At the same time, authors face real and unprecedented challenges in reaching readers, working within an increasingly consolidated publishing marketplace, a concentrated technology stack that seems aimed at optimizing ad revenue over all else, and a labyrinth of private agreements over which authors have almost no say. 

So what’s real and what’s hyperbole? Join us on May 17th to celebrate Authors Alliance’s 10th anniversary and be part of an engaging discussion with leading experts to cut through the hype and hear about the real challenges and opportunities facing authors who want to be read. 

The event will include a keynote address from author, activist, and journalist Cory Doctorow, as well as a series of panel discussions with leading experts on authorship, law, technology, and publishing. More details about panels will be posted in the coming weeks.

Register here
Hosted in person in San Francisco at the Internet Archive
May 17, 2024
4:00pm to 7:00pm
Reception to Follow

For those of you who can’t join us in person, the event will be recorded and video shared out to Authors Alliance members (so if you aren’t a member, join (for free) today!)

Why Fair Use Supports Non-Expressive Uses

Posted February 29, 2024

This post is part of Fair Use Week series, cross-posted at https://sites.harvard.edu/fair-use-week/2024/02/29/fair-use-week-2024-day-four-with-guest-expert-dave-hansen/

AI programs and their outputs raise all sorts of interesting questions–now found in the form of some 20+ lawsuits, many of them massive class actions.

One of the most important questions is whether it is permissible to use copyrighted works as training data to develop AI models themselves, on top of which AI services like ChatGPT are built (read here for a good overview of the component parts and “supply chain” of generative AI, reviewed through a legal lens).

For the question of fair use of AI training data, you’ll find that almost everyone writing about this question in the US context says the answer turns on two or three precedents–especially the Google Books case and the HathiTrust case–and a concept referred to as “non-expressive use” (or sometimes “non-consumptive use”).  This concept of non-expressive use and those cases have proven to be foundational for all sorts of applications that extend well beyond generative AI, including basic web search, plagiarism detection tools, and text and data mining research. Since this idea has received so much attention, I thought this fair use week was a good opportunity to explore what this concept is. 

What is non-expressive use? 

Non-expressive use refers to uses that involve copying, but don’t communicate the expressive aspects of the work to be read or otherwise enjoyed. It is a term coined, as far as I can tell, by law professor Matthew Sag in a series of papers beginning with “Copyright and Copy-Reliant Technology” (in which he observes that courts have been approving of such uses – for example, in search engine cases – albeit without a coherent framework), then more directly in “Orphan Works as Grist for the Data Mill,” and later in an article titled “The New Legal Landscape for Text Mining and Machine Learning.” You can do much better than this blog post if you just read Matt’s articles. But, since you’re here, the argument is basically built on two propositions:

Proposition #1: “Facts are not copyrightable” is a phrase you’ll hear somewhere near the beginning of any copyright 101 lecture. It, along with the “idea-expression” dichotomy and some related doctrines, is one of the ways that copyright law draws a line between protected content and the underlying facts and ideas that anyone is free to use. These protections for free use of facts and ideas are more than just a line in the sand drawn by Congress or the courts. As the U.S. Supreme Court explained in Eldred v. Ashcroft:

“[The] idea/expression dichotomy strike[s] a definitional balance between the First Amendment and the Copyright Act by permitting free communication of facts while still protecting an author’s expression. Due to this distinction, every idea, theory, and fact in a copyrighted work becomes instantly available for public exploitation at the moment of publication.” (citations and quotations omitted).

The law has therefore recognized the distinction between expressive and non-expressive works (for example, copyright exists in a novel, but not in a phone book), and that this distinction is so important that the Constitution mandates it. The exact contours of this line have been the subject of a long and not always consistent history, but they have slowly come into focus in cases from Baker v. Selden (1879) (“there is a clear distinction between the book, as such, and the art which it is intended to illustrate”) to Feist Publications v. Rural Telephone (1991) (no copyright in telephone white pages).

Proposition #2: Fair use is also one of the Copyright Act’s First Amendment safeguards, per the Supreme Court in Eldred. The “transformative use” analysis, in particular, does a lot of work in giving breathing room for others to use existing works in ways that allow for their own criticism and comment. It also has provided ample space for uses that rely on copying to unearth facts and ideas contained within and about underlying works, particularly when doing so in a way that provides a net social benefit.

Transformative use, though not always easy to define in practice, favors uses that avoid substituting for the original expression, but that reuse that content in new ways, with new meaning, message and purpose. While this can apply to downstream expressive uses (e.g., parody is the paradigmatic example that relies on reusing expression itself), its application to non-expressive uses can look even stronger. This is why you find courts like the 9th Circuit in a case about image search saying things like “a search engine may be more transformative than a parody because a search engine provides an entirely new use for the original work, while a parody typically has the same entertainment purpose as the original work,” where search engines copy underlying works primarily for the purpose of helping users discover them. 

Fair use for non-expressive use

We now have several cases that address non-expressive uses for computational analysis of texts. Three cases in particular stand out. In A.V. ex rel. Vanderhye v. iParadigms, the Fourth Circuit in 2009 analyzed a plagiarism detection tool that ingested papers and then created a “digital fingerprint” to match them to duplicate content, using a statistical technique originally designed to analyze brain waves. The court there concluded that “iParadigms’ use of these works was completely unrelated to expressive content” and therefore constituted transformative fair use. Then, in Authors Guild v. HathiTrust and Authors Guild v. Google, the Second Circuit in successive opinions in 2014 and 2015 approved of copying books at massive scale for the purpose of full-text search and related computational, analytical uses. The court in Google Books, fully briefed on the implications of these projects for computational analysis of texts, explained:

As with HathiTrust (and iParadigms), the purpose of Google’s copying of the original copyrighted books is to make available significant information about those books, permitting a searcher to identify those that contain a word or term of interest, as well as those that do not include reference to it. In addition, through the ngrams tool, Google allows readers to learn the frequency of usage of selected words in the aggregate corpus of published books in different historical periods. We have no doubt that the purpose of this copying is the sort of transformative purpose described in Campbell.

Example from the Digital Humanities Scholars brief in the Google Books case, illustrating one text mining use enabled by the Google Books corpus.
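The court’s description of iParadigms’ “digital fingerprint” is high-level, and the actual system is proprietary. But the general family of techniques is well understood, and a minimal sketch helps show why such a fingerprint is non-expressive: it is a bag of hashes that can reveal overlap between documents without reproducing any readable text. (This word-shingling approach is an illustrative assumption, not iParadigms’ actual method.)

```python
import hashlib

def fingerprint(text: str, k: int = 5) -> set[int]:
    """Hash every k-word window ("shingle") of a document into a set of
    integers. The set reveals textual overlap but contains no readable
    expression from the original work."""
    words = text.lower().split()
    shingles = (" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1)))
    return {int(hashlib.sha1(s.encode()).hexdigest()[:8], 16) for s in shingles}

def similarity(a: set[int], b: set[int]) -> float:
    """Jaccard similarity of two fingerprints: 1.0 = identical word sequences,
    0.0 = no shared k-word sequences at all."""
    return len(a & b) / len(a | b) if a | b else 0.0
```

Comparing a paper against a database of stored fingerprints flags likely copying when similarity is high, while an unrelated document scores near zero, all without anyone reading or displaying the underlying texts.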

So, back to AI 

There are certainly limits to how much of an underlying work can be described before one crosses the line from non-expressive use to substantial use of expressive content. For example, uses that reproduce extensive facts from underlying works merely to repackage content for the same purpose as the original works may face challenges, as in Castle Rock Entertainment v. Carol Publishing (about Carol Publishing’s “Seinfeld Aptitude Test” based on facts from the Seinfeld series), which the court concluded was made merely to “repackage Seinfeld to entertain Seinfeld viewers.” And there are real questions (discussed in two excellent recent essays, here and here) about how the law may respond in practice to AI products, particularly ones where outputs look – or at least can be made to look – suspiciously similar to inputs used as training data.

How AI models work is explained much more thoroughly (and much better) elsewhere, but the basic idea is that they are built by developing extraordinarily robust word vectors used to represent the relationships between words. To do this well, these models need to train on a large and diverse set of texts to build out a good model of how humans communicate in a variety of contexts. In short, these models copy texts for the purpose of developing a model that describes facts about the underlying works and the relationships of words within them and with each other. What’s new is that we can now do this at a level of complexity and scale almost unimaginable before. Scale and complexity don’t change the underlying principles at issue, however, and so this kind of training seems to me clearly within the bounds of non-expressive use as approved already by the courts in the cases cited above – cases that authors, researchers, and the tech industry have been relying on for nearly a decade.
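The word-vector idea above can be illustrated with a toy example. Real models learn dense vectors through training rather than counting co-occurrences, so this is purely a sketch of the principle: the vectors record statistical facts about which words keep company with which other words, not any work’s expression.

```python
from collections import Counter
import math

def word_vectors(corpus: list[str], window: int = 2) -> dict[str, Counter]:
    """Build a co-occurrence vector for each word: counts of which other
    words appear within `window` positions of it. The resulting vectors
    capture relationships between words, not the texts themselves."""
    vecs: dict[str, Counter] = {}
    for sentence in corpus:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            context = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(context)
    return vecs

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two co-occurrence vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Words that appear in similar contexts (say, “cat” and “dog” in sentences about sitting on mats) end up with similar vectors, while unrelated words do not. That kind of aggregate statistical fact about language is what training extracts, which is why it maps onto the non-expressive use framework.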

Fair Use Week Webinar: Fair Use in Text Data Mining and Artificial Intelligence

Posted February 16, 2024
Text Miner, generated by MidJourney

Computational research techniques such as text and data mining (TDM) hold tremendous opportunities for researchers across the disciplines – from mining scientific articles to create better systematic reviews, to curating chemical property datasets, to building a corpus of films to understand how concepts of gender, race, and identity are portrayed over time. Unfortunately, legal uncertainty, whether from copyright or from restrictive terms of use, can stifle this research. Recent copyright lawsuits, such as the high-profile cases brought against Microsoft, GitHub, and Stability AI, underscore the legal complications.

So how can fair use allow for computational research techniques? Join us for this Fair Use Week webinar, co-sponsored with the Library Copyright Institute, to find out!

Wednesday, February 28, 2024
1pm – 2:30pm ET / 10am – 11:30am PT
Register here

We’ve written quite a bit about fair use in TDM and AI for research applications already, and the topic is certainly complicated. Join us for this event to hear live from legal experts and researchers. We plan to include substantial time for Q&A, so bring your questions! Panelists include: 

  • Dave Hansen, Executive Director, Authors Alliance
  • Rachael Samberg, Scholarly Communications Officer, UC Berkeley
  • Lauren Tilton, Claiborne Robins Professor of Liberal Arts and Digital Humanities, University of Richmond

Book Talk: Wrong Way by Joanne McNeil

Posted February 13, 2024

Join us for a VIRTUAL book talk with author Joanne McNeil about her latest book, WRONG WAY, which examines the treacherous gaps between the working and middle classes wrought by the age of AI. McNeil will be in conversation with author Sarah Jaffe.

This is the first Internet Archive / Authors Alliance book talk for a work of fiction! Come for a reading, stay for a thoughtful conversation between McNeil & Jaffe about the labor implications of artificial intelligence.

February 29 @ 10am PT / 1pm ET
VIRTUAL

REGISTER NOW

WRONG WAY was named one of the best books of 2023 by the New Yorker and Esquire. It was the Endless Bookshelf Book of the Year and named one of the best tech books by the LA Times.

“Wrong Way is a chilling portrait of economic precarity, and a disturbing reminder of how attempts to optimize life and work leave us all alienated.”
—Adrienne Westenfeld, Esquire

For years, Teresa has passed from one job to the next, settling into long stretches of time, struggling to build her career in any field or unstick herself from an endless cycle of labor. The dreaded move from one gig to another is starting to feel unbearable. When a recruiter connects her with a contract position at AllOver, it appears to check all her prerequisites for a “good” job. It’s a fintech corporation with progressive hiring policies and a social justice-minded mission statement. Their new service for premium members: a functional fleet of driverless cars. The future of transportation. As her new-hire orientation reveals, the distance between AllOver’s claims and its actions is wide, but the lure of financial stability and a flexible schedule is enough to keep Teresa driving forward.

Joanne McNeil, who often reports on how the human experience intersects with labor and technology, brings blazing compassion and criticism to Wrong Way, examining the treacherous gaps between the working and middle classes wrought by the age of AI. Within these divides, McNeil turns the unsaid into the unignorable, and captures the existential perils imposed by a nonstop, full-service gig economy.

REGISTER NOW

About our speakers

JOANNE MCNEIL was the inaugural winner of the Carl & Marilynn Thoma Art Foundation’s Arts Writing Award for an emerging writer. She has been a resident at Eyebeam, a Logan Nonfiction Program fellow, and an instructor at the School for Poetic Computation.
Joanne is the author of Lurking: How a Person Became a User.

SARAH JAFFE is an author, independent journalist, and a co-host of Dissent magazine’s Belabored podcast.

Book Talk: Wrong Way by Joanne McNeil
February 29 @ 10am PT / 1pm ET
VIRTUAL
Register now!

A Copyright Small Claims Update: Defaults and Failure to Opt Out

Posted February 1, 2024

We’ve been tracking for a few years the new copyright small claims court known as the Copyright Claims Board. My last update was in September when I posted a summary of a paper I wrote with Katie Fortney summarizing data about the first year of operations of the court (thanks entirely to Katie for doing the hard work of extracting that data and sharing it in an easy-to-understand format). 

As explained then, the CCB has been slow in processing cases; it only entered a final judgment on the merits in one case when I last wrote. It has now issued a total of 18 final determinations, about half of which are default determinations (cases where the respondent failed to appear or refused to participate in the CCB process). The facts for most of these cases are not very interesting, but two of the most recent caught my attention. 

Oakes v. Heart of Gold Pageant System

The first case, Oakes v. Heart of Gold Pageant System Inc., highlights a concern raised by opponents of the CCB when it was being debated in Congress: namely, that the CCB’s ability to make default determinations could be a trap for unwary defendants who don’t understand what the CCB is, what a case before it could mean for them, or what their rights are to opt out of a CCB proceeding.

The facts are unspectacular: Oakes, a professional photographer represented by Higbee & Associates, filed a CCB complaint against Heart of Gold and its owner, Angel Jameson, for using photographs taken by Oakes on Heart of Gold’s Facebook page and in materials for events it sponsored. Oakes originally filed the claim in July 2022 and then refiled it in August 2022 with some corrections. Oakes then provided the CCB with the required proof of service (proof that Oakes had adequately informed Heart of Gold and Jameson of the CCB claim) in October 2022. 

At this point, the ball was in Heart of Gold and Jameson’s court: Jameson could either respond and defend her use, or (within 60 days of service) opt out of the CCB proceeding altogether. Unfortunately for her, she did neither, which resulted in a default determination against her for $4,500.

We learn in the final determination a little more about Jameson’s lack of participation. As the CCB recounts in its final default determination: 

“At multiple points in this procedural history, Jameson has contacted the CCB, and after communicating with staff, has affirmed each time her intent to not participate in this proceeding.”

“Jameson initially contacted the Board in response to this Zoom link, expressing her disbelief that the Board is a government tribunal.”

“Jameson then sent another email in response to the First Default, requesting an ‘official day in court.’”

“In a subsequent call with CCB staff in March, Jameson indicated that she would not participate.”

“Shortly after the order scheduling the hearing, Jameson contacted the U.S. Copyright Office’s Public Information Office, who placed her in contact with CCB staff. In a follow-up call, CCB staff again explained the proceeding and Jameson again affirmed that she would not participate in the proceeding.”

Jameson missed her opportunity to opt out early in the case – she had a sixty-day window to do so, as defined by CCB regulations. So, her protests later were ineffective to opt out, even though it seems clear that she did not want her case to be heard by the CCB. 

Joe Hand Promotions v. Dawson 

A second default determination case offers a slightly different view of how the CCB treats defaults. The facts are similarly straightforward: Joe Hand is a company that “specializes in commercially licensing premier sporting events to commercial locations such as bars, restaurants, lounges, clubhouses, and similar establishments.” Joe Hand had obtained the exclusive right to sell pay-per-view access to a boxing event, “Deontay Wilder vs. Tyson Fury II,” to commercial establishments, including bars. Joe Hand provided evidence that a California bar, “Bottoms Up,” had shown the match without permission.

Joe Hand (a frequent filer with the CCB, with 33 cases to its name) ran into a problem in this case, however, because it didn’t actually file its case against Bottoms Up, but instead against the individual who is listed on the bar’s liquor license and ownership documents, Mary Dawson. Even in Dawson’s absence, the CCB was unwilling to rubber-stamp Joe Hand’s claims against her. The final determination explained,

“Beyond the conclusory and clearly boilerplate allegations in the Claim that Dawson (and now-dismissed respondent Giglio) ‘owned, operated, maintained, and controlled the commercial business known as Bottoms Up Bar & Grill’ and ‘had a right and ability to supervise the activities of the Establishment on the date of the Program and had an obvious and direct financial interest in the activities of the Establishment on the date of the Program’ (Dkt. 1), Claimant offers absolutely no information linking Respondent to the infringement.”

I will spare you the details, but the CCB went on to cite case after case explaining why courts have routinely rejected such boilerplate claims, and required plaintiffs to at least allege meaningful facts connecting an individual to an act of infringement.  Even in this default case where Dawson was not present to defend herself, the CCB put in the effort on her behalf. 

Takeaways

I have a few observations. In the first case, given that Jameson clearly did not want her case heard before the CCB, I think it would have been fair for the CCB to allow her a second chance to opt out. At least on the record we have available, there is no indication that the CCB offered her that chance.  Although the normal opt-out period extends only sixty days after service, the CCB opt-out regulations also state that “the Board may extend the 60-day period to opt out in exceptional circumstances and in the interests of justice.” 

It seems to me that, given the newness of the CCB system, the small number of cases filed to date, and the relative lack of awareness among most people that the CCB is a legitimate government forum (Jameson expressed such doubt herself), the “interests of justice” may well dictate a more flexible approach, at least at the outset of the CCB’s operations. 

The CCB has demonstrated an extraordinary willingness to offer helpful guidance, flexibility, and multiple opportunities to claimants, and so respondents may have expected a similar approach to help them through the process. At least in this case, we see a more stringent approach. An obvious takeaway for respondents, then, is to pay attention to notices about CCB claims and associated deadlines, and to opt out early in the process if they don’t want their case heard there. 

The Dawson case, however, does show that the CCB isn’t willing to let claimants make unsubstantiated claims against absent respondents. Though Joe Hand is surely familiar with the process and it would have been easy for the CCB to accept its barebones allegations against Dawson as true, the CCB made the case itself–with ample legal support–that even claims against absent respondents require claimants to make a real case. 

Overall, these are just two cases, so I don’t want to read too much into them. But it’s already looking like a large portion of CCB cases will be defaults (10 of the 18 final determinations to date, and more than half of the existing active cases are trending in that direction). So it’s good to keep an eye on how the CCB treats these types of cases, given the risks they pose for unwary and uninformed respondents. 

Authors Alliance 2023 Annual Report

Posted January 23, 2024

Authors Alliance is pleased to share our 2023 annual report, where you can find highlights of our work in 2023 to promote laws, policies, and practices that enable authors to reach wide audiences. In the report, you can read about how we’re helping authors meet their dissemination goals for their works, representing their interests in the courts, and otherwise working to advocate for authors who write to be read. 

Click here to read the full report.

Hachette v. Internet Archive: Amicus Briefs on Non-Commercial Use

Posted January 19, 2024

Our last post highlighted one of the amicus briefs filed in the Hachette v. Internet Archive lawsuit, which made the point that controlled digital lending serves important privacy interests for library readers. Today I want to highlight a second new issue introduced on appeal and addressed by almost every amicus: the proper way to assess whether a given use is “non-commercial.”

“Non-commercial” use is important because the first fair use factor directs courts to assess “the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes.” Before the district court, neither Internet Archive (IA) nor the amici who filed in support of IA paid much attention to whether IA’s use was commercial, I think because lending books for free to library patrons seemed to us a paradigmatic example of non-commercial use. It came as a shock, therefore, when the district court concluded that “IA stands to profit” from its use and that the use was therefore commercial. 

The Court’s reasoning was odd. While it recognized that IA “is a non-profit organization that does not charge patrons to borrow books and because private reading is noncommercial in nature,” the court concluded that because IA gains “an advantage or benefit from its distribution and use” of the works at issue, its use was commercial. Among the “benefits” that the court listed: 

  • IA exploits the Works in Suit without paying the customary price
  • IA uses its Website to attract new members, solicit donations, and bolster its standing in the library community.
  • Better World Books also pays IA whenever a patron buys a used book from BWB after clicking on the “Purchase at Better World Books” button that appears on the top of webpages for ebooks on the Website.

Although almost every amicus addressed the problems with this approach to “non-commercial” use, three briefs in particular added important additional context, explaining both why the district court was wrong on the law and why its rule would have dramatically negative implications for other libraries and nonprofit organizations. 

First, the Association of Research Libraries and the American Library Association, represented by Brandon Butler, make a forceful legal argument in their amicus brief about why the district court’s baseline formulation of commerciality (benefit without paying the customary price) was wrong: 

The district court’s determination that the Internet Archive (“IA”) was engaged in a “commercial” use for purposes of the first statutory factor is based on a circular argument that seemingly renders every would-be fair use “commercial” so long as the user benefits in some way from their use. This cannot be the law, and in the Second Circuit it is not. The correct standard is clearly stated in American Geophysical Union v. Texaco Inc., 60 F. 3d 913 (2d Cir. 1994), a case the district court ignored entirely.

ARL and ALA then go on to highlight numerous examples of appellate courts (including the Second Circuit) rejecting this approach, such as the Eleventh Circuit in the Georgia State e-reserves copyright lawsuit: “Of course, any unlicensed use of copyrighted material profits the user in the sense that the user does not pay a potential licensing fee, allowing the user to keep his or her money. If this analysis were persuasive, no use could qualify as ’nonprofit’ under the first factor.” 

Second was an amicus brief by law professor Rebecca Tushnet on behalf of Intellectual Property Law Scholars, explaining both why copyright law and fair use favor non-commercial uses, and how IA’s uses fall squarely within the public-benefit objectives of the law. The brief begins by highlighting the close connection between non-commercial use and the goals of copyright: 

The constitutional goal of copyright protection is to “promote the progress of science and useful arts,” Art. I, sec. 8, cl. 8, and the first copyright law was “an act for the encouragement of learning,” Cambridge University Press v. Patton, 769 F.3d 1232, 1256 (11th Cir. 2014). This case provides an opportunity for this Court to reaffirm that vision by recognizing the special role that noncommercial, nonprofit uses play in supporting freedom of speech and access to knowledge. 

The IP Professors Brief then goes on to highlight the many ways that Congress has indicated that library lending should be treated favorably because it furthers the objective of supporting learning, and how the court’s constrained reading of “non-commercial” actually conflicts with how that term is used elsewhere in the Copyright Act (for example, Sections 111, 114, and 118 for non-commercial broadcasters, or Section 1008 for non-commercial consumers who copy music). It then makes a strong case that the district court was not only mistaken, but that library lending should presumptively be treated as non-commercial. 

Finally, we see the amicus brief from the Wikimedia Foundation, Creative Commons, and Project Gutenberg, represented by Jef Pearlman and a team of students at the USC IP & Technology Law Clinic. Their brief highlights in detail the practical challenges that the district court’s approach to non-commercial use would pose for all sorts of online nonprofits. It explains how nonprofits that raise money will inevitably include donation buttons on pages with fair use content, rely on volunteer contributions, and engage in revenue-generating activities to support their work, which in some cases requires millions of dollars for technical infrastructure. The brief explains: 

The district court defined “commercial” under the first fair use factor far too broadly, inextricably linking secondary uses to fundraising even when those activities are, in practice, completely unrelated. In evaluating what constitutes commercial use, the district court misapplied several considerations and ignored other critical considerations. As a result, the district court’s ruling threatens nonprofit organizations who make fair use of copyrighted works. Adopting the district court’s approach would threaten both the processes of nonprofit fundraising and the methods by which educational nonprofits provide their services.

Licensing research content via agreements that authorize uses of artificial intelligence

Posted January 10, 2024
Photo by Hal Gatewood on Unsplash

This is a guest post by Rachael G. Samberg, Timothy Vollmer, and Samantha Teremi, professionals within the Office of Scholarly Communication Services at UC Berkeley Library. 

On academic and library listservs, there has emerged an increasingly fraught discussion about licensing scholarly content when scholars’ research methodologies rely on artificial intelligence (AI). Scholars and librarians are rightfully concerned that non-profit educational research methodologies like text and data mining (TDM) that can (but do not necessarily) incorporate usage of AI tools are being clamped down upon by publishers. Indeed, libraries are now being presented with content license agreements that prohibit AI tools and training entirely, irrespective of scholarly purpose. 

Conversely, publishers, vendors, and content creators—a group we’ll call “rightsholders” here—have expressed valid concerns about how their copyright-protected content is used in AI training, particularly in a commercial context unrelated to scholarly research. Rightsholders fear that their livelihoods are being threatened when generative AI tools are trained and then used to create new outputs that they believe could infringe upon or undermine the market for their works.

Within the context of non-profit academic research, rightsholders’ fears about allowing AI training, and especially non-generative AI training, are misplaced. Newly emerging content license agreements that prohibit usage of AI entirely, or charge exorbitant fees for it as a separately-licensed right, will be devastating for scientific research and the advancement of knowledge. Our aim with this post is to empower scholars and academic librarians with legal information about why those licensing outcomes are unnecessary, and to equip them with alternative licensing language that adequately addresses rightsholders’ concerns.

To that end, we will: 

  1. Explain the copyright landscape underpinning the use of AI in research contexts;
  2. Address ways that AI usage can be regulated to protect rightsholders, while outlining opportunities to reform contract law to support scholars; and 
  3. Conclude with practical language that can be incorporated into licensing agreements, so that libraries and scholars can continue to achieve licensing outcomes that satisfy research needs.

Our guidance is based on legal analysis as well as our views as law and policy experts working within scholarly communication. While your mileage or opinions may vary, we hope that the explanations and tools we provide offer a springboard for discussion within your academic institutions or communities about ways to approach licensing scholarly content in the age of AI research.

Copyright and AI training

As we have recently explored in presentations and posts, the copyright law and policy landscape underpinning the use of AI models is complex, and regulatory decision-making in the copyright sphere will have ramifications for global enterprise, innovation, and trade. A much-discussed group of lawsuits and a parallel inquiry from the U.S. Copyright Office raise important and timely legal questions, many of which we are only beginning to understand. But there are two precepts that we believe are clear now, and that bear upon the non-profit education, research, and scholarship undertaken by scholars who rely on AI models. 

First, as the UC Berkeley Library has explained in greater detail to the Copyright Office, training artificial intelligence is a fair use—and particularly so in a non-profit research and educational context. (For other similar comments provided to the Copyright Office, see, e.g., the submissions of Authors Alliance and Project LEND). Maintaining its continued treatment as fair use is essential to protecting research, including TDM. 

TDM refers generally to a set of research methodologies reliant on computational tools, algorithms, and automated techniques to extract revelatory information from large sets of unstructured or thinly-structured digital content. Not all TDM methodologies necessitate usage of AI models in doing so. For instance, the words that 20th century fiction authors use to describe happiness can be searched for in a corpus of works merely by using algorithms looking for synonyms and variations of words like “happiness” or “mirth,” with no AI involved. But to find examples of happy characters in those books, a researcher would likely need to apply what are called discriminative modeling methodologies that first train AI on examples of what qualities a happy character demonstrates or exhibits, so that the AI can then go and search for occurrences within a larger corpus of works. This latter TDM process involves AI, but not generative AI; and scholars have relied non-controversially on this kind of non-generative AI training within TDM for years. 
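As a concrete, purely illustrative sketch of the non-AI variant described above, the synonym search can be done with plain string matching and no model training. The corpus and synonym list below are hypothetical examples, not drawn from any real study:

```python
# Minimal sketch of non-AI TDM: counting occurrences of
# happiness-related words in a corpus with plain string matching.
# The synonym list and corpus are illustrative assumptions.
import re

SYNONYMS = {"happiness", "mirth", "joy", "delight", "cheer"}

def count_happiness_terms(texts):
    """Count happiness-related word occurrences in each text."""
    word_pattern = re.compile(r"[a-z]+")
    counts = {}
    for title, text in texts.items():
        words = word_pattern.findall(text.lower())
        counts[title] = sum(1 for w in words if w in SYNONYMS)
    return counts

corpus = {
    "Novel A": "Her mirth filled the room; such joy was rare.",
    "Novel B": "The rain continued, grey and unremarkable.",
}
print(count_happiness_terms(corpus))  # {'Novel A': 2, 'Novel B': 0}
```

The AI-assisted variant described in the text would instead train a classifier on labeled examples of happy characters and then apply it across the corpus, a fundamentally different step because it requires training a model on the works themselves.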

Previous court cases like Authors Guild v. HathiTrust, Authors Guild v. Google, and A.V. ex rel. Vanderhye v. iParadigms have addressed fair use in the context of TDM and confirmed that the reproduction of copyrighted works to create and conduct text and data mining on a collection of copyright-protected works is a fair use. These cases further hold that making derived data, results, abstractions, metadata, or analysis from the copyright-protected corpus available to the public is also fair use, as long as the research methodologies or data distribution processes do not re-express the underlying works to the public in a way that could supplant the market for the originals. 

For the same reasons that TDM processes constitute fair use of copyrighted works in these contexts, the training of AI tools to do that text and data mining is also fair use. This is in large part because the purpose is transformative in the same way (under Fair Use Factor 1) and because, just like “regular” TDM that doesn’t involve AI, AI training does not reproduce or communicate the underlying copyrighted works to the public (which is essential to the determination of market supplantation under Fair Use Factor 4). 

But, while AI training is no different from other TDM methodologies in terms of fair use, there is an important distinction to make between the inputs for AI training and generative AI’s outputs. The overall fair use of generative AI outputs cannot always be predicted in advance: The mechanics of generative AI models’ operations suggest that there are limited instances in which generative AI outputs could indeed be substantially similar to (and potentially infringing of) the underlying works used for training; this substantial similarity is possible typically only when a training corpus is rife with numerous copies of the same work. And a recent case filed by the New York Times addresses this potential similarity problem with generative AI outputs.  

Yet, training inputs should not be conflated with outputs: The training of AI models by using copyright-protected inputs falls squarely within what courts have already determined in TDM cases to be a transformative fair use. This is especially true when that AI training is conducted for non-profit educational or research purposes, as this bolsters its status under Fair Use Factor 1, which considers both transformativeness and whether the act is undertaken for non-profit educational purposes. 

Were a court to suddenly determine that training AI was not fair use, and AI training was subsequently permitted only on “safe” materials (like public domain works or works for which training permission has been granted via license), this would curtail freedom of inquiry, exacerbate bias in the nature of research questions able to be studied and the methodologies available to study them, and amplify the views of an unrepresentative set of creators given the limited types of materials available with which to conduct the studies.

The second precept we uphold is that scholars’ ability to access the underlying content to conduct fair use AI training should be preserved with no opt-outs from the perspective of copyright regulation. 

The fair use provision of the Copyright Act does not afford copyright owners a right to opt out of allowing other people to use their works in any other circumstance, for good reason: If content creators were able to opt out of fair use, little content would be available freely to build upon. Uniquely allowing fair use opt-outs only in the context of AI training would be a particular threat for research and education, because fair use in these contexts is already becoming an out-of-reach luxury even for the wealthiest institutions. What do we mean?

In the U.S., the prospect of “contractual override” means that, although fair use is statutorily provided for, private parties like publishers may “contract around” fair use by requiring libraries to negotiate for otherwise lawful activities (such as conducting TDM or training AI for research). Academic libraries are forced to pay significant sums each year to try to preserve fair use rights for campus scholars through the database and electronic content license agreements that they sign. This override landscape is particularly detrimental for TDM research methodologies, because TDM research often requires use of massive datasets with works from many publishers, including copyright owners who cannot be identified or who are unwilling to grant such licenses. 

So, if the Copyright Office or Congress were to enable rightsholders to opt out of having their works fairly used for training AI for scholarship, then academic institutions and scholars would face even greater hurdles in licensing content for research. Rightsholders might opt out of allowing their work to be used for AI training fair uses, and then turn around and charge AI usage fees to scholars (or libraries)—essentially licensing back fair uses for research. 

Fundamentally, this undermines lawmakers’ public interest goals: It creates a risk of rent-seeking or anti-competitive behavior through which a rightsholder can demand additional remuneration or withhold granting licenses for activities generally seen as being good for public knowledge or that rely on exceptions like fair use. And from a practical perspective, allowing opt-outs from fair uses would impede scholarship by or for research teams who lack grant or institutional funds to cover these additional licensing expenses; penalize research in or about underfunded disciplines or geographical regions; and result in bias as to the topics and regions that can be studied. 

“Fair use” does not mean “unregulated” 

Although training AI for non-profit scholarly uses is fair use from a copyright perspective, we are not suggesting AI training should be unregulated. To the contrary, we support guardrails because training AI can carry risk. For example, researchers have been able to use generative AI like ChatGPT to solicit personal information by bypassing platform safeguards.

To address issues of privacy, ethics, and rights of publicity (which govern uses of people’s voices, images, and personas), we support the adoption of best practices, private ordering, and other regulation. 

For instance, as to best practices, scholar Matthew Sag has suggested preliminary guidelines to avoid violations of privacy and the right of publicity. First, he recommends that AI platforms avoid training their large language models on duplicates of the same work. This would reduce the likelihood that the models could produce copyright-infringing outputs (due to memorization concerns), and it would also lessen the likelihood that content containing potentially private or sensitive information would be output after having been fed into the training process multiple times. Second, Sag suggests that AI platforms engage in “reinforcement learning through human feedback” when training large language models. This practice could cut down on privacy or right-of-publicity concerns by involving human feedback at the point of training, instead of relying on filtering at the output stage. 
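Sag’s first guideline, avoiding training on duplicates of the same work, can be illustrated with a minimal, hedged sketch: dropping exact duplicates from a corpus by hashing normalized text. Real training pipelines use more sophisticated near-duplicate detection (for example, shingling or MinHash); this only shows the basic idea, and the documents are hypothetical:

```python
# Sketch of exact-duplicate removal before training: hash each
# document's whitespace- and case-normalized text, and keep only
# the first copy of each distinct document.
import hashlib

def deduplicate(documents):
    """Return documents with exact duplicates (after normalization) removed."""
    seen = set()
    unique = []
    for doc in documents:
        normalized = " ".join(doc.split()).lower()
        digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["The same work.", "the  same work.", "A different work."]
print(deduplicate(docs))  # ['The same work.', 'A different work.']
```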

Private ordering would rely on platforms or communities to implement appropriate policies governing privacy issues, rights of publicity, and ethical concerns. For example, the UC Berkeley Library has created policies and practices (called “Responsible Access Workflows”) to help it make decisions around whether—and how—special collection materials may be digitized and made available online. Our Responsible Access Workflows require review of collection materials across copyright, contracts, privacy, and ethics parameters. Through careful policy development, the Library applies an ethics-of-care approach when deciding whether to make collection content with ethical concerns available online. Even if content is not shared openly online, it remains available for researchers to use in person; we have simply decided not to make that content available in lower-friction digital formats. We aim to provide transparent information about our decision-making, and researchers must make informed decisions about how to use the collections, whether or not they are using them in service of AI.

And finally, concerning regulations, the EU has recently introduced an AI training framework that requires, among other things, the disclosure of source content, and grants content creators the right to opt out of having their works included in training sets, except when the AI training is being done for research purposes by research organizations, cultural heritage institutions, and their members or scholars. United States agencies could consider implementing similar regulations here. 

But from a copyright perspective, and within non-profit academic research, fair use in AI training should be preserved without the opportunity to opt out, for the reasons we discuss above. Such an approach would also be consistent with the distinction the EU has made for AI training in academic settings, as the EU’s Digital Single Market Directive bifurcates practices outside the context of scholarly research.

While we favor regulation that preserves fair use, it is also important to note that merely preserving fair use rights in scholarly contexts for training AI is not the end of the story in protecting scholarly inquiry. So long as the United States permits contractual override of fair uses, libraries and researchers will continue to be at the mercy of publishers aggregating and controlling what may be done with the scholarly record, even if authors dedicate their content to the public domain or apply a Creative Commons license to it. So in our view, the real work that should be done is pursuing legislative or regulatory arrangements like those of the approximately 40 other countries that have curtailed the ability of contracts to abrogate fair use and other limitations and exceptions to copyright within non-profit scholarly and educational uses. This is a challenging, but important, mission.

Licensing guidance in the meantime 

While the statutory, regulatory, and private governance landscapes are being addressed, libraries and scholars need ways to preserve usage rights for content when training AI as part of their TDM research methodologies. We have developed sample license language intended to address rightsholders’ key concerns while maintaining scholars’ ability to train AI in text and data mining research. We drafted this language to be incorporated into amendments to existing licenses that fail to address TDM, or into stand-alone TDM and AI licenses; however, it is easily adaptable into agreements-in-chief (and we encourage incorporating it there). 

We are certain our terms can continue to be improved upon over time or be tailored for specific research needs as methodologies and AI uses change. But in the meantime, we think they are an important step in the right direction.

With that in mind, it is important to understand that in contracts applying U.S. law, more specific language controls over general language. So even if a license agreement contains a clause preserving fair use, if it is later followed by a TDM clause that restricts how TDM can be conducted (and whether AI can be used), then that more specific language governs TDM and AI usage under the agreement. This means that libraries and scholars must be mindful when negotiating TDM and AI clauses, as they may be contracting themselves out of rights they would otherwise have had under fair use. 

So, how can a library or scholar negotiate sufficient AI usage rights while acknowledging the concerns of publishers? We believe publishers have attempted to curb AI usage because they are concerned about: (1) the security of their licensed products, and the fear that researchers will leak or release content behind their paywall; and (2) AI being used to create a competing product that could substitute for the original licensed product and undermine their share of the market. While these concerns are valid, they reflect longstanding fears over users’ potential generalized misuse of licensed materials in which they do not hold copyright. But publishers are already able to—and do—impose contractual provisions disallowing the creation of derivative products and the systematic sharing of licensed content with third parties, so additionally banning the use of AI is, in our opinion, unwarranted.

We developed our sample licensing language to precisely address these concerns by specifying in the grant of license that research results may be used and shared with others in the course of a user’s academic or non-profit research “except to the extent that doing so would substantially reproduce or redistribute the original Licensed Materials, or create a product for use by third parties that would substitute for the Licensed Materials.” Our language also imposes reasonable security protections in the research and storage process to quell fears of content leakage. 

Perhaps most importantly, our sample licensing language preserves the right to conduct TDM using “machine learning” and “other automated techniques” by expressly including these phrases in the definition for TDM, thereby reserving AI training rights (including as such AI training methodologies evolve), provided that no competing product or release of the underlying materials is made. 

The licensing road ahead

As legislation and standards around AI continue to develop, we hope to see express contractual allowance for AI training become the norm in academic licensing. Though our licensing language will likely need to adapt to and evolve with policy changes and research or technological advancements over time, we hope the sample language can now assist other institutions in their negotiations, and help set a licensing precedent so that publishers understand the importance of allowing AI training in non-profit research contexts. While a different legislative and regulatory approach may be appropriate in the commercial context, we believe that academic research licenses should preserve the right to incorporate AI, especially without additional costs being passed to subscribing institutions or individual users, as a fundamental element of ensuring a diverse and innovative scholarly record.

Join for the Launch of our Latest Legal Guide: Writing About Real People

Register here to join us on December 7, 2023 at 1pm ET/ 10am PT for the launch of our latest legal guide “Writing about Real People.”

Writing about real people can raise a number of complicated legal issues for authors. Laws governing defamation, privacy, and rights of publicity have a number of fact-specific rules, exceptions, and exceptions to exceptions that can be difficult to navigate without help. We’ve found that these issues can be an obstacle to creation for all types of authors, from bloggers to narrative nonfiction authors to historians, cultural anthropologists, and other scholarly authors. 

As part of our highly used series of guides on legal issues for authors, Authors Alliance has created a guide to writing about real people for nonfiction authors. This latest guide covers three main legal issues: false statements and portrayals (e.g., defamation), invasions of privacy, and rights of publicity and identity rights. The guide includes substantial practical guidance, addressing issues such as permission, documenting your research, and working with an IRB.

Join us on December 7 to learn more about the guide and what it covers, how you might use it in your work, and our plans for accompanying materials we will release in the near future, such as one-page summaries for quick reference.