Today, Authors Alliance joined Public Knowledge and four other civil society groups in urging Congress to amend the Journalism Competition and Preservation Act (“JCPA”) to clarify that the bill does not expand copyright protection to article links, and that authors and other internet users will not have to pay to link to articles or to use headlines and other snippets that fall within fair use.
The JCPA (H.R.1735 in the House and S.673 in the Senate) proposes to create a four-year “safe harbor” from antitrust law, allowing print, broadcast, and digital news companies to band together to negotiate compensation terms for their news stories with the largest online platforms. While the goal of the bill—to preserve a strong, diverse, and independent press—is commendable, the bill’s framework relies on a fundamental mischaracterization of U.S. copyright law. As currently drafted, there is a risk that the JCPA could be interpreted by courts to implicitly expand the scope of copyright.
As our letter explains, linking is outside the scope of copyright in the U.S.: merely linking to external content does not implicate any of a copyright owner’s exclusive rights. Furthermore, the brief snippets of content—such as headlines, images, or short excerpts—that often accompany links are minimal quotations of copyrighted material that have been consistently found to be fair uses under copyright law, and protection for these types of uses is mandated by the Berne Convention. These fair uses cannot be banned or substantially curtailed without running afoul of Supreme Court jurisprudence, the First Amendment, and multilateral international obligations.
To address these issues, our letter asks Congress to create a savings clause that makes it clear that copyright protections are not being expanded to include linking, or fair uses of snippets from the linked material. The full text of our letter is available here.
Authors Alliance is grateful to Argyri Panezi for this guest post. Panezi is an Assistant Professor at IE Law School where she teaches contracts, copyright law, and principles of LegalTech. Her current work focuses on copyright issues related to digital libraries, on law and AI (contractual and extra-contractual liability), and on legal technologies, specifically examining e-Justice developments within the EU. She is also a research fellow at the Digital Civil Society Lab at Stanford University, where she explores the notion of critical digital infrastructure as well as state and federal regulatory frameworks that govern ISPs in the context of public internet access, focusing on access for critical utilities during emergency situations. She holds a law degree from the University of Athens, an LL.M. from Harvard Law School, and a Ph.D. from the European University Institute.
On June 1, 2020, four publishing houses—Hachette Book Group, Inc., HarperCollins Publishers LLC, John Wiley & Sons, Inc., and Penguin Random House LLC—filed a copyright infringement action against the Internet Archive in the US District Court for the Southern District of New York over the Archive’s operation of what it called a “National Emergency Library” (NEL) after the first US shelter-in-place orders in response to the COVID-19 pandemic. On March 24, 2020, the Internet Archive had announced the launch of the temporary online NEL to support “emergency remote teaching, research activities, independent scholarship, and intellectual stimulation while universities, schools, training centers, and libraries were closed due to COVID-19.” In its announcement, the Archive called on authors and publishers to support the effort, which would ensure “temporary access to their work in this time of crisis.” It provided an opt-in option for authors who wanted to donate their book(s) to the NEL, and an opt-out option for authors who wanted to remove their book(s) from the NEL.
The NEL collection ceased operation on June 16, 2020, and the Internet Archive returned to its previous system of controlled lending of copyrighted works (on Controlled Digital Lending (CDL), see previous posts here and here). Even though the operation of the NEL was limited in time, debate about its propriety continues and has wider implications for libraries’ multiple roles during and beyond emergencies. Stakeholders’ reactions to the NEL have been mixed. The Association of American Publishers, for example, issued a public statement opposing the Internet Archive’s NEL initiative. Meanwhile, the Archive has published a number of stories from librarians, educators, parents, and researchers endorsing the initiative.
The pending case, Hachette v. Internet Archive, introduces a new dimension to existing debates around electronic access to library material, particularly around e-lending, raising at least two important questions: Did the emergency created by the COVID-19 shutdowns introduce new market failures as regards access to critical educational and research material, or as regards access to library works in general—or do these emergencies merely highlight possible already-existing failures? Furthermore, can emergencies justify additional exceptions to copyright laws covering electronic access to library material, and if so, under what circumstances?
In my recent article, A Public Service Role for Digital Libraries: The Unequal Battle Against (Online) Misinformation Through Copyright Law Reform and the Emergency Electronic Access to Library Material (forthcoming, 31 Cornell J.L. & Pub. Pol’y (2021)), I examine the ongoing Hachette v. Internet Archive litigation, placing it in the context of earlier US copyright case law that deals with the digitization or the making available of copyrighted works for educational, research, and other purposes (notably: Authors Guild v. Google, Authors Guild v. HathiTrust, and Cambridge University Press v. Becker). There is also a global debate focusing on similar issues, apparent, for example, in similar cases brought before courts in Europe (Technische Universität Darmstadt v. Eugen Ulmer KG and Vereniging Openbare Bibliotheken v. Stichting Leenrecht), India (University of Oxford v. Rameshwari Photocopy Service), and Canada (CCH Canadian Ltd v. Law Society of Upper Canada and the recent York University v. Access Copyright).
Taking the Hachette v. Internet Archive case as a starting point, my article reflects on the current and potential future role of copyright doctrine in preserving institutional functions of libraries and discusses how the COVID-19 emergency exposed new but also highlighted existing market failures.
Libraries’ public service role includes safeguarding and providing equal access to research, educational material, and credible information, including in the digital environment. Both online and offline, libraries serve as trusted and, in principle, neutral places dedicated to equalizing access to credible information and knowledge in societies with structural inequalities and biases. Particularly during this pandemic, libraries have embraced their institutional role and joined the fight against misinformation, including misinformation about the pandemic itself. The article examines the extent to which current US copyright law supports libraries in these increasingly pertinent functions and advocates for the copyright framework to provide enhanced support to libraries.
Authors Alliance, joined by the Library Copyright Alliance and the American Association of University Professors, is petitioning the Copyright Office for a new three-year exemption to the Digital Millennium Copyright Act (“DMCA”) as part of the Copyright Office’s eighth triennial rulemaking process. If granted, our proposed exemption would allow researchers to bypass technical protection measures (“TPMs”) in order to conduct text and data mining research on literary works that are distributed electronically and motion pictures. Last week, Authors Alliance responded to questions posed by the Copyright Office as it considers the merits of the proposed exemption following last month’s public hearing on the exemption.
Text and data mining (“TDM”) refers to automated analytical techniques aimed at analyzing digital text and data in order to generate information that reveals patterns, trends, and correlations in that text or data. TDM has great potential to enable groundbreaking research and contribute to the commons of knowledge. As a highly transformative use of copyrighted works done for purposes of research and scholarship, TDM fits firmly within the ambit of fair use. But TDM researchers are currently hindered by Section 1201 of the DMCA, which prohibits the circumvention of TPMs used by copyright owners to control access to their works. Section 1201 makes TDM research on texts and films time consuming and inefficient—and in some cases, impossible—working against the promotion of the progress of knowledge and the useful arts that copyright law has been designed to incentivize.
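To make the idea concrete, here is a toy sketch of the kind of pattern-finding that TDM involves: counting term frequencies across a small corpus. The documents and counts below are invented for illustration only; real TDM corpora contain thousands of digitized texts or films, and real analyses are far more sophisticated.

```python
from collections import Counter

# Hypothetical stand-in for a research corpus of digitized texts.
# These two "documents" are invented, not real library works.
corpus = {
    "doc_a": "the archive lends books and the archive scans books",
    "doc_b": "researchers mine text and researchers mine data",
}

def term_frequencies(text):
    """Count how often each word appears in a single document."""
    return Counter(text.split())

# Aggregate counts across the whole corpus to surface patterns
# that no single document reveals on its own.
totals = Counter()
for name, text in corpus.items():
    totals += term_frequencies(text)

print(totals.most_common(3))
```

Even this trivial sketch shows why corpus assembly matters: the analysis operates on the full text of every work at once, which is exactly the step that TPMs on e-books and films block.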
Following last month’s hearing, the Copyright Office asked proponents and opponents of the proposed exemption to: 1) describe minimum security measures eligible institutions should be required to use to secure corpora of literary works or motion pictures used for TDM research and 2) share views on potential regulatory language that would limit a researcher’s ability to view literary works or motion pictures included in corpora. In addition, opponents were given the opportunity to respond to changes proponents proposed in our reply comment to address opponents’ concerns.
Security Standards and Controls
With respect to security measures, our response describes the flexible process that information security and data management professionals at research institutions use to select and apply security controls to research data. This approach tracks the processes laid out in international standards and federal agency procedures. We explain why these risk assessment frameworks are superior to a globally applied fixed list of minimum security requirements and how they are consistent with the Office’s approach to information security in previous exemptions.
Our letter provides examples of common and effective security controls used in many research settings, including user authentication, encryption, event logging, and maintaining physical security of the resources housing the data. We recommend that the Office identify these controls as examples of reasonable security measures, while leaving room for information security departments and researchers to tailor the precise security controls to the specifics of the research corpus and the information system in which it is housed.
Prohibiting Researchers from Viewing Text and Images
With respect to the extent to which regulatory language should limit a researcher’s ability to view literary works or motion pictures included in corpora, we clarify that while researchers do not need this exemption for the purpose of viewing the full text or images of the works that they or their institutions have already obtained lawfully, researchers must be able to verify their research methods and research results. This verification requires that researchers have some ability to view corpus text or images. That ability is consistent with the research environments of both HathiTrust Data Capsules and Google Book Search, and it is consistent with fair use precedent.
Our letter explains why researchers need to view enough of a corpus to verify their methods and their findings. By way of example, if an algorithm tells the researcher that frame #133292 of a corpus copy has a high probability of being a scene of violence, and that frame corresponds to a scene in the film Pulp Fiction, the researcher would not watch a copy of the DVD or digital download in its original format to verify that finding. But at some point, either the researcher or a peer reviewer may need to locate and examine frame #133292, a designation that exists only in the corpus copy, to verify the algorithmic finding. Our response explains that a blanket prohibition on viewing text or images would comprehensively undermine TDM research relying on the exemption and would provide little added value or protection given the other restrictions in the proposed exemption. For this reason, although we do not believe an express viewability limitation is warranted, we recommend that if the Office chooses to include one, it follow the model of the HathiTrust Research Center’s Non-Consumptive Use Policy rather than impose an outright ban on viewability.
* * *
The Librarian of Congress is expected to issue a final decision on the proposed exemption in October 2021. We will keep our members and readers apprised of any updates on our proposed exemption as the process moves forward. We’re grateful to law students and faculty from the Samuelson Law, Technology & Public Policy Clinic at UC Berkeley Law School for their work supporting our petition for this new exemption.
The case involves a claim by Access Copyright, a Canadian copyright collective, which seeks to have York University comply with an interim tariff approved by the Copyright Board of Canada for works in its collection. In response, York University brought a counterclaim seeking a declaration that its guidelines for copying materials for education purposes constituted “fair dealing” under the Copyright Act of Canada (fair dealing is the Canadian analogue to fair use). The case raises the question of whether copyright collectives can force users to license content from them, even if the users prefer to comply with their copyright obligations in other ways.
At oral argument, we began by explaining to the court that Access Copyright does not represent the interests of all authors. Authors Alliance represents authors whose primary concern is their works having the greatest possible impact by reaching the largest possible audience. Unfortunately, the flawed approach to fair dealing taken by the courts harms these interests, and undermines our members’ efforts to support education and informed public discourse, by creating a chilling effect on the dissemination of copyrighted works. Our members’ dissemination goals are advanced by a robust approach to fair dealing.
We further explained how the fair dealing factors in this case were incorrectly dealt with because they were not anchored to specific instances of alleged infringement. The abstract nature of that inquiry was a result of the lower courts’ willingness to make a determination of infringement outside of an infringement action without the proper parties and necessary evidence. The trial court concluded that there were reproductions that entitled Access Copyright to royalties—that there was infringement—without identifying any particular reproduction that was not fair dealing.
One example of why this approach is problematic is the way the lower courts handled the “effect of the dealing” factor. That factor is intended to consider the market impact of the defendant’s use of the plaintiff’s work, that is, the effect on the sale of or royalties from that particular work. Instead, the lower courts looked to the effect on the market generally. This general market approach untethers the analysis from the economic interests of the specific authors of the works at issue, bringing in irrelevant evidence of copying by other institutions and of the impact of the copying on the sales of other works.
This was a mistake: the lower courts should have addressed only whether the reproduced work adversely affects or is likely to compete with the original work, not with the market generally. We asked the court to find that it was an error of law to consider this factor in aggregate and at a market-wide level, and to reaffirm that the correct approach to the effect of the dealing factor is an investigation into the effect of a specific dealing on a specific work.
Authors Alliance is grateful to Sana Halwani for skillfully representing our interests at oral argument, and to the entire Lenczner Slaght team, including Paul-Erik Veel, Jacqueline Chan, and Anna Hucman, for pro bono assistance with this intervention. We will keep readers updated on the outcome of the case.
The case involves a claim by Access Copyright, a Canadian copyright collective, which seeks to have York University comply with an interim tariff approved by the Copyright Board of Canada for works in its collection. In response, York University brought a counterclaim seeking a declaration that its guidelines for copying materials for education purposes constituted “fair dealing” under the Copyright Act of Canada. The case raises the question of whether copyright collectives can force users to license content from them, even if the users prefer to comply with their copyright obligations in other ways.
As our factum explains, in the absence of specific allegations of copyright infringement from copyright owners, the lower courts should not have dealt with the issues of infringement and fair dealing. Because the lower courts did so without the proper plaintiffs, the result was a misguided approach to fair dealing that undermines users’ rights and the interests of many authors. Our factum also explains that even when their works are published under “all rights reserved” models, many of our members believe that their interests are best served with a robust application of fair use and fair dealing that does not unduly interfere with their dissemination goals, particularly in educational contexts.
On the issue of whether the approved tariffs are mandatory vis-à-vis users, our factum supports the Federal Court of Appeal’s finding that the approved tariffs bind copyright collectives but cannot be imposed on users as mandatory tariffs. We highlight some of the incoherent outcomes that would follow from the mandatory tariff theory, including the further marginalization of authors who are not a part of the collective’s repertoire.
The Supreme Court of Canada will hear oral arguments in the case on May 21, and Authors Alliance has been granted permission to present up to five minutes of oral arguments at the hearing. Authors Alliance is grateful to Lenczner Slaght attorneys Sana Halwani, Paul-Erik Veel, and Jacqueline Chan, as well as law clerk Anna Hucman, for pro bono assistance with this intervention.
Yesterday, Authors Alliance submitted a comment to the Copyright Office in response to a Notice of Inquiry regarding developing regulations to govern the copyright small claims procedure under the newly enacted CASE Act. In the past, we have spoken out in favor of a sensible copyright small claims process, but cautioned that the CASE Act could invite abuse and pose a high likelihood of harm to authors as both claimants and respondents. Now, Authors Alliance welcomes the opportunity to provide our feedback to the Copyright Office so it can work to ensure that the Copyright Claims Board (“CCB”), which will hear copyright small claims, is an efficient, effective, and respected forum, and moreover that it serves the individual authors and creators it is intended to benefit. We summarize our input below, and invite you to read our full comment for more detail.
One of the areas in which the Copyright Office requested input is the contents of the notices that will be sent to respondents when a claimant makes a claim against them before the CCB. The CASE Act mandates that these respondents be given an opportunity to opt out of the proceedings and instead have the claim proceed in federal court, and requires the CCB or an entity acting on its behalf to send respondents two separate notices of the claim against them. Regarding the contents of these notices, Authors Alliance advocated for clear, comprehensive, and informative notices which will convey to the recipients the nature of the CCB proceedings and the consequences of failing to opt out. We also requested that the CCB include information in these notices about why a respondent might want to remain in the proceeding or opt out. If a respondent fails to opt out or appear before the CCB, they may be subject to a default judgment that receives only limited review in federal court. It is our hope that, if the Copyright Office implements our suggestions, notices will not be ignored, since ignored notices could leave unwary respondents on the hook for damages.
The Copyright Office also sought guidance on the procedure for respondents who want to opt out of the CCB proceedings. Authors Alliance strongly urged the Office to make opting out as simple as possible for respondents of different levels of technical and legal sophistication. We suggested the Office allow respondents to opt out using a variety of methods: email, online form, over the phone, or by standard mail. We also encouraged the Office to develop a publicly available list of entities that intend to opt out, which would make the forum more efficient for claimants, who could check the list before filing to see whether the party they intend to pursue has already indicated an intent to opt out. Finally, we provided feedback on a special opt-out provision for libraries and archives, advocating for a robust opt-out provision that would allow libraries and archives to avoid the cost of defending excessive claims and spend their precious resources elsewhere to further the public good.
The Copyright Office also asked for input on whether and how to limit the number of cases that a given party can bring before the CCB over a calendar year. While Authors Alliance did not propose a specific threshold (which would be difficult if not impossible to do without knowing what the CCB’s caseload will look like), we did commend the Office for its attention to this matter, and suggested that it impose meaningful limits on the number of cases that can be brought by a given party, with the overall goal of deterring unscrupulous actors while keeping the forum open and accessible to those who most need it—individual creators and authors seeking to enforce or defend their rights. We also suggested that the CCB implement regulations to deter “repeat players” from bringing repeated and ill-founded claims.
Guidelines on Unsuitability for the Forum and Award of Statutory Damages
Authors Alliance also suggested that the Copyright Office develop sets of guidelines for the CCB to use when determining whether a particular claim is appropriate for the forum and guidelines for the award of statutory damages. The CASE Act requires that the CCB dismiss claims that are not suitable for CCB adjudication, but does not provide much in the way of guidance as to how to determine whether the CCB is a suitable forum for a given claim. We suggested that the Office set guidelines to help the CCB determine whether a case is suitable for the proceedings, encouraging the CCB to dismiss complicated, fact-specific claims, and hear only straightforward infringement claims. Complicated, fact-specific issues like fair use are not appropriate for this streamlined procedure, and guidelines to this effect would go a long way to making the forum efficient and accessible. Regarding statutory damages, we suggested that the Copyright Office issue guidelines for the CCB to use when deciding whether to award statutory damages. In general, damages awards should be proportional to the actual harm from the alleged infringement, rather than the maximum allowable damages under the statute ($15,000 per work and no more than $30,000 overall), which is often grossly disproportionate to the licensing revenue the claimant would have received if the alleged infringer had obtained a license to use the work. We also suggested that the CCB should be particularly mindful to avoid statutory damages in cases of noncommercial uses.
At Authors Alliance, we care about fair use because it helps authors meet their goals of seeing their works shared broadly, facilitating the use of copyrighted works in some circumstances for certain specific purposes such as research, commentary, and teaching. Fair use also allows authors to use existing materials to strengthen their own research, commentary, and scholarship. We offer short summaries and takeaways from these cases here to keep you apprised of the goings on in copyright and offer some guidance on how these decisions might impact fair use cases more directly related to authors of literary works in the future.
Earlier this month, the Supreme Court issued its long-awaited decision in Google v. Oracle, a case that has been percolating in the lower courts for years, which concerned the question of whether Google’s unauthorized use of computer code to which Oracle held the copyright constituted fair use. In the case, Google was appealing a ruling by the U.S. Court of Appeals for the Federal Circuit, which had held that Google’s use of APIs (also referred to as “declaring code”) was not fair use, despite a jury reaching the opposite conclusion. Google appealed to the Supreme Court on the question of whether APIs were protected by copyright at all and, if so, whether Google’s use of the code was fair.
In a decision by Justice Breyer, the Court skirted the question of whether APIs were copyrightable, but overturned the Federal Circuit’s finding of infringement, holding that Google’s use of the APIs was fair use. To come to this determination, the Court considered the four factors involved in fair use determinations. It found that declaring code was functional in nature: unlike the more creative “implementing code” involved in designing Android (and written by Google), the Court viewed the declaring code as equivalent to “building blocks.”
The Court also found that Google’s use was transformative in purpose and character because it used Oracle’s declaring code, as well as its own computer code, to create a new platform offering “a new collection of tasks operating in a distinct and different computing environment.” The Court stated that this was sufficiently transformative to overcome the commercial nature of Google’s endeavor—the creation of the massively popular Android operating system. The Court further found that Google used a small quantity of Oracle’s code relative to the total code it used to create Android, overcoming arguments that the 11,500 lines of Oracle’s code that Google used were a substantial amount. Finally, the Court considered whether Google’s Android usurped a market Oracle could have otherwise profited from, and decided that Oracle was not well-positioned to develop a mobile platform at the time and that Google had not usurped its market.
For authors who care about the widespread dissemination of their works and contributing to the commons of knowledge, Google’s fair use victory may seem a hopeful sign. But there is reason to believe that the holding will be of limited applicability in the future: It is not clear that it even applies to all software copyright issues. The decision—and importance of details such as the number of lines of code that were actually copied—shows how fact-sensitive fair use is. And the Court’s vision of transformativeness in the context of computer code is not an easy fit for other contexts, creating uncertainty as to whether and how the case will affect authors and creators in the future.
In late March, the Second Circuit Court of Appeals issued its opinion in The Andy Warhol Foundation v. Goldsmith, a case concerning a series of screenprinted images created by artist Andy Warhol depicting the late musical artist (formerly known as) Prince, reproduced in court documents and referred to as the Prince series. The first image of Prince that Warhol created was commissioned by Vanity Fair, and was based on a photograph taken by plaintiff Lynn Goldsmith, a renowned celebrity photographer. All of this was authorized pursuant to agreements between Goldsmith and Vanity Fair and between Warhol and Vanity Fair. The Warhol image that appeared in Vanity Fair included credit lines for both Warhol—the artist—and Goldsmith—the photographer of the work upon which Warhol’s was based. But Warhol did not stop there—he created fourteen additional works in the same style, comprising the Prince series that was the subject of the litigation.
In the case, Goldsmith sued the Warhol Foundation for infringement in the New York district court, alleging that the Prince series infringed on her copyright in the photograph of Prince. The district court found for the Warhol Foundation on fair use grounds, focusing on the transformative nature of Warhol’s silkscreen prints, which it believed “transformed Prince from a vulnerable, uncomfortable person,” as he was presented in Goldsmith’s photograph, “to an iconic, larger-than-life figure[.]” Warhol’s works also changed the image of Prince from a black and white, three-dimensional representation to two dimensional, colorful representations. Goldsmith appealed the ruling to the Second Circuit, which overturned the district court’s finding of fair use.
The Second Circuit disagreed with the district court that Warhol’s images were transformative. In its view, the district court improperly took on “the role of art critic,” making an artistic determination that Warhol’s works were transformative, rather than comparing the elements of the images and their purposes and characters. Under this approach, the Second Circuit concluded that Warhol’s works retained “essential elements” of Goldsmith’s photograph, and were functionally the same work with a new aesthetic.
Unlike the Google case, the narrow reading of transformativeness in Warhol v. Goldsmith can more readily be applied in other contexts where other creative works could be broken down into their elements and compared. The Warhol court was not the only one in recent months to constrain the so-called “transformative use test,” and courts are increasingly moving away from considering transformativeness subjectively, and towards examining elements of the two works more objectively. Yet the Google decision took a broader approach to fair use, and one which, as a Supreme Court case, will be more influential to courts across the country. The variations in treatment of fair use in general, and transformativeness specifically, show how fair use is a context-specific determination. Creators who would like to learn more about how fair use applies to the common situations they face can turn to our fair use guide for nonfiction authors and the best practices guides specific to other communities of creators.
Authors Alliance, joined by the Library Copyright Alliance and the American Association of University Professors, is petitioning the Copyright Office for a new three-year exemption to the Digital Millennium Copyright Act (“DMCA”) as part of the Copyright Office’s eighth triennial rulemaking process. If granted, our proposed exemption would allow researchers to bypass technical protection measures (“TPMs”) in order to conduct text and data mining research on literary works that are distributed electronically and motion pictures. Yesterday, Authors Alliance participated in public hearings hosted by the Copyright Office to consider the merits of the proposed exemption.
Text and data mining (“TDM”) refers to automated analytical techniques aimed at analyzing digital text and data in order to generate information that reveals patterns, trends, and correlations in that text or data. TDM has great potential to enable groundbreaking research and contribute to the commons of knowledge. As a highly transformative use of copyrighted works done for purposes of research and scholarship, TDM fits firmly within the ambit of fair use.
But TDM researchers are currently hindered by Section 1201 of the DMCA, which prohibits the circumvention of TPMs used by copyright owners to control access to their works. Section 1201 makes TDM research on texts and films time consuming and inefficient—and in some cases, impossible—working against the promotion of the progress of knowledge and the useful arts that copyright law has been designed to incentivize.
At yesterday’s hearing, the clinical team from the Samuelson Law, Technology & Public Policy Clinic at UC Berkeley Law School representing Authors Alliance testified about the details of the exemption and its immense value for TDM researchers. The team explained how section 1201 prevents those researchers from creating the corpora of works they need to discover new insights from text and data mining, interfering with their ability to generate new copyrighted works that add to our cultural understanding and advance human knowledge.
Specifically, clinic students Ziyad Alghamdi, Tait Anderson, and Erin Moore, and clinical supervisor Professor Erik Stallman, shared how section 1201’s prohibitions chill new research and hinder the progress of knowledge in at least three ways: 1) forcing researchers to limit datasets in a way that makes their findings less illuminating than they would otherwise be, 2) causing researchers to artificially constrain research to public domain texts, and 3) leading researchers to abandon potential TDM projects altogether.
Opponents of the exemption testifying in the hearing—representing publishers, the software industry, and content licensing organizations—raised concerns about whether TDM was fair use under copyright law, whether the proposed security measures for the TDM corpora were sufficient to allay their security concerns, and whether alternatives like pre-assembled TDM corpora would be adequate for TDM researchers.
Regarding fair use, Erin Moore testified that relevant case law firmly establishes TDM as a fair use, and that the fact that a use could have been licensed by the copyright holder does not mean the use is not a fair one. Moore also emphasized the noncommercial and educational nature of the uses TDM researchers seek to make under this exemption, classic features of fair use. To address opponents’ security concerns, Tait Anderson explained that “reasonable security measures,” as used in our petition, is concrete enough to require researchers to take precautions against public dissemination and unauthorized sharing, while not being so prescriptive that it cannot accommodate a wide range of TDM projects with different levels of sensitivity in the underlying data. On the topic of existing alternatives to the corpora TDM researchers seek to compile, Ziyad Alghamdi highlighted the limitations of pre-assembled TDM databases like HathiTrust, which are restricted both in the scope of works they contain and in how TDM research can be conducted using those works. TDM researchers are seeking this exemption in part because these databases are costly, difficult to use, and incomplete for answering research questions about contemporary literary works and films.
Other topics discussed during the lively hearing included whether the proposed exemption should align with similar carve-outs for TDM research in Europe and Japan, how sharing corpora with affiliated researchers for peer review purposes might work, and how and whether literary works and films should be analyzed differently for the purposes of the exemption. The Librarian of Congress is expected to issue a final decision on the proposed exemption in October 2021. We will keep our members and readers apprised of any updates on our proposed exemption as the process moves forward. We’re grateful to law students from the Samuelson Law, Technology & Public Policy Clinic at UC Berkeley Law School for their work supporting our petition for this new exemption.
On March 17, 2020, the American Library Association (“ALA”) recommended that public libraries across the country close in response to the challenges posed by the COVID-19 pandemic. That same day, publishing conglomerate Macmillan (one of the so-called “Big Five” publishers that dominate much of the trade book market) announced it would end a controversial embargo on sales of e-books to libraries, also stating its intention to temporarily lower prices on some library e-book licenses “to help expand libraries collections in these difficult times.”
One year later, many libraries remain shuttered or have scaled back their hours, services, and capabilities. Yet e-book lending has skyrocketed, as e-books can be checked out by patrons from the safety of their homes. Libraries have adapted to this increased demand in a variety of ways despite limited resources and budgets. By increasing digital offerings with a special emphasis on making e-book lending available to patrons, libraries have pivoted to serve the needs of a community forced by external circumstances to turn to the internet for information, culture, and human connection.
Library E-Book Lending in the “Before Time”
Prior to the start of the pandemic, a dispute between publishers and libraries on the subjects of e-book pricing and availability to patrons had been quietly simmering. Between 2018 and 2019, four of the Big Five publishers changed licensing terms and raised prices of e-books for libraries. And the bookselling giant Amazon, which has launched its own publishing operations under the name “Amazon Publishing,” has taken an even harsher approach to e-book library lending: it refuses to sell its titles to libraries altogether. In a statement to the Washington Post, a representative from Amazon Publishing stated that it was “not clear to us that current digital library lending models fairly balance the interests of authors and library patrons[.]”
In general, libraries are able to lend e-books because they acquire licenses to do so. Typically, a copy can be checked out by only one patron at a time and only for a set number of checkouts (with 26 and 52 being most common), and the licenses may also be of limited duration, typically one to two years. Moreover, libraries pay up to five times more for e-books than consumers do. This pricing reflects the fact that a library lends each e-book out multiple times, to multiple end readers, rather than to the single user who buys an e-book from Amazon or the iBooks store. But libraries are typically charged the same price for physical books as consumers are, creating an imbalance in access across the two formats. This imbalance has become all the more salient during the pandemic due to the limitations on access to physical books and the budgetary constraints felt around the country.
By 2018, 90% of American libraries offered digital loans. As library e-book lending increased in popularity, publishers argued that it led to reduced profits. In 2019, Macmillan revealed that its revenue per library e-book read was down to “two dollars and dropping,” apparently “a small fraction” of what it makes on consumer purchases. Macmillan and other large publishers complained that the “frictionless” nature of e-book lending means that readers can borrow e-books with nearly the same ease as purchasing them. But there is reason to believe the fear that library e-book lending hurts e-book sales is ill-founded: in the first 10 months of 2020, when library e-book checkouts began to increase dramatically, the Association of American Publishers reported that e-book sales had increased by over 16% rather than dropping as more readers turned to library e-books.
Library E-Book Lending During the Pandemic
During the pandemic, library e-book lending increased manifold across the country. In April 2020, the Congressional Research Service reported that demand for e-books (both from libraries and readers who purchased e-books) had increased significantly, and that libraries and organizations were searching for lending models to address this increased demand. OverDrive, the nation’s leading e-book lending platform and maker of the “Libby” library lending app, saw checkouts increase by over 50% during the early months of the pandemic, and many individual library systems similarly saw large increases in e-book checkouts. New library partnerships with hoopla, another leading lending platform, have resulted in a 20% increase in membership for the platform. At the most basic level, this uptick in demand is not difficult to understand: without access to physical library spaces, e-book lending became for many patrons the best option to continue to access works at their local libraries.
To keep up with the increasing demand for e-book loans and better meet patrons where they are, libraries have adapted their programs and procedures to make e-book checkouts more accessible. Libraries began by investing in more e-book licenses and increasing spending on “digital resources.” As the pandemic progressed, libraries around the country began allowing patrons to apply for and obtain library cards online so that new patrons could access e-book offerings. Library systems have also increased investments in new e-book licensing models, such as the “concurrent use model,” which allows libraries to license a “bundle” of loans that do not expire in order to meet high demand. This model is particularly attractive for public school students, and it has been used to facilitate access to texts during remote learning. Another lending model that has increased in popularity during the pandemic is the deployment of “skip-the-line” or “lucky” copies of new and popular titles. This system allows patrons to check out an e-book for a shorter checkout window while avoiding the long waitlists that can plague popular titles available for regular checkout. And this summer, libraries supported patrons grappling with racial injustice following the killing of George Floyd and the protests across the country by partnering with OverDrive to offer extended checkouts for books on anti-racism.
Publishers have also adapted their e-book license terms to be more library- and reader-friendly, recognizing the importance of library lending for the American public. By the end of March 2020, all of the Big Five publishers had announced relaxations of their e-book license terms, reducing prices on e-books for libraries by up to 50% and developing “cost per circulation” catalogues that allowed libraries to pay fees per e-book loan for certain titles rather than requiring an upfront payment for a license of limited duration. But these measures were largely intended to be temporary to help libraries struggling to meet their patrons’ needs during the pandemic, and where library e-book lending will go from here is uncertain.
An Uncertain Future for Library E-Book Lending
While progress has been made towards making knowledge and culture more accessible through relaxing barriers to entry for e-book library lending, it is unclear whether publishers and other intermediaries will return to the state of play prior to the pandemic.
Recognizing the need for fair and balanced license terms for library e-books, several states have introduced legislation mandating that publishers offer libraries the e-books that are available to retail consumers, and do so on “reasonable terms.” In Maryland, such a bill was recently approved unanimously by the state legislature and is currently awaiting final approval by the governor. Amazon Publishing, which until recently refused to budge on its ban on selling e-books to libraries, is reportedly in talks with the Digital Public Library of America to make Amazon Publishing titles available to libraries across the country through DPLA’s lending platform. ReadersFirst, a library organization that advocates for library users’ ability to use loaned e-books in the way they use print books, is optimistic that other publishers may follow suit and work to make their e-books more accessible to libraries and their patrons.
Authors Alliance, joined by the Library Copyright Alliance and the American Association of University Professors, is petitioning the Copyright Office for a new three-year exemption to the Digital Millennium Copyright Act (“DMCA”) as part of the Copyright Office’s eighth triennial rulemaking process. If granted, our proposed exemption would allow researchers to bypass technical protection measures (“TPMs”) in order to conduct text and data mining research on electronically published literary works and on motion pictures. This week, we responded to commenters who opposed the petition for this exemption.
Text and data mining (“TDM”) refers to automated analytical techniques aimed at analyzing digital text and data in order to generate information that reveals patterns, trends, and correlations in that text or data. TDM has great potential to enable groundbreaking research and contribute to the commons of knowledge. As a highly transformative use of copyrighted works done for purposes of research and scholarship, TDM fits firmly within the ambit of fair use.
But TDM researchers are currently hindered by Section 1201 of the DMCA, which prohibits the circumvention of TPMs used by copyright owners to control access to their works. Section 1201 makes TDM research on texts and films time-consuming and inefficient—and in some cases, impossible—working against the promotion of the progress of knowledge and the useful arts that copyright law is designed to incentivize. What’s more, Section 1201’s prohibitions force some TDM scholars to focus on works first published before 1926, which are in the public domain. Because authorship was far less diverse then than it is today, focusing TDM on pre-1926 texts privileges white male voices rather than being representative of the authors contributing to the commons of knowledge today. For these reasons, our petition and supporting comments ask the Librarian of Congress to grant a new exemption to Section 1201’s anti-circumvention prohibitions that would allow researchers to bypass TPMs on e-books and films for the purpose of conducting TDM research.
Last month, four comments were submitted in opposition to our proposed new exemption, raising concerns about the scope of activities and works that would be covered by the exemption, the intended beneficiaries of the exemption, and security measures for databases of decrypted copies of copyrighted works.
This week, we responded to these comments, explaining that the concerns about the scope of activities and works covered could be addressed by clarifying the bounds of the exemption. We explained that the exemption is intended to cover the use of text and data mining techniques for purposes of scholarly research and teaching only. With regard to the scope of the works covered, we specified that “literary works,” as used in our petition, would exclude computer programs. Both of these clarifications were made with an aim toward allaying commenters’ concerns about the exemption’s breadth.
We also clarified that the intended beneficiaries of the exemption were “researcher[s] affiliated with a nonprofit library, archive, museum, or institution of higher education[,]” explaining that the exemption’s proponents were not commercial actors, nor were the other intended beneficiaries. Finally, we addressed commenters’ security concerns by explaining that the exemption will require researchers to take “reasonable security measures” to ensure that there is no unauthorized access, noting that the requirement of institutional affiliation will facilitate such security measures.
Next month, we anticipate participating in public hearings hosted by the Copyright Office to consider the merits of the proposed exemptions. We look forward to continuing to work with opposition commenters to address their concerns and with the Copyright Office as it evaluates our petition for this new exemption to facilitate TDM research.
The Librarian of Congress is expected to issue a final decision on the proposed exemption in October 2021. We will keep our members and readers apprised of any updates on our proposed exemption as the process moves forward. We’re grateful to law students from the Samuelson Law, Technology & Public Policy Clinic at UC Berkeley Law School for their work supporting our petition for this new exemption.