Author Archives: Dave Hansen

Antitrust Lawsuit Filed Against Large Academic Publishers

Posted September 17, 2024

On September 12, a San Francisco-based law firm filed an antitrust lawsuit on behalf of UCLA professor Lucina Uddin against six prominent academic publishers and the trade association that represents them: Elsevier, John Wiley & Sons, Sage Publications, Springer Nature, Taylor & Francis, Wolters Kluwer, and the International Association of Scientific, Technical, and Medical Publishers (“STM”). The suit is brought on behalf of a class that it defines as “All natural persons residing in the United States who performed peer review services for, or submitted a manuscript for publication to, any of the Publisher Defendants’ peer-reviewed journals from September 12, 2020 to the present.” The complaint lists just one claim for relief: that “Publisher Defendants and their co-conspirators entered into and engaged in unlawful agreements in restraint of the trade and commerce described above in violation of Section 1 of the Sherman Act, 15 U.S.C. § 1.” 

To support this claim, the plaintiff makes three key allegations: namely, that the publishers have illegally agreed among themselves to abide by:

  1. a “Single Submission Rule,” where researchers are only allowed to submit a manuscript to one journal for consideration unless the journal rejects it;
  2. an “Unpaid Peer Review Rule,” where journals implement policies to not compensate peer reviewers for their labor; and
  3. a “Gag Rule,” where researchers are not allowed to share or discuss their manuscript from the time they submit it to a journal for consideration until the journal publishes it.

Why would any of these actions constitute an antitrust violation? We thought a little background could be helpful: 

To understand this lawsuit, we must first consider the purpose of U.S. antitrust law. The fundamental goal of antitrust law is to encourage competition and ultimately to promote consumer welfare. As the Supreme Court has explained, “Congress designed the Sherman Act as a consumer welfare prescription.”

Section 1 of the Sherman Antitrust Act does this by prohibiting “[e]very contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade.” This generally requires proving two things: (1) some sort of agreement or business arrangement, and (2) that this agreement is “in restraint of trade,” i.e., unreasonably harmful to competition.  

Proving an agreement can sometimes be a complicated factual question, though often there are good clues, especially when joint activity is coordinated through a trade association (antitrust lawyers love to quote Adam Smith on trade associations: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.”). In this case, the plaintiff says that the agreements are so obvious that they are in fact published openly in several portions of STM’s “International Ethical Principles for Scholarly Publication,” which each publisher then implements and enforces.

Proving the second part, that the agreement is in restraint of trade, i.e., unreasonably harmful to competition, can be more complicated. The courts have developed three analytical frameworks for evaluating whether conduct harms competition in this context:

  1. A “per se” rule for agreements that are “nakedly” anticompetitive. Examples include agreements to fix prices (what’s alleged here, at least with respect to payment for peer review), bid-rigging, agreements to divide markets, and a few other kinds of less common agreements. 
  2. A “rule of reason” test—which applies in most cases—that weighs the pro-competitive effects of the agreement against anti-competitive effects and prohibits agreements only when the anti-competitive harms outweigh the benefits.
  3. “Intermediate” or “quick-look” scrutiny, which covers a small number of cases in which agreements look suspicious on their face but are not so obviously anticompetitive as to fall under the “per se” rule. 

The plaintiff’s complaint claims that the publishers’ agreements violate the Sherman Act regardless of which of these three tests applies. But often, some of the most significant battles in antitrust lawsuits are over which standard applies, since the costs of litigating a suit can change dramatically depending on which is used. If the court accepts that the “per se” standard applies, the plaintiff likely wins. If the “quick-look” rule applies, the burden is on the defendant to show that its conduct is not anticompetitive. But if the “rule of reason” standard applies, the suit will likely involve extensive discovery, expert witnesses, and other factual evidence about what exactly the market is, whether the defendants had sufficient market power to negatively affect competition, and whether the agreement would negatively or positively affect competition.

In these cases, defining the relevant market is often crucial. In most antitrust cases, defining the relevant market involves identifying substitutes for the product under review. On this point, the plaintiff argues:  

 “Publication by peer-reviewed journals is a relevant antitrust market. For scholars who seek to communicate their scientific research, there is no adequate substitute for publication in a peer-reviewed journal. Peer-reviewed journals establish the validity of scientific research through the peer review process, communicate that research to the scientific community, and avoid competing claims to the same scientific discovery.”

For market definition, it is important to include only close substitutes and exclude distant ones. For academic publishing, even though there are many ways authors can share their manuscripts—from emailing their colleagues, to posting on their personal websites, to publishing with upstart journals—the fact remains that the journals in question are the primary means of dissemination and are the most heavily read and cited. So at first glance, the complaint seems realistic in its formulation of the market at issue.

The complaint then goes on to argue that the publishers hold significant power in this market and misuse it, touching on familiar themes such as how these academic publishers extract significant profits, charge high rates for access and increasingly high fees for authors to publish openly, and so on. The plaintiff alleges that this market power has allowed the publishers to make agreements amongst themselves (the three allegations noted above) that let them maximize profits while maintaining that market power. It’s true that there may be other explanations for these practices; indeed, some authors may be proponents of some of them, for example the single-submission rule. But even if other explanations exist, the allegation is that, under the current arrangement agreed through STM, no member publisher will even try to compete or develop alternative approaches, ones that might drive up pay for peer reviewers, compete for placement of papers, and so on.

How this lawsuit will turn out is hard to predict. It flags some very problematic practices imposed on the academic publishing industry by prominent publishers, but there are many other problems with the industry not discussed in the complaint. We’ve long thought that the public-interest nature of academic research and publishing sits uneasily with commercial publishers who have strong incentives to maximize profits. Of course, even for-profit firms are expected to operate within the law; profit-maximizing to the point of adopting anticompetitive practices is fundamentally at odds with their essential social responsibilities.

Authors Alliance and SPARC Supporting Legal Pathways to Open Access for Scholarly Works

Posted August 27, 2024

Authors Alliance and SPARC are excited to announce a new collaboration to address critical legal issues surrounding open access to scholarly publications. 

One of our goals with this project is to clarify legal pathways to open access in support of federal agencies working to comply with the Memorandum on “Ensuring Free, Immediate, and Equitable Access to Federally Funded Research” (the “Nelson Memo”), which was issued by the White House’s Office of Science and Technology Policy in 2022. For more than a decade, federal open access policy was based on an earlier memo instructing federal agencies with research and development budgets over $100 million to make their grant-funded research publicly accessible for free online. The Nelson Memo, drawing from lessons learned during the COVID-19 pandemic, provides important updates to the prior policy. Among the key changes are extending the requirements to all agencies, regardless of budget, and eliminating the 12-month post-publication embargo period on articles.

The Nelson Memo raises important legal questions for agencies, universities, and individual researchers to consider. To help ensure smooth implementation of the Nelson Memo, we plan to produce a series of white papers addressing these questions. For example, a central issue is the nature and extent of the pre-existing license, known as the “Federal Purpose License,” which all federal grant-making agencies have in works produced using federal funds.  The white papers will outline the background and history of the License, and also address commonly raised questions, including whether the License would support the application of Creative Commons or other public licenses; possible constitutional or statutory obstacles to the use of the License for public access; whether the License may apply to all versions of a work; and whether the use of the License for public access would require modification of university intellectual property policies. 

In addition to the white paper series, we plan to convene a group of experts to update the SPARC Author Addendum. The Addendum was created in 2007 and has been an extremely useful tool in educating authors on how to retain their rights, both to provide open access to their scholarship and to allow for wide use of their work. However, in the nearly two decades since its creation, models for open access and scholarly publishing have changed dramatically. We aim to update the Addendum to more closely reflect the present open access landscape and to help authors better achieve their scholarship goals.

A final piece of the project is to develop a framework for universities looking to recover rights for faculty in their works, particularly backlist and out-of-print books that are unavailable in electronic form. Though the open access movement has made significant strides in advancing free availability and reuse of scholarly articles, that progress has generally not extended to books and other monographic works, in part because of the non-standard and often complicated nature of book publishing licenses. It has also not done as much to open backfile access to older journal articles. We think a framework for identifying opportunities to recover rights and relicense them under an open access license will help advance open access of these works.

Eric Harbeson

The project will be spearheaded by Eric Harbeson, who joined the Authors Alliance this week as Scholarly Publications Legal Fellow. Eric is a recent graduate of the University of Oregon School of Law. Prior to law school, Eric had a dual career as a librarian/archivist and a musicologist. Eric did extensive work advocating for libraries’ and archives’ copyright interests, especially with respect to preservation of music and sound recordings. Eric’s publications include a well-regarded report on the Music Modernization Act, as well as two scholarly music editions. Eric can be reached at eric@authorsalliance.org.

Clickbait arguments in AI Lawsuits (will number 3 shock you?)

Posted August 15, 2024


The booming AI industry has sparked heated debates over what AI developers are legally allowed to do. So far, we have learned from the US Copyright Office and the courts that AI-created works are not protectable unless they are combined with human authorship.

As we monitor two dozen ongoing lawsuits and regulatory efforts addressing various aspects of AI’s legality, we see legitimate legal questions that must be resolved. However, we also see some prominent yet flawed arguments being used to inflame discussions, particularly by publisher-plaintiffs and their supporters. For now, let’s focus on some clickbait arguments that sound appealing but are fundamentally baseless.

Will AI doom human authorship?

Based on current research, AI tools can actually help authors improve their creativity and productivity, as well as the longevity of their careers.

When AI tools such as ChatGPT first appeared online, many leading authors and creators publicly endorsed them as useful tools, like other tech innovations that came before. At the same time, many others claimed that authors and creators of lesser caliber would be disproportionately disadvantaged by the advent of AI.

This intuition-driven hypothesis, that AI will be the bane of average authors, has so far proved to be misguided.

We now know that AI tools can greatly help authors during the ideation stage, especially for less creative authors. According to a study published last month, AI tools had minimal impact on the output of highly creative authors, but were able to enhance the works of less imaginative authors. 

AI can also serve as a readily-accessible editor for authors. Research shows that AI enhances the quality of routine communications. Without AI-powered tools, a less-skilled person will often struggle with the cognitive burden of managing data, which limits both the quality and quantity of their potential output. AI helps level the playing field by handling data-intensive tasks, allowing writers to focus more on making creative and other crucial decisions about their works. 

It is true that entirely AI-generated works of abysmal quality are available for purchase on some platforms, and some of these works use human authors’ names without authorization. These AI-generated works may infringe on authors’ right of publicity, but they do not present commercially viable alternatives to books authored by humans. Readers prefer higher-quality works produced with human supervision and intervention (provided that digital platforms do not act recklessly toward the human authors from whom they generate huge profits).

Are lawsuits against AI companies brought with authors’ best interest in mind? 

In the ongoing debate over AI, publishers and copyright aggregators have suggested that they brought these lawsuits to defend the interests of human authors. Consider the New York Times, for example: in its complaint against OpenAI, the Times describes its operations as “a creative and deeply human endeavor” (¶31) that necessitates “investment of human capital” (¶196). The Times argues that OpenAI has built its innovation on the stolen hard work and creative output of journalists, editors, photographers, data analysts, and others—an argument contrary to the position the Times once took in New York Times v. Tasini, that authors’ rights must take a backseat to the Times’ financial interests in new digital uses.

It is also hard to believe that many of the publishers and aggregators are on the side of authors when we look at how they have approached licensing deals for AI training. These deals can be extremely profitable for the publishers. For example, Taylor and Francis sold AI training data to Microsoft for $10 million. John Wiley and Sons earned $23 million from a similar deal with an undisclosed tech company. Though we don’t have the details of these agreements, it is easy to surmise that, in return for the money received, the publishers will not harass the AI companies with future lawsuits. (See our previous blog post about these licensing deals and what you can do as an author.) It is ironic how an allegedly unethical and harmful practice quickly becomes acceptable once the publishers are profiting from it.

How much of the millions of dollars changing hands will go to individual authors? Limited data exist. We know that Cambridge University Press, a good-faith outlier, is offering authors 20% royalties if their work is licensed for AI training. Most publishers and aggregators are entirely opaque about how authors are to be compensated in these deals. Take the Copyright Clearance Center (CCC), for example: it offers no information about how individual authors are consulted or compensated when their works are sold for AI training under CCC’s AI training license.

This is by no means a new problem for authors. We know that traditionally published book authors receive royalties of around 10% from their publishers: a little under $2 per copy for most books. On an ebook, authors receive a similar amount for each “copy” sold. This modest amount only starts to look generous when compared to academic publishing, where authors increasingly pay publishers to have their articles published in journals. Journal authors receive zero royalties, despite the publishers’ growing profits.

Even before the advent of AI technology, most authors were struggling to make a living from writing alone. According to a 2018 Authors Guild survey, the median income for full-time writers was $20,300, and for part-time writers, a mere $6,080. Fair wages and equitable profit sharing are issues that need to be settled between authors and publishers, even if publishers try to scapegoat AI companies.

It’s worth acknowledging that it’s not just publishers and copyright industry organizations filing these lawsuits. Many of these ongoing lawsuits have been filed as class actions, with the plaintiffs claiming to represent a broad class of people who are similarly situated and (they allege) hold similar views. Most notably, in Authors Guild v. OpenAI, the Authors Guild and its named individual plaintiffs claim to represent all fiction writers in the US who have sold more than 5,000 copies of a work. There is also a case in which the plaintiff claims to represent all copyright holders of non-fiction works, including authors of academic journal articles, a suit the Authors Guild supports, and several others in which an individual plaintiff asserts the right to represent virtually all copyright holders of any type.

As we (along with many others) have repeatedly pointed out, many authors disagree with the publishers’ and aggregators’ restrictive view of fair use in these cases, and don’t want or need a self-appointed guardian to “protect” their interests. We saw the same over-broad class designation in the Authors Guild v. Google case, which caused many authors to object, including many of our own 200 founding members.

Does respect for copyright and human authors’ hard work mean no more AI training under US copyright law?

While we wait for courts to figure out the key questions on infringement and fair use, let’s take a moment to remember what copyright law does not regulate.

Copyright law in the US exists to further the Constitutional goal to “promote the Progress of Science and useful Arts.” In 1991, the Supreme Court held in Feist v. Rural Telephone Service that copyright cannot be granted solely based on how much time or energy authors have expended. “Compensation for hard work” may be a valid ethical discussion, but it is not a relevant topic in the context of copyright law.

Publishers and aggregators preach that people must “respect copyright,” as if copyright were synonymous with the exclusive rights of the copyright holder. This is inaccurate and misleading. To safeguard freedom of expression, copyright is designed to embody not only rightsholders’ exclusive rights but also many exceptions and limitations to those rights. Similarly, there is no sound legal basis to claim that authors must have absolute control over their own work and its message. Knowledge and culture thrive because authors are permitted to build upon and reinterpret the works of others.

Does this mean I should side with the AI companies in this debate?

Many of the largest AI companies exhibit troubling traits that they share with many publishers, copyright aggregators, digital platforms (e.g., Twitter, TikTok, YouTube, Amazon, Netflix), and other companies with dominant market power. There is no transparency or oversight afforded to authors or the public. Authors and the public have little say in how AI models are trained, just as we have no influence over how content is moderated on digital platforms, how much authors receive in royalties from publishers, or how much publishers and copyright aggregators can charge users. None of these crucial systemic flaws will be fixed by granting publishers a share of AI companies’ revenue.

Copyright also is not the entire story. As we’ve seen recently, there are significant open questions about the right of publicity, and related concerns about the ability of AI to churn out digital fakes for all sorts of purposes, some innocent, others fraudulent, misleading, or exploitative. The US Copyright Office released a report on digital replicas on July 31 addressing the question of digital publicity rights, and on the same day the NO FAKES Act was officially introduced. Will the rights of authors and the public be adequately considered in that debate? Let’s remain vigilant as we wait for the first-ever AI-generated public figure in a leading role to hit theaters in September 2024.

Book Talk: Governable Spaces

Join us for a book talk with author NATHAN SCHNEIDER. Discover how we can transform digital spaces into more democratic and creative environments, inspired by governance legacies of the past. UCSD professor and author LILLY IRANI will lead our discussion.

Book Talk: Governable Spaces
Thursday, August 22 @ 10am PT / 1pm ET
Register now for the free, virtual event

“A prescient analysis of how we create democratic spaces for engagement in the age of polarization. Governable Spaces is new, impeccably researched, and imaginative.”
—Zizi Papacharissi, Professor of Communication and Political Science, University of Illinois at Chicago

When was the last time you participated in an election for a Facebook group or sat on a jury for a dispute in a subreddit? Platforms nudge users to tolerate nearly all-powerful admins, moderators, and “benevolent dictators for life.” In Governable Spaces, Nathan Schneider argues that the internet has been plagued by a phenomenon he calls “implicit feudalism”: a bias, both cultural and technical, for building communities as fiefdoms. The consequences of this arrangement matter far beyond online spaces themselves, as feudal defaults train us to give up on our communities’ democratic potential, inclining us to be more tolerant of autocratic tech CEOs and authoritarian tendencies among politicians. But online spaces could be sites of a creative, radical, and democratic renaissance. Using media archaeology, political theory, and participant observation, Schneider shows how the internet can learn from governance legacies of the past to become a more democratic medium, responsive and inventive unlike anything that has come before.


ABOUT OUR SPEAKERS

NATHAN SCHNEIDER is an assistant professor of media studies at the University of Colorado Boulder, where he leads the Media Economies Design Lab and the MA program in Media and Public Engagement. He is the author of four books, most recently Governable Spaces: Democratic Design for Online Life, published by University of California Press in 2024, and Everything for Everyone: The Radical Tradition that Is Shaping the Next Economy, published by Bold Type Books in 2018.

LILLY IRANI is an Associate Professor of Communication & Science Studies at University of California, San Diego, where she is Faculty Director of the UC San Diego Labor Center. She is author of Chasing Innovation: Making Entrepreneurial Citizens in Modern India (Princeton University Press, 2019) and Redacted (with Jesse Marx) (Taller California, 2021). She organizes with Tech Workers Coalition San Diego. She serves on the steering committee of the Transparent and Responsible Use of Surveillance Technology (TRUST) SD Coalition and the board of United Taxi Workers San Diego. She is co-founder of Turkopticon, a data-worker organizing project and activism tool.


Some Initial Thoughts on the US Copyright Office Report on AI and Digital Replicas

Posted August 1, 2024

On July 31, 2024, the U.S. Copyright Office published Part 1 of a report summarizing the Office’s ongoing artificial intelligence initiative. This first part addresses digital replicas: in other words, how AI is used to realistically but falsely portray people in digital media. In the report, the Office recommends new federal legislation that would create a new right to control “digital replicas,” which it defines as “a video, image, or audio recording that has been digitally created or manipulated to realistically but falsely depict an individual.”

We remain somewhat skeptical that such a right would do much to address the most troubling abuses such as deepfakes, revenge porn, and financial fraud. But, as the report points out, a growing number of varied state legislative efforts are already in the works, making a stronger case for unifying such rules at the federal level, with an opportunity to ensure adequate protections are in place for creators.  

The backdrop for the inquiry and report is a fast-developing space of state-led legislation, including legislation targeting deepfakes. Earlier this year, Tennessee became the first state to enact such a law, the ELVIS Act (TN HB 2091), while other states have mostly focused on addressing deepfakes in the context of sexually explicit content and political campaigns. New state laws continue to be introduced, making the space harder and harder to navigate for creators, AI companies, and consumers alike. A federal right of publicity in the context of AI has already been discussed in Congress, and just yesterday a new bill, the NO FAKES Act, was formally introduced.

Authors Alliance has watched the development of this US Copyright Office initiative closely. In August 2023, the Office issued a notice of inquiry, asking stakeholders to weigh in on a series of questions about copyright policy and generative AI. Our comment in response was devoted in large part to sharing the ways authors are using generative AI, explaining how fair use should apply to AI training, and urging the USCO to be cautious in recommending new legislation to Congress.

This report and recommendation from the Copyright Office could have a meaningful impact on authors and other creators, both those whose personalities and images are subject to use with AI systems and those who are actively using AI in their writing and research. Below are our preliminary thoughts on what the Copyright Office recommends, which it summarizes in the report as follows:

“We recommend that Congress establish a federal right that protects all individuals during their lifetimes from the knowing distribution of unauthorized digital replicas. The right should be licensable, subject to guardrails, but not assignable, with effective remedies including monetary damages and injunctive relief. Traditional rules of secondary liability should apply, but with an appropriately conditioned safe harbor for OSPs. The law should contain explicit First Amendment accommodations. Finally, in recognition of well-developed state rights of publicity, we recommend against full preemption of state laws.”

Initial Impressions

Overall, this seems like a well-researched and thoughtful report, given that the Office had to navigate a huge number of comments and opinions (over 10,000 comments were submitted). The report also incorporates more recent developments, including numerous new state laws and federal legislative proposals.

Things we like: 

  • In the context of an increasing number of state legislative efforts—some overbroad and more likely to harm creators than help them—we appreciate the Office’s recognition that a patchwork of laws can pose a real problem for users and creators who are trying to understand their legal obligations when using AI that references and implicates real people.
  • The report also recognizes that the concerns motivating digital replica laws—things like control of personality, privacy, fraud, and deception—are not at their core copyright concerns. “Copyright and digital replica rights serve different policy goals; they should not be conflated.” This matters a lot for what the scope of protection and other details of a digital replica right look like. Copy-pasting copyright’s life+70 term of protection, for example, makes little sense (and the Office recognizes this, for example, by rejecting the idea of posthumous digital replica rights).
  • The Office also suggests limiting the transferability of rights. We think this is a good idea to protect individuals from unanticipated downstream uses by companies that might persuade them to sign unfavorable long-term deals. “Unlike publicity rights, privacy rights, almost without exception, are waivable or licensable, but cannot be assigned outright. Accordingly, we recommend a ban on outright assignments, and the inclusion of appropriate guardrails for licensing, such as limitations in duration and protection for minors.”
  • The Office explicitly rejects the idea of a new digital replica right covering “artistic style.” We agree that protection of artistic style is a bad idea. Creators of all types have always used existing styles and methods as a baseline to build upon, and that has resulted in a rich body of new works. Allowing control over “style,” however well-defined, would impinge on these new creations. Strong federal protection over “style” would also contradict traditional limitations on rights, such as Section 102(b)’s limits on copyrightable subject matter and the idea/expression dichotomy, which are rooted in the Constitution.

Some concerns: 

  • The Office’s proposal would apply to the distribution of digital replicas, which are defined as “a video, image, or audio recording that has been digitally created or manipulated to realistically but falsely depict an individual.” This definition is quite broad and could potentially sweep in a large number of relatively common and mostly innocuous uses: for example, taking a photo of a person with your phone and applying a standard filter in your camera app could conceivably fall within the definition.
  • First Amendment rights to free expression are critical for protecting uses for news reporting, artistic uses, parody, and so on. Expressive uses of digital replicas—e.g., a documentary that uses AI to replicate a recent event involving recognizable people, or reproduction in a comedy show to poke fun at politicians—could be significantly hindered by an expansive digital replica right unless it has robust free expression protections. Of course, the First Amendment applies regardless of the passage of a new law, but it will be important for any proposed legislation to find ways to allow people to exercise those rights effectively. As the report explains, comments were split. Some, like the Motion Picture Association, proposed enumerated exceptions for expressive uses, while others, such as the Recording Industry Association of America, took the position that “categorical exclusions for certain speech-oriented uses are not constitutionally required and, in fact, risk overprotection of speech interests at the expense of important publicity interests.”

We tend to think that most laws should skew toward “overprotection of speech interests,” but the devil is in the details on how to do so. The report leaves much to be desired on how to do this effectively in the context of digital replicas. For its part, “[t]he Office stresses the importance of explicitly addressing First Amendment concerns. While acknowledging the benefits of predictability, we believe that in light of the unique and evolving nature of the threat to an individual’s identity and reputation, a balancing framework is preferable.” One thing to watch in future proposals is what such a balancing framework actually includes, and how easy or difficult it is to assert protection of First Amendment rights under this balancing framework. 

  • The Office rejects the idea that Section 230 should provide protection for online service providers if they host content that runs afoul of the proposed new digital replica rights. Instead, the Office suggests something like a modified version of the Copyright Act’s DMCA section 512 notice-and-takedown process. This isn’t entirely outlandish—the DMCA process mostly works, and if this new proposed digital replica right is to be effective in practice, asking large service providers that benefit from hosting content to be responsive in cases of allegedly infringing content may make sense. But the Office says that it doesn’t believe the existing DMCA process should be the model, and points to its own Section 512 report for how a revised version for digital replicas might work. If the Office’s 512 study is a guide to what a notice-and-takedown system could look like for digital replicas, there is reason to be concerned. While the study rejected some of the worst ideas for changing the existing system (e.g., a notice-and-staydown regime), it also repeatedly diminished the importance of ideas that would help protect creators with real First Amendment and fair use interests.
  • The motivations for the proposed digital replica right are quite varied. For some commenters, it’s an objection to the commercial exploitation of public figures’ images or voices. For others, the need is to protect against invasions of privacy. For yet others, it is to prevent consumer confusion and fraud. The Office acknowledges these different motivating factors in its report, and its recommendations attempt to balance the competing interests among them. But there are still real areas of discontinuity. The basic structure of the right the Office proposes is intellectual-property-like, yet it doesn’t make much sense to address some of the most pernicious fraudulent uses, such as deepfakes that manipulate public opinion, revenge porn, or scam phone calls, with a privately enforced property right oriented toward commercialization. Discovering and stopping those uses requires a very different approach, one that this particular proposal seems ill-equipped to deliver.

Just a few months ago, we were extremely skeptical that new federal legislation on digital replicas was a good idea. We’re still not entirely convinced, but the rash of new and proposed state laws does give us some pause. While the federal legislative process is fraught, it is also far from ideal for authors and creators to operate under a patchwork of varying state laws, especially ones that provide little protection for expressive uses. Overall, we hope certain aspects of this report can positively influence the debate about existing federal proposals in Congress, but we remain concerned about the lack of detail on protections for First Amendment rights.

In the meantime, you can check out our two new resource pages on Generative AI and Personality Rights to get a better understanding of the issues.

What happens when your publisher licenses your work for AI training? 

Posted July 30, 2024

Over the last year, we’ve seen a number of major deals inked between companies like OpenAI and news publishers. In July 2023, OpenAI entered into a two-year deal with The Associated Press for ChatGPT to ingest the publisher’s news stories. In December 2023, OpenAI announced its first non-US partnership, to train ChatGPT on German publisher Axel Springer’s content, including Business Insider. This was followed by a similar deal in March 2024 with Le Monde and Prisa Media, news publishers from France and Spain. These partnerships are likely sought in an effort to avoid litigation like the case OpenAI and Microsoft are currently defending against the New York Times.

As it turns out, such deals are not limited to OpenAI or newsrooms. Book publishers have also gotten into the mix. Numerous reports recently pointed out that, based on a market update from Taylor and Francis’s parent company, the British academic publishing giant has agreed to a $10 million AI training deal with Microsoft. Earlier this year, another major academic publisher, John Wiley and Sons, recorded $23 million in one-time revenue from a similar deal with an undisclosed tech company. Meta even considered buying Simon & Schuster or paying $10 per book to acquire its rights portfolio for AI training.

With few exceptions (a notable one being Cambridge University Press), publishers have not bothered to ask their authors whether they approve of these agreements. 

Does AI training require licensing to begin with? 

First, it’s worth appreciating that these deals are made against a backdrop of legal uncertainty. There are more than two dozen AI copyright lawsuits in the United States alone, most of them turning on one key question: whether AI developers must obtain permission to scrape content to train AI models, or whether fair use already allows this kind of training use even without permission.

The arguments for and against fair use for AI training data are well explained elsewhere. We think there are strong arguments, based on cases like Authors Guild v. Google, Authors Guild v. HathiTrust, and A.V. ex rel. Vanderhye v. iParadigms, that the courts will conclude that copying to train AI models is fair use. We also think there are good policy reasons to favor that outcome, if we want to encourage future AI development that isn’t dominated by only the biggest tech giants and that results in systems producing less biased outputs. But we won’t know for sure whether fair use covers any and all AI training until some of these cases are resolved.

Even if you are firmly convinced that fair use protects this kind of use (and AI model developers have strong incentives to hold this belief), there are lots of other reasons why AI developers might seek licenses in order to navigate around the issue. This includes very practical reasons, like securing access to content in formats that make training easier, or content accompanied by structured, enhanced metadata. Given the pending litigation, licenses are also a good way to avoid costly litigation (copyright lawsuits are expensive, even if you win). 

Although one can hardly blame these tech companies for making a smart business decision to avoid potential litigation, the deals could have a larger systemic impact on other players in the field, including academic researchers who would like to rely on fair use to train AI. As IP scholar James Gibson explains, when risk-averse users create new licensing markets in gray areas of copyright law, copyright holders’ exclusive rights expand and the public interest diminishes. The less we rely on fair use, the weaker it becomes.

Finally, it’s worth noting that fair use is only available in the US and a few other jurisdictions. In other jurisdictions, such as within the EU, using copyrighted materials for AI training (especially for commercial purposes) may require a license. 

To sum up: even though it may not be legally necessary to acquire copyright licenses for AI training, it seems that licensing deals between publishers and AI companies are highly likely to continue. 

So, can publishers just do this without asking authors? 

In a lot of cases, yes, publishers can license AI training rights without asking authors first. Many publishing contracts include a broad grant of rights, sometimes even a full transfer of copyright to the publisher, allowing it to exploit those rights and license them to third parties. For example, this typical academic publishing agreement provides that “The undersigned authors transfer all copyright ownership in and relating to the Work, in all forms and media, to the Proprietor in the event that the Work is published.” In such cases, where the publisher becomes the de facto copyright holder of a work, it is difficult for authors to stake a copyright claim when their works are sold to train AI.

Not all publishing contracts are so broad, however. For example, in the Model Publishing Contract for Digital Scholarship (which we have endorsed), the publisher’s sublicensing rights are limited and specifically defined, and profits resulting from any exploitation of a work must be shared with authors.  

There are lots of variations, and specific terms matter. Some publishing agreements are far narrower, transferring only limited publishing and subsidiary rights. In the past, these limitations have prompted litigation over whether the publisher or the author controls rights for new technological uses, with results highly dependent on the specific contract language.

There are also instances where publishers aren’t even sure what they own. For example, in the drawn-out copyright lawsuit brought by Cambridge University Press, Oxford University Press, and Sage against Georgia State University, the court dropped 16 of the 74 claimed instances of infringement because the publishers couldn’t produce documentation showing they actually owned rights in the works they were suing over. The same lack of clarity contributed to the litigation and proposed settlement in the Google Books case, which is probably our closest analogy in terms of mass digitization and reuse of books (for a good discussion of these issues, see page 479 of this law review article by Pamela Samuelson about the Google Books settlement).

This is further complicated by the fact that authors are sometimes entitled to reclaim their rights, such as through rights reversion clauses and copyright termination. Just because a publisher can produce documentation of a copyright assignment does not necessarily mean that the publisher is still the current copyright holder of a work.

We think it is certainly reasonable to be skeptical about the validity of blanket licensing schemes between large corporate rightsholders and AI companies, at least when they are done at very large scale. Even though in some instances publishers do hold rights to license AI training, it is doubtful that they actually hold, and have sufficiently documented, all of the purported rights in all of the works being licensed for AI training.

Can authors at least insist on a cut of the profit? 

It can feel pretty bad to discover that massively profitable publishers are raking in yet more money by selling licensing rights to your work, while you’re cut out of the picture. If they’re making money, why not the author? 

It’s worth pointing out that, at least for academic authors, this isn’t exactly a novel situation: most academic authors make very little in royalties on their books, and nothing at all on their articles, while commercial publishers like Elsevier, Wiley, and Springer Nature sell subscription access at a healthy profit. Unless you have retained sublicensing rights, or your publishing contract has a profit-sharing clause, you are unfortunately not likely to profit from the budding licensing market for AI training.

So what are authors to do? 

We could probably start most posts like this with a big red banner that says “READ YOUR PUBLISHING CONTRACT!! (and negotiate it too).” Be on the lookout for what you are authorizing your publisher to do with your rights, and for any language about reuse or the sublicensing of subsidiary rights.

You might also want to look for terms in your contract that speak to royalties and shares of licensing revenue. Some contracts have language that will allow you to demand an accounting of royalties; this may be an effective means of learning more about licensing deals associated with your work. 

You can also take a closer look at clauses that allow you to revert rights–many contracts will include a clause under which authors can regain rights when their book falls below a certain sales threshold or otherwise becomes “out of print.” Even without such clauses, it is reasonable for authors to negotiate a reversion of rights when their books are no longer generating revenue. Our resources on reversion will give you a more in-depth look at this issue.

Finally, you can voice your support for fair use in the context of licensing copyrighted works for AI training. We think fair use is especially important to preserve for non-commercial uses. For example, academic uses could be substantially stifled if paid-for licensing for permission to engage in AI research or related uses becomes the norm. And in those cases, the windfall publishers hope to pocket isn’t coming from some tech giant, but ultimately is at the expense of researchers, their libraries and universities, and the public funding that goes to support them.

Introducing Yuanxiao Xu, Authors Alliance’s New Staff Attorney

Posted July 23, 2024
Yuanxiao Xu, Authors Alliance Staff Attorney

By Dave Hansen

Today I’m pleased to introduce to the Authors Alliance community Yuanxiao Xu, who will be taking on the role of Staff Attorney.

Over the past few years, Authors Alliance has been more active than ever before advocating for the interests of authors before courts and administrative agencies. Our involvement has ranged from advocacy in high-profile cases such as Warhol Foundation v. Goldsmith and Hachette Books v. Internet Archive to less visible but important regulatory filings. For example, last December we filed a comment explaining the importance of federal agencies having the legal tools to promote open access to scholarly research outputs funded through grants.  On top of those advocacy efforts, we remain committed to helping authors navigate the law through our legal guides and other educational resources. The most recent significant addition is our practical guide titled Writing about Real People.

Our advocacy and educational work requires substantial legal, copyright, and publishing expertise. We’re therefore very fortunate to welcome Yuanxiao to the team to support our efforts. Yuanxiao joins us from previous legal roles with Creative Commons, the Dramatists Guild of America, and the University of Michigan Libraries. She has substantial experience advising academic authors and other creators on issues related to plagiarism, copyright infringement, fair use, licensing, and music copyright. She received her JD from the University of Michigan and is licensed to practice law in the State of New York.

As we grapple with difficult issues such as AI and authorship, ongoing publishing industry consolidation, and attacks on the integrity of institutions like libraries, I’m very excited to work with Yuanxiao to further develop and implement our legal strategy in a way that supports authors who care deeply about the public interest. 

“I’m thrilled to join Authors Alliance to collaborate with our community and sister organizations and together advocate for a better copyright ecosystem for authors and creatives. I hope to strive for a future where the interests of creators and the public do not take a back seat to the profit-maximizing agenda of big entertainment companies and tech giants,” says Yuanxiao.

We’re always pleased to hear from our members about ways we might be able to help support their efforts to reach readers and have their voices heard. If you’d like to get in touch with Yuanxiao directly, you can reach her at xu@authorsalliance.org.

Hachette v. Internet Archive Update: Oral Argument Before the Second Circuit Court of Appeals

This is a short update on the Hachette v. Internet Archive controlled digital lending lawsuit, which is currently pending on appeal before the Second Circuit Court of Appeals. The court held oral argument in the case today. [July 2 update: a recording of the hearing is available here.]

We’ve covered the background of this suit numerous times – it is in essence about whether it is permissible for libraries to digitize and lend books in their collections in a lend-like-print manner (e.g., only providing access to one user at a time based on the number of copies the library owns in print). 
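To make the lend-like-print idea concrete, here is a minimal sketch of the owned-to-loaned invariant that CDL enforces. This is our own illustration, not any library’s actual system; the class and names are hypothetical:

```python
# Illustrative model of controlled digital lending: digital checkouts of a
# title never exceed the number of print copies the library owns.
class CDLTitle:
    def __init__(self, print_copies_owned: int):
        self.print_copies_owned = print_copies_owned
        self.digital_checkouts = 0

    def check_out(self) -> bool:
        # Lend-like-print: refuse a loan once every owned copy is in use.
        if self.digital_checkouts < self.print_copies_owned:
            self.digital_checkouts += 1
            return True
        return False

    def check_in(self) -> None:
        self.digital_checkouts = max(0, self.digital_checkouts - 1)

book = CDLTitle(print_copies_owned=2)
assert book.check_out() and book.check_out()  # both owned copies on loan
assert not book.check_out()  # a third reader must wait for a check-in
```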

At this point, both parties have fully briefed the court on their legal arguments, bolstered on both sides by numerous amicus briefs explaining the broader implications of the case for authors, publishers, libraries, and readers (you can find the full docket, including these briefs online here). 

Our amicus brief, which was filed in support of the Internet Archive and controlled digital lending and received a nice shout-out from Internet Archive’s counsel in oral argument today, argues that many authors benefit from CDL because it enhances access to their work, aids in preservation, and supports their efforts to research and build upon existing works to create new ones.

What happened at oral argument

Compared to the District Court proceedings, this oral argument went much better for Internet Archive. Whether Internet Archive will prevail is another question, but it did seem to me the panel was genuinely trying to understand the basic rationale for CDL, whether there is a credible argument for distinguishing between CDL copies and licensed ebooks, and what kind of burden the plaintiff or defendant should bear in proving or disproving market harm. Overall, I felt the panel gave both sides a fair hearing and is interested in the broader implications of this case. 

A few highlights: 

  • It almost seemed that the panel assumed the district court got it wrong when it concluded that Internet Archive’s use was commercial in nature, rather than nonprofit (an important distinction in fair use cases). The district court adopted a novel approach, finding that IA’s connection with Better World Books and its solicitation of donations on webpages that employ CDL pushed it into the “commercial” category. The panel on appeal seemed skeptical, commenting, for example, on how meager the $5,000 that Internet Archive actually made on the arrangement was. Looking beyond controlled digital lending, this is an important issue for all nonprofit users, and I’m hopeful that the Second Circuit sees the importance of correcting the lower court on this point.
  • At least some members of the panel seemed to appreciate the incongruity of a first sale doctrine that applies only to physical books but somehow not to digital lending. One particularly good question on this, directed to the publishers’ counsel, was about whether in the absence of section 109, library physical lending would be permissible as a fair use or otherwise. This was helpful, I think, because it stripped away the focus on the text of 109 and refocused the discussion on the underlying principles of exhaustion–i.e., what rights do libraries and other owners of copies get when they buy copies. 

There were also a few concerning exchanges: 

  • At one point, there was a line of questioning about whether fair use could override or provide for a broader scope of uses than what Congress provided to libraries in Section 108 (the part of the Copyright Act that has very specific exceptions for things like libraries making preservation copies). Even the publishers’ lawyer wasn’t willing to argue that libraries’ rights are fully covered by Section 108 and that fair use doesn’t apply–likely because that issue was addressed directly in Authors Guild v. HathiTrust, and she knew it–but it was a concerning exchange nonetheless.

I also came away with several questions:

  • Each member of the panel asked probing questions of both sides about the importance of market harm and, more specifically, what kind of proof is required to demonstrate market harm to the publishers. It was hard to tell which direction any of them were leaning on this–while there was some acknowledgment that there wasn’t really any hard evidence about the market effect, members of the panel also made several remarks suggesting that the logic of CDL copies replacing ebook sales is common sense.
  • The panel asked a number of questions about the role of fair use in responding to new technology. Should fair use be employed to help smooth over bumps caused by new technology, or should courts be more conservative in applying it where Congress has chosen not to act? Despite several questions on this issue, I came away with no clear read on what the panel thought the correct framework might be in a case like this.

It’s folly to predict, but I came away optimistic that the panel will correct many of the errors from the District Court below. 

Introducing the Authors Alliance’s First Zine: Can Authors Address AI Bias?

Posted May 31, 2024

This guest post was jointly authored by Mariah Johnson and Marcus Liou, student attorneys in Georgetown’s Intellectual Property and Information Policy (iPIP) Clinic.

Generative AI (GenAI) systems perpetuate biases, and authors can have a potent role in mitigating such biases.

But GenAI is generating controversy among authors. Can authors do anything to ensure that these systems promote progress rather than prevent it? Authors Alliance believes the answer is yes, and we worked with them to launch a new zine, Putting the AI in Fair Use: Authors’ Abilities to Promote Progress, that demonstrates how authors can share their works broadly to shape better AI systems. Drawing together Authors Alliance’s past blog posts and advocacy discussing GenAI, copyright law, and authors, this zine emphasizes how authors can help prevent AI bias and protect “the widest possible access to information of all kinds.” 

As former Register of Copyrights Barbara Ringer articulated, protecting that access requires striking a balance with “induc[ing] authors and artists to create and disseminate original works, and to reward them for their contributions to society.” The fair use doctrine is often invoked to do that work. Fair use is a multi-factor standard that allows limited use of copyrighted material, even without authors’ credit, consent, or compensation. It asks courts to examine:

  1. the purpose and character of the use,
  2. the nature of the copyrighted work,
  3. the amount and substantiality of the portion used, and
  4. the effect of the use on the potential market for or value of the work.

While courts have not decided whether using copyrighted works as training data for GenAI is fair use, past fair use decisions involving algorithms, such as Perfect 10, iParadigms, Google Books, and HathiTrust, favored the consentless use of other people’s copyrighted works to create novel computational systems. In those cases, judges repeatedly found that algorithmic technologies aligned with the Constitutional justification for copyright law: promoting progress.

But some GenAI outputs prevent progress by projecting biases. GenAI outputs are biased in part because they use biased, low friction data (BLFD) as training data, like content scraped from the public internet. Examples of BLFD include Creative Commons (CC) licensed works, like Wikipedia, and works in the public domain. While Wikipedia is used as training data in most AI systems, its articles are overwhelmingly written by men–and that bias is reflected in shorter and fewer articles about women. And because the public domain cuts off in the mid-1920s, those works often reflect the harmful gender and racial biases of that time. However, if authors allow their copyrighted works to be used as GenAI training data, those authors can help mitigate some of the biases embedded in BLFD. 

Current biases in GenAI are disturbing. As we discuss in our zine, word2vec is a very popular toolkit used to help machine learning (ML) models recognize relationships between words; it has been shown to associate women with homemaking and Black men with the word “assaulted.” Similarly, OpenAI’s GenAI chatbot ChatGPT, when asked to generate letters of recommendation, used “expert,” “reputable,” and “authentic” to describe men and “beauty,” “stunning,” and “emotional” for women, discounting women’s competency and reinforcing harmful stereotypes about working women. An intersectional perspective can help authors see the compounding impact of these harms. Coined by Professor Kimberlé Crenshaw in the late 1980s as a legal framework to describe why discrimination law did not adequately address harms facing Black women, intersectionality is now used as a wider lens that brings critical theory, like Critical Race Theory, feminism, and working-class studies, together as “a lens . . . for seeing the way in which various forms of inequality often operate together and exacerbate each other.” Contemporary authors’ copyrighted works often reflect the richness of intersectional perspectives, and using those works as training data can help mitigate GenAI bias against marginalized people by introducing diverse narratives and inclusive language. Not always–even recent works reflect bias–but more often than might be possible currently. A sketch of the kind of embedding probe researchers use to surface this bias appears below.
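As an illustration, here is a minimal, hedged sketch of the classic analogy probe used to surface bias in word2vec-style embeddings. It assumes the gensim library and a local copy of a pretrained vector file; the GoogleNews filename below is a common example, not something distributed with this post or used by the zine itself:

```python
# Probe a pretrained word2vec model for stereotyped analogy completions.
# Assumes `pip install gensim` plus a local copy of the (large) pretrained
# GoogleNews vectors. Illustrative only.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man is to programmer as woman is to ___": biased embeddings tend to
# complete analogies like this with stereotyped terms (e.g., "homemaker").
for word, score in vectors.most_similar(
    positive=["woman", "programmer"], negative=["man"], topn=5
):
    print(f"{word}\t{score:.3f}")
```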

Which brings us back to fair use. Some corporations may rely on the doctrine to include more works by or about marginalized people in an attempt to mitigate GenAI bias. Professor Mark Lemley and Bryan Casey have suggested “[t]he solution [to facial recognition bias] is to build bigger databases overall or to ‘oversample’ members of smaller groups” because “simply restricting access to more data is not a viable solution.” Similarly, Professor Matthew Sag notes that “[r]estricting the training data for LLMs to public domain and open license material would tend to encode the perspectives, interests, and biases of a distinctly unrepresentative set of authors.” However, many marginalized people may wish to be excluded from these databases rather than have their works or stories become grist for the mill. As Dr. Anna Lauren Hoffman warns, “[I]nclusion reinforces the structural sources of violence it supposedly addresses.”

Legally, if not ethically, fair use may moot the point. The doctrine is flexible, fact-dependent, and fraught. It’s also fairly predictable, which is why legal precedent and empirical work have led many legal scholars to believe that using copyrighted works as training data to debias AI will be fair use–even if that has some public harms. Back in 2017, Professor Ben Sobel concluded that “[i]f engineers made unauthorized use of copyrighted data for the sole purpose of debiasing an expressive program, . . . fair use would excuse it.” Professor Amanda Levendowski has explained why and how “[f]air use can, quite literally, promote creation of fairer AI systems.” More recently, Dr. Mehtab Khan and Dr. Alex Hanna observed that “[a]ccessing copyright work may also be necessary for the purpose of auditing, testing, and mitigating bias in datasets . . . [and] it may be useful to rely on the flexibility of fair use, and support access for researchers and auditors.” 

No matter how you feel about it, fair use is not the end of the story. It is ill-equipped to solve the troubling growth of AI-powered deepfakes. After being targeted by sexualized deepfakes, Rep. Ocasio-Cortez described “[d]eepfakes [as] absolutely a way of digitizing violent humiliation against other people.” Fair use will not solve the intersectional harms of AI-powered face surveillance either. Dr. Joy Buolamwini and Dr. Timnit Gebru evaluated leading gender classifiers used to train face surveillance technologies and discovered that they classified males more accurately than females and lighter-skinned people more accurately than darker-skinned people. The researchers also discovered that the “classifiers performed worst on darker female subjects.” While legal scholars like Professors Shyamkrishna Balganesh, Margaret Chon, and Cathay Smith argue that copyright law can protect privacy interests, like the ones threatened by deepfakes or face surveillance, federal privacy laws are a more permanent, comprehensive way to address these problems.
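As an illustration of how such an audit works, here is a minimal, hypothetical sketch of disaggregated accuracy, the core measurement in studies like this one; the records and group labels below are invented for the example, not drawn from the researchers’ data.

    from collections import defaultdict

    # Hypothetical audit records: (predicted label, true label, skin-type group).
    # Real audits use large, balanced benchmark datasets.
    records = [
        ("male", "male", "lighter"),
        ("female", "female", "lighter"),
        ("male", "female", "darker"),
        ("female", "female", "darker"),
        ("male", "male", "darker"),
        ("female", "male", "darker"),
    ]

    correct = defaultdict(int)
    total = defaultdict(int)
    for predicted, actual, skin in records:
        group = (actual, skin)  # disaggregate by gender and skin type
        total[group] += 1
        correct[group] += predicted == actual

    # Reporting accuracy per subgroup, rather than one overall number,
    # is what reveals disparities like those described above.
    for group in sorted(total):
        print(group, correct[group] / total[group])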

But who has time to wait on courts and Congress? Right now, authors can take proactive steps to ensure that their works promote progress rather than prevent it. Check out Authors Alliance’s guides to Contract Negotiations, Open Access, Rights Reversion, and Termination of Transfer to learn how, or explore our new zine, Putting the AI in Fair Use: Authors’ Abilities to Promote Progress.

You can find a PDF of the zine here, as well as printer-ready copies here and here.

Book Talk: Attack from Within by Barbara McQuade

This event is canceled due to a scheduling issue. We will repost when it is rescheduled.

Join us for a VIRTUAL book talk with legal scholar BARBARA McQUADE on her New York Times bestseller, ATTACK FROM WITHIN, about disinformation’s impact on democracy. NYU professor and author CHARLTON McILWAIN will facilitate our discussion.

REGISTER NOW

“A comprehensive guide to the dynamics of disinformation and a necessary call to the ethical commitment to truth that all democracies require.”

Timothy Snyder, author of the New York Times bestseller On Tyranny

American society is more polarized than ever before. We are strategically being pushed apart by disinformation—the deliberate spreading of lies disguised as truth—and it comes at us from all sides: opportunists on the far right, Russian disinformation campaigns, and misinformed social media influencers, among others. It’s endangering our democracy and wreaking havoc in our electoral system, schools, hospitals, workplaces, and in our Capitol. Advances in technology, including rapid developments in artificial intelligence, threaten to make the problems even worse by amplifying false claims and manufacturing credibility.

In Attack from Within, legal scholar and analyst Barbara McQuade shows us how to identify the ways disinformation is seeping into all facets of our society and how we can fight against it. The book includes:

  • The authoritarian playbook: a brief history of disinformation, from Mussolini and Hitler to Bolsonaro and Trump, chronicling the ways in which authoritarians have used disinformation to seize and retain power.
  • Disinformation tactics—like demonizing the other, seducing with nostalgia, silencing critics, muzzling the media, condemning the courts, and stoking violence—and why they work.
  • An explanation of why America is particularly vulnerable to disinformation and how it exploits our First Amendment freedoms, sparks threats and violence, and destabilizes social structures.
  • Real, accessible solutions for countering disinformation and maintaining the rule of law such as making domestic terrorism a federal crime, increasing media literacy in schools, criminalizing doxxing, and much more.

Disinformation is designed to evoke a strong emotional response that pushes us toward more extreme views, leaving us unable to find common ground with others. The false claims that led to the breathtaking attack on our Capitol in 2021 may have been only a dress rehearsal. Attack from Within shows us how to prevent it from happening again, thus preserving our country’s hard-won democracy.

ABOUT OUR SPEAKERS

BARBARA McQUADE is a professor at the University of Michigan Law School, where she teaches criminal law and national security law. She is also a legal analyst for NBC News and MSNBC. From 2010 to 2017, McQuade served as the U.S. Attorney for the Eastern District of Michigan. She was appointed by President Barack Obama and was the first woman to serve in that position. McQuade also served as vice chair of the Attorney General’s Advisory Committee and co-chaired its Terrorism and National Security Subcommittee.

Before her appointment as U.S. Attorney, McQuade served as an Assistant U.S. Attorney in Detroit for 12 years, including service as Deputy Chief of the National Security Unit. In that role, she prosecuted cases involving terrorism financing, foreign agents, threats, and export violations. McQuade serves on a number of non-profit boards, and served on the Biden-Harris Transition Team in 2020-2021. She has been recognized by The Detroit News with the Michiganian of the Year Award, the Detroit Free Press with the Neal Shine Award for Exemplary Regional Leadership, Crain’s Detroit Business as a Newsmaker of the Year and one of Detroit’s Most Influential Women, and the Detroit Branch NAACP and Arab American Civil Rights League with their Tribute to Justice Award. McQuade is a graduate of the University of Michigan and its law school. She and her husband live in Ann Arbor, Michigan, and have four children.

CHARLTON McILWAIN
Author of the recent book Black Software: The Internet & Racial Justice, From the Afronet to Black Lives Matter, Dr. Charlton McIlwain is Vice Provost for Faculty Development, Pathways & Public Interest Technology at New York University, where he is also Professor of Media, Culture, and Communication at NYU Steinhardt. He works at the intersections of computing technology, race, inequality, and racial justice activism. He has served as an expert witness in landmark U.S. federal court cases on reverse redlining/racial targeting in mortgage lending and recently testified before the U.S. House Committee on Financial Services about the impacts of automation and artificial intelligence on the financial services sector. He is the author of the recent PolicyLink report Algorithmic Discrimination: A Framework and Approach to Auditing & Measuring the Impact of Race-Targeted Digital Advertising. He writes regularly for The Guardian, Slate’s Future Tense, MIT Technology Review, and other outlets about the intersection of race and technology. McIlwain is the founder of the Center for Critical Race & Digital Studies and is Board President at Data & Society Research Institute. He leads NYU’s Alliance for Public Interest Technology, is NYU’s Designee to the Public Interest Technology University Network, and serves on the executive committee as co-chair of the ethics panel for the International Panel on the Information Environment.

Book Talk: Attack from Within by Barbara McQuade
Thursday, June 6 @ 10am PT / 1pm ET
Register now for the virtual event!