
Happy Fair Use Week! I’m not typically a last word type of person, but I agreed to take on the fourth factor (for reasons that hopefully will be clearer below), so here we are. I hope you all have had a wonderful and informative Fair Use Week. Please check out our series of posts marking the 50th anniversary of the Copyright Act of 1976, as well as a variety of other Fair Use Week blogs and events around the country.
Markets Markets Markets (Value!)
I’m here to talk about Factor Four of Section 107, which reads: “In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include… (4) the effect of the use upon the potential market for or value of the copyrighted work.”
Since I’m currently the AI Legal Fellow at the Authors Alliance, I’m going to talk about factor four within the context of ongoing AI litigation, and some concerns we have with where factor four jurisprudence could go, if some pernicious ideas that have not yet gained traction (thankfully!) begin to take hold.
Part of the reason we’re so concerned about fair use being shifted by AI litigation is that, in most ways, we’re deeply appreciative of fair use as it presently exists and has been shaped over the years. Below are two values and insights into factor four that I think should be particularly celebrated and held up as examples long into the future. I intend to match these values with two recent developments in AI/copyright law that aim to redraw the contours of the law and might threaten to undermine those values.
Markets aren’t everything. A work’s market can be destroyed and the use can still be fair: market garroting does not defeat fair use!
Fair use’s fourth factor recognition of markets is not meant to erect castle walls around the markets for all existing works. Instead, it forms part of a flexible standard that was built to withstand (and facilitate) rapid change, which it has done quite effectively for the past 50 years. Copyright’s constitutional purpose is to promote progress and creative expression, not to shield authors from unfavorable comment, market backlash, or changing audience preferences. The Supreme Court has been clear that even total market evisceration does not negate fair use:
“We do not, of course, suggest that a parody may not harm the market at all, but when a lethal parody, like a scathing theater review, kills demand for the original, it does not produce a harm cognizable under the Copyright Act. Because ‘parody may quite legitimately aim at garroting the original, destroying it commercially as well as artistically,’ the role of the courts is to distinguish between ‘biting criticism that merely suppresses demand and copyright infringement, which usurps it.’” (Campbell v. Acuff-Rose Music, 510 U.S. 569, 591–92 (1994)) (internal citations omitted).
The above quote is in the context of parody, but the Supreme Court is also talking about our collective values (parody and criticism historically being core values in this country) and how those values should not yield to a short-term impulse to preserve existing markets.
When we think about AI and its potential effect on markets, we must continue to remember that adverse market effects cannot be a death sentence for fair use. Protecting markets at all costs is a tradeoff we haven’t historically made; we should be very careful not to drift toward a view that the value or market for a given work is somehow sacrosanct. Competition, even automated competition, is not the same thing as cognizable market substitution.
The fourth factor has a limited reach; courts must be careful to protect only markets that are “traditional, reasonable, or likely to be developed.” (Am. Geophysical Union v. Texaco, Inc., 60 F.3d 913, 930 (2d Cir. 1994))
Courts have been and must continue to be vigilant to avoid fabricating markets where none properly exist. Think about the wide spectrum of uses that you might engage in, where there is likely some subset of rightsholders who would be ready and willing to create a market for that use. Micropayments for every quote you include in your academic papers? A fraction of a penny for every thumbnail image you look at in a search result? An annual licensing fee that permits you to take photographs of artwork in public or practice sketching in an art museum? We live in a world where fair use acts as a backstop for so much that we take for granted – a world of activities that rightsholders dare not touch because, on some level, we all intuitively know that those markets would be neither traditional nor reasonable.
Indeed, courts routinely reject attempts to recognize these types of phantom licensing markets during litigation. For example, in Authors Guild v. Google, Inc., the Second Circuit firmly rejected attempts to extract tolls for digital book searches, recognizing that Google Books’ “snippet view” was a transformative discovery tool that augmented public knowledge without providing a substitute for the books themselves.
We see such sensible limitations in the art world as well: in Blanch v. Koons, the Second Circuit dismissed a photographer’s claim of a lost derivative market for an image incorporated into a collage, noting that Andrea Blanch had never actually licensed her work for derivative artistic uses (“Blanch acknowledges that she has not published or licensed ‘Silk Sandals’ subsequent to its appearance in Allure, that she has never licensed any of her photographs for use in works of graphic or other visual art, that Koons’s use of her photograph did not cause any harm to her career or upset any plans she had for ‘Silk Sandals’ or any other photograph, and that the value of ‘Silk Sandals’ did not decrease as the result of Koons’s alleged infringement.”).
Enter the theory of “Market Dilution”
Here’s where AI and copyright begin to complicate the values we have historically held near and dear. Remember when we said that legitimate fair uses might well threaten or alter the markets for works, and that this alone isn’t reason to find that factor four weighs against fair use? Well, a new species of thinking on this topic has emerged: a concept called “market dilution.” We haven’t seen it gain much traction, and we think it’s the wrong way to think about the fourth factor, but it is an argument being deployed by those who are more concerned with preserving markets and want to stretch the reach of the fourth factor.
We’ve seen the market dilution theory in Kadrey v. Meta, where Judge Chhabria wrote: “As for the potentially winning argument—that Meta has copied their works to create a product that will likely flood the market with similar works, causing market dilution—the plaintiffs barely give this issue lip service, and they present no evidence about how the current or expected outputs from Meta’s models would dilute the market for their own works.” (Kadrey v. Meta opinion at 4). The strong implication is that, with a more developed evidentiary showing, such a theory might receive more serious consideration.
“This case…involves a technology that can generate literally millions of secondary works, with a miniscule fraction of the time and creativity used to create the original works it was trained on. No other use—whether it’s the creation of a single secondary work or the creation of other digital tools—has anything near the potential to flood the market with competing works the way that LLM training does. And so the concept of market dilution becomes highly relevant.” (Kadrey v. Meta opinion at 32)
Judge Chhabria is not alone in amplifying this market dilution theory. The US Copyright Office also appears to think that it is a form of harm that should be cognizable under the fourth factor: “While we acknowledge this is uncharted territory, in the Office’s view, the fourth factor should not be read so narrowly. The statute on its face encompasses any “effect” upon the potential market. The speed and scale at which AI systems generate content pose a serious risk of diluting markets for works of the same kind as in their training data. That means more competition for sales of an author’s works and more difficulty for audiences in finding them. If thousands of AI-generated romance novels are put on the market, fewer of the human-authored romance novels that the AI was trained on are likely to be sold.” (USCO Copyright and Artificial Intelligence Report, Part 3, page 65) (internal citations omitted)
If market dilution theory gains traction, we risk swallowing up lots of uses that were previously fair. For now, let’s put a pin in this market dilution theory and discuss a second troubling dimension of current AI/copyright/fourth factor litigation.
We live in an increasingly sophisticated panopticon
As I noted earlier, there have historically been lots of places where we experiment, create, and use the works of others in a variety of ways where rightsholders dare not tread, because there is an almost intuitive sense that cognizable market boundaries exist for a reason and there are many uses that should never be subject to licenses or markets.
How we doodle, how we ideate, how we process information in order to later create works (yes, sometimes with commercial intentions, sometimes in ways that impact legitimate markets) has quite often been an untouched territory.
Again, think about the sketches of in-copyright artworks you may have created as a child, or the fan fiction you may have enjoyed, or the times you used a well-known song in a home movie for your friends and family. Many of those fair uses existed in personal space, fairly insulated from market analysis.
Today, with great advances in technology come new methods of surveillance and very tempting opportunities to close down or warp those individual, creative, and personal spaces.
On January 5, 2026, in a case involving questions of fair use in AI, Judge Sidney Stein, in a thinly reasoned order, upheld Magistrate Judge Wang’s order to compel OpenAI to produce 20 million ChatGPT chat logs, in part because “output logs that do not contain reproductions of News Plaintiffs’ works may still be relevant to OpenAI’s fair use defense.”
20 million chat logs! This is controversial for a variety of reasons, but a salient one for fourth factor analysis and fair use purposes is that AI chat logs will typically not reveal the information necessary to determine actual market harm.
A chat log can show you a prompt and an output, but it will rarely tell you:
- whether the user copied, distributed, published, sold, or otherwise monetized anything;
- whether the use was transformative in purpose (e.g., critique, comparison, research, parody);
- whether an output was a quotation for commentary or was requested for a classroom discussion;
- whether the user substituted the output for a subscription or license, or already had lawful access to the source material.
In short, a chat transcript can show what a user typed and what the system returned, but it almost never shows what happened next. Those downstream facts are the kinds of facts that would help to determine actual market harm, yet they generally do not exist in a chat log.
What courts may receive instead is a record of private cognitive activity. Drafting, curiosity, and experimentation stripped of downstream context. If those records are then used to construct speculative market harm, factor four risks untethering itself from its doctrinal core: actual or likely substitution in traditional and reasonable markets.
Market dilution and mass surveillance could be a toxic mix
I’ve highlighted market dilution and large-scale inquiry into individual uses of AI to spotlight two areas where we may be on the cusp of significant change in fourth-factor analysis.
What worries me is a version of the future where intrusion into personal creative spaces becomes normalized. Where we protect works in ways that attempt to protect humans from machines but do so by sacrificing the ways humans have traditionally consumed, studied, remixed, and learned from works.
Some might argue that we can simply move these activities offline or out of the emerging AI space. But that misunderstands the concern. As I stated at the beginning of this post, we grant rights to rightsholders subject to a host of rights we reserve to ourselves. We reserve and should continue to reserve the freedom to experiment, to test technologies, to rework and analyze the art of others (sometimes commercially, often not) in ways that make culture dynamic and knowledge cumulative.
It would be a diminished world if we quietly surrendered those rights, through doctrinal drift and expansive discovery in intrusive litigation, to an ever-watching apparatus that finds new ways to meter, monitor, and monetize the ordinary ways we learn, play, and create.
Let’s not do that. Happy Fair Use Week!
