Copyright Office Hosts Listening Session on Copyright in AI-Generated Audiovisual Works

Posted June 26, 2023

On May 17, the Copyright Office held a listening session on the topic of copyright issues in AI-generated audiovisual works. You may remember that we’ve covered the other listening sessions convened by the Office on visual arts, musical works, and textual works (in which we also participated). In today’s post, we’ll summarize and discuss the audiovisual works listening session and offer some insights on the conversation.

Participants in the audiovisual works listening session included AI developers in the audiovisual space such as Roblox and Hidden Door; trade groups and professional organizations including the Motion Picture Association, Writers Guild of America West, and National Association of Broadcasters; and individual filmmakers and game developers. 

Generative AI Tools in Films and Video Games

As was the case in the music listening session, multiple participants indicated that generative AI is already being used in film production. The representative from the Motion Picture Association (MPA) explained that “innovative studios” are already using generative AI in both the production and post-production processes. As with other creative industries, generative AI tools can support filmmakers by increasing the efficiency of various tasks that are part of the filmmaking process. For example, routine tasks like color correction and blurring or sharpening particular frames are made much simpler and quicker through the use of AI tools. Other participants discussed the ways in which generative AI can help with ideation, overcoming “creativity blocks,” eliminating some of the drudgery of filmmaking, enhancing visual effects, and lowering barriers to entry for would-be filmmakers without the resources of more established players. These examples are analogous to the various ways that generative AI can support authors, which Authors Alliance and others have discussed, like brainstorming, developing characters, and generating ideas for new works.

The representative from the MPA also emphasized the potential for AI tools to “enhance the viewer experience” by making visual effects more dramatic and, in the longer term, possibly to enable much deeper experiences like having conversations with fictional characters from films. The representative from Hidden Door—a company that builds “online social role-playing games for groups of people to come together and tell stories together”—similarly spoke of new ways for audiences to engage with creators, such as by creating a sort of fan fiction world with the use of generative AI tools, with contributions from the author, the user, and the generative AI system. This, in turn, can create “new economic opportunities” for authors, who can monetize their content in new and innovative ways.

Video games are similarly already incorporating generative AI. In fact, generative AI’s antecedents, such as “procedural content generation” and “rule-based systems,” have been used in video games since their inception.

Centering Human Creators

Throughout the listening session, participants emphasized the role of human filmmakers and game developers in creating works involving AI-generated elements, stating or implying that creators who use generative AI should own copyrights in the works they produce using these tools. The representative from Roblox, an online gaming platform that allows users to program games and play other users’ games, emphasized that AI-generated content is effective and engaging because of the human creativity inherent in “select[ing] the best output” and making other creative decisions. A representative from Inworld AI, a developer platform for AI characters, echoed this idea, explaining that these tools do not exist in isolation, but are productive only when a human uses them and makes creative choices about their use, akin to the use of a simpler tool like a camera or paintbrush. 

A concern expressed by several participants—including the Writers Guild of America West, the National Association of Broadcasters, and the Directors Guild—is that works created using generative AI tools could devalue works created by humans without such tools. The idea of markets being “oversaturated” with competing audiovisual works raises the possibility that individual creators could be crowded out. While this is far from certain, it reflects growing concern about threats to creators’ economic livelihoods when AI-generated works compete alongside their own.

Training Data and Fair Use

On the question of whether using copyrighted materials to train generative AI systems is a fair use, there was disagreement among participants. The representative from the Presentation Guild likened the use of copyrighted training data without permission to “entire works . . . being stolen outright.” They further argued that fair use does not allow this type of use due to the commercial nature of the generative AI companies, the creative nature of the works used to train the systems (though it is worth noting that factual works, and others entitled to only “thin” copyright protection, are also used to train these tools), and because by “wrest[ing] from the creator ownership and control of their own work[,]” the market value for those works is harmed. This is not, in my view, an accurate account of how the market effects factor operates in fair use analysis, because unauthorized uses that are also fair always wrest some control from the author—this is part of copyright’s balance between an author’s rights and permitting onward fair uses.

The representative from the Writers Guild of America (“WGA”) West—which is currently on strike over, among other things, the role of generative AI in television writing—had some interesting things to say about the use of copyrighted works as training data for generative AI systems. In contract negotiations, WGA had proposed a contract that “would . . . prohibit companies from using material written under the Guild’s agreement to train AI programs for the purpose of creating other derivative and potentially infringing works.” The companies refused to acquiesce, arguing that “the technology is new and they’re not inclined to limit their ability to use this new technology in the future.” The companies’ position is somewhat similar to the one expressed by us and many others—that while generative AI remains in its nascency, it is sensible to allow it to continue to develop before instituting new laws and regulations. But the dispute illustrates the tension between this approach and the concerns of creators who feel that their livelihoods may be threatened by generative AI’s potential to create works with less input from human authors.

Other participants, such as the representative from Storyblock, a stock video licensing company, emphasized their belief that creators of the works used to train generative AI tools should be required to consent, and should receive compensation and credit for the use of their works to train these models. The so-called “three C’s” idea has gained traction in recent months. In my view, the use of training data is a fair use, making these requirements unnecessary as a matter of copyright law, but the “three C’s” framework represents an increasingly prevalent view among rightsholders and licensing groups (including the WGA, whose ongoing strike it motivates in some respects) when it comes to making the use of generative AI tools more ethical.

Adequacy of Registration Guidance

Several participants expressed concerns about the Copyright Office’s recent registration guidance regarding works containing AI-generated materials, and specifically about how to draw the line between human-authored and AI-generated material when generative AI tools are used as part of a human’s creative process. The MPA representative explained that the guidance does not account for the subtle ways in which generative AI tools are used as part of the filmmaking process, where they often function as components of various editing and production tools. The MPA representative argued that using these kinds of tools shouldn’t make parts of films unprotected by copyright or trigger a need to disclose minor uses of such tools in copyright registration applications. The representative from Roblox echoed these concerns, noting that when a video game involves thousands of lines of code, it would be difficult for a developer to disclaim copyright in certain lines of code that were AI-generated.

A game developer and professor expressed her view that in the realm of video games, we are approaching a reality where generative AI is “so integrated into a lot of consumer-grade tools that people are going to find it impossible to be able to disclose AI usage.” If creators or users do not even realize they are using generative AI when they use various creative digital tools, the Office’s requirement that registrants disclose their use of generative AI in copyright registration applications will be difficult, if not impossible, to follow.