
Authors are Humans and Creativity is a Function of Humanness: What the Mannion Court Can Teach Us About Generative AI’s Relationship to Authorship

By Molly Stech*

* The blog post below and the law review article it links to are the individual thoughts and views of the author and should not be attributed to any entity with which she is currently or has been affiliated.

Despite a recent decision from the Beijing Internet Court, there is growing consensus that artificial intelligence (AI) can be used as a tool, but that a human author must have ideated a copyrighted work and that the resulting creative work must be the outcome of that person’s intellect and personality. Even with some international convergence, however, it is worth reviewing the backdrop of this issue and uncovering some of the more vexing practicalities regarding the level of creative autonomy a person must exercise to receive a copyright registration. The threshold for creativity in copyright law is low across jurisdictions, but how low is it? Are fifty binary choices enough to confer authorship? Are 624 generative AI prompts enough? As in other areas of copyright, such as the idea-expression dichotomy or the unpredictability of the U.S. fair use doctrine, there are almost no bright lines to be drawn in the context of AI.

In a forthcoming paper, I review the law and the jurisprudential landscape on AI “authorship,” as well as academic commentary on the topic, and conclude that the bedrock principles of copyright law would not be served by permitting an acknowledgment of an AI system or algorithm as an author. Although AI is a groundbreaking, even revolutionary, technology, the ways in which it challenges the traditional contours of copyright law are not entirely new. We know from the New York Bridgeman decision in 1999 that skill and labor – and even creativity – in the production process are meaningless unless the output (or in copyright parlance, the “work”) exhibits creativity. The Court of Justice of the European Union (CJEU) and United States courts point to an “author’s own intellectual creation” and a “modicum of creativity,” respectively, in ascertaining whether something merits copyright protection. As the European Union implements the world’s first AI Act, and as the U.S. Copyright Office reviews applications for AI-assisted works, underscoring the importance of human authorship is paramount to ensuring that laws and courts are well-equipped with the rationale underlying the distinction between human creativity and machine-generated outputs.

Human authorship has always been, and continues to be, a foundational requirement for copyright protection to subsist in a work. AI challenges this prerequisite but does not overcome it. The output of generative AI is not discernibly different from the output of a human author and therefore benefits from a false sheen of originality. While some argue that prompt engineering fulfills the requirements of originality––as noted above, the threshold for originality is quite low across jurisdictions––prompting still lacks the requisite link between human creativity and the resulting work to receive copyright protection. International copyright treaties and domestic copyright law must be interpreted as aiming to provide copyright’s exclusive rights to works that reflect human originality and that reward human beings. A 2006 New York district court case outlined three means by which photographs can demonstrate originality: rendition, timing, and creation of the subject. My paper proposes that each of these mechanisms, understood through the prism of generative AI, remains applicable for analyzing whether human originality subsists in a given work. Originality exists along a sliding scale, resulting in a mix of thin, medium, and thick copyrights. While this may not always be the case as the technology evolves, the current relationship between generative AI and its user results in outputs that are generally too detached from the user’s creativity to satisfy the requirements of copyrightable authorship. Generative AI remixes the content on which it has been trained according to its algorithm and prompts. Copyright protection is a privilege, and it can be earned only by humans by way of their own intellectual creations.


Panel 5A: Generative AI & Human Authorship (C-IP2 2023 Annual Fall Conference)

The following post comes from Jake L. Bryant, a student in the Intellectual Property Law LL.M. program at Scalia Law and a Research Assistant at C-IP2.

On October 12th and 13th, the Center for Intellectual Property x Innovation Policy (C-IP2) hosted its 2023 Annual Fall Conference, this year titled First Sale: The Role of IP Rights in Markets. One topic that attracted significant attention was the role of copyright law in generative artificial intelligence. A discussion on Generative AI & Human Authorship was highlighted in one of the key copyright panels of the event. The discussion included a number of distinguished speakers: John Tehranian, the Paul W. Wildman Chair and Professor of Law at Southwestern Law School; Van Lindberg, a partner at Taylor English Duma LLP specializing in IP law; Molly Torsen Stech, General Counsel for the International Association of Scientific, Technical, and Medical Publishers and an adjunct professor at American University School of Law; and Keith Kupferschmid, CEO of the Copyright Alliance. The panel was moderated by Sandra Aistars, a professor at the Antonin Scalia Law School at George Mason University and the Senior Fellow for Copyright Research & Policy at C-IP2. Speakers addressed how copyright law fits with generative AI technology.

According to Tehranian, the copyright issues raised by generative AI are not new but are based on law that has been developing for decades, if not centuries. Notably, the Copyright Act of 1976 does not define the word “author.” Cases like the Ninth Circuit’s Naruto v. Slater (2018) and the D.C. District Court’s Thaler v. Perlmutter (2023), as well as guidance from the Copyright Office, have each analogized to earlier case law to hold that only human beings can be authors for copyright purposes. Nevertheless, whether human AI developers and prompt engineers can be authors of the outputs of generative AI models remains an open question in determining AI’s place within copyright law.

Approaches vary in shaping AI’s place in copyright jurisprudence, and, as the panelists acknowledged, no definitive right answer has been established. Generative AI has seen IP scholars and practitioners return to the old forge of jurisprudence, one where the exchange of opposing ideas sharpens the tools necessary to develop a viable solution for protecting the rights of all copyright interests involved. Protecting creative expression while preserving room for innovation was the guiding star for each panelist in addressing the rights of AI developers, existing copyright owners, and any rights to be found for users of AI systems. As Tehranian stated, one should not be quick to deem existing copyright law and its protections inadequate for new technologies. Among other interests, the discussion addressed the importance of hearing the voices of the creators whose rights would be affected by new developments. Touching on seminal cases like Burrow-Giles Lithographic Co. v. Sarony (1884) and Andy Warhol Foundation for the Visual Arts v. Goldsmith (2023), the panelists discussed a host of issues, including the role of authorship related to photographers and prompt engineers, subject rights in photographs and other visual works, and the application of the fair use doctrine to the use of copyrightable works in training AI models.

Kupferschmid discussed the ingestion process in training artificial intelligence and the effects on different industries, staking out five key principles. First, he stated that the rights of creatives and copyright owners must be respected in formulating new legislation. Second, longstanding copyright laws must not be cast aside to subsidize new AI technologies. Third, the ingestion of copyrighted works by AI systems implicates the right of reproduction described in 17 U.S.C. § 106. Fourth, Kupferschmid argued that the ingestion of copyrighted materials is not categorically fair use. Rather, he contended that fair use analysis requires a fact-intensive inquiry and will likely show that ingestion by AI is rarely fair use. Finally, he posited that AI developers must obtain a license from copyright owners of works used to train their models. Kupferschmid also asserted that the ability of copyright owners to license their works to AI developers is a market that would be usurped by deeming AI ingestion a fair use.

Lindberg also acknowledged that fair use analysis requires a fact-intensive inquiry but contended that the ingestion of copyrighted works in training AI systems is likely to be, and should be, considered a fair use. While a copy is created in the ingestion of a work by an AI, Lindberg analogized the training process of AI systems to a hypothetical in which a person takes a book and creates a statistical table calculating the number of nouns, verbs, adjectives, and other parts of speech and the probability of their ordering. He claimed that this is both transformative and outside the scope of the copyright owner’s market. Lindberg likewise suggested that, in most cases, there is no translation from any specific ingested material to the outputs generated by a given prompt. Thus, there is no likelihood of substantial similarity between works ingested and outputs created by using an AI system. Kupferschmid replied that Lindberg’s description of the data used in training the AI is the essence of copyrightable expression—the words chosen by the author, and the order in which they are placed. That an AI system translates this function into computer code makes it no less protectable expression than if a human were to translate an author’s protected work from English into French. Lindberg partially conceded the point but contended that any substantial similarity that resulted in outputs would occur as a result of overtraining or overfitting AI models, a result that most proponents of generative AI do not seek to encourage and one that he conceded is unlikely to fall within the scope of fair use. The panelists cited the Books3 data set, which has been used to train various large language AI models, as a problematic example of training sets that could result in a variety of undesirable outcomes.

Tehranian agreed with Lindberg, stating that existing precedent could deem AI training a fair use. Acknowledging that the recent Supreme Court case Andy Warhol Foundation for the Visual Arts v. Goldsmith cut back on the weight afforded to certain transformative uses in fair use determinations, he distinguished that the Court did not reduce the weight of trans-purpose uses, where the copyrighted material is not used to create a new work but instead used for a purpose beyond the scope of an author’s market. While Tehranian stated that he did not necessarily agree that ingestion during AI training should be fair use, he concluded that the existing law creates a likelihood that it will be so.

The panel also discussed the NO FAKES Act, introduced that week by senators from both major parties. See Chris Coons et al., Draft Copy of the NO FAKES Act of 2023, Chris Coons (Nov. 28, 2023), https://www.coons.senate.gov/imo/media/doc/no_fakes_act_draft_text.pdf. Tehranian noted that this proposed legislation would help protect against unauthorized uses of a person’s name, image, or likeness by creating a federal right of publicity, explaining that federal trademark law and state rights of publicity are currently inadequately equipped to handle these issues clearly and consistently.

Stech agreed with each of the five points described by Kupferschmid. Specifically, she argued that the quality of data ingested by AI weighs against a finding of fair use. She also argued in favor of granting copyright over images to the subjects of photographs. She stated that “there are two humans contributing creativity in a photograph,” and that photographers may not be the only authors of photographs including a human subject. Professor Aistars reminded the panel of a case involving model Emily Ratajkowski posting on social media a photograph taken of her by paparazzi in which she had covered her face with a bouquet of flowers. She was then sued for copyright infringement by the photographer. Stech, Tehranian, and Aistars all suggested that this serves as an example where subjects may deserve some rights in photographs taken of them.

Abstract questions surrounding the meaning and value of art and creation continue to force copyright law to tread carefully in providing legal protection to creative expression without becoming a deterministic judge of artistic value. Whether prompt engineers will be considered authors of AI-generated works, whether the ingestion of copyrighted material in training AI models is fair use, and whether the subjects of visual works are entitled to some rights in the images taken of them are all questions at the forefront of IP law in the 21st century. How Congress and higher courts will address them is not yet known, leaving open the discussion for creatives and lawyers alike to help discern the proper scope of protection for generative AI, its outputs, and the visual arts. As the panelists acknowledged, predictions for the state of policymaking regarding AI are unclear, but there is one certainty: protecting the rights of artists and their creative expressions must be the driving force behind the application of copyright law to works generated with new technologies.


Additional Resources: