
Paradise Rejected: A Conversation about AI and Authorship with Dr. Ryan Abbott

This post comes from Sandra Aistars, Clinical Professor and Director of the Arts & Entertainment Advocacy Clinic at George Mason University, Antonin Scalia Law School, and Senior Fellow for Copyright Research and Policy & Senior Scholar at C-IP2.


On March 17, 2022, I had the pleasure of discussing artificial intelligence and authorship with Dr. Ryan Abbott, the lawyer representing Dr. Stephen Thaler, inventor of the “Creativity Machine.” The Creativity Machine is the AI that generated the artwork A Recent Entrance to Paradise, which was denied copyright registration by the United States Copyright Office. Dr. Abbott, Dr. Thaler, and his AI have exhausted all mandatory administrative appeals to the Office and have announced that they will soon sue the Office to obtain judicial review of the denial. You can listen to the conversation here.

Background:  

Dr. Thaler filed an application for copyright registration of A Recent Entrance to Paradise (the Work) on November 3, 2018. For copyright purposes, the Work is categorized as a work of visual art, autonomously generated by the AI without any human direction or intervention. It stems, however, from a larger project: Dr. Thaler’s experiments in designing neural networks that simulate the creative activities of the human brain. A Recent Entrance to Paradise is one in a series of images generated, and described in text, by the Creativity Machine as part of a simulated near-death experience within Dr. Thaler’s broader research into, and invention of, artificial neural networks. Thaler’s work also raises parallel issues of patent law and policy, which were beyond the scope of our discussion.

The registration application identified the author of the Work as the “Creativity Machine,” with Thaler listed as the claimant by virtue of a transfer premised on his “ownership of the machine.” In his application, Thaler explained to the Office that the Work “was autonomously created by a computer algorithm running on a machine,” and he sought to “register this computer-generated work as a work-for-hire to the owner of the Creativity Machine.”[i]

The Copyright Office Registration Specialist reviewing the application refused to register the claim, finding that it “lacks the human authorship necessary to support a copyright claim.”[ii]

Thaler requested that the Office reconsider its initial refusal to register the Work, arguing that “the human authorship requirement is unconstitutional and unsupported by either statute or case law.”[iii] 

The Office re-evaluated the claims and held its ground, concluding that the Work “lacked the required human authorship necessary to sustain a claim in copyright” because Thaler had “provided no evidence on sufficient creative input or intervention by a human author in the Work.”[iv] 

37 C.F.R. § 202.5 establishes the procedure for reconsideration of the Copyright Office’s refusals to register. Pursuant to this procedure, Thaler appealed the refusal to the Copyright Office Review Board, composed of the Register of Copyrights, the General Counsel of the Copyright Office, and a third individual sitting by designation. The regulation requires that the applicant “include the reasons the applicant believes registration was improperly refused, including any legal arguments in support of those reasons and any supplementary information, and must address the reasons stated by the Registration Program for refusing registration upon first reconsideration. The Board will base its decision on the applicant’s written submissions.”

According to the Copyright Office, Thaler renewed arguments from his first two unsuccessful attempts before the Office that failure to register AI-created works is unconstitutional. He largely continued to advance policy arguments that registering copyrights in AI-generated works would further the underlying goals of copyright law, including the constitutional rationale for protection, and he failed to address the Office’s request to cite case law supporting his assertion that the Office should depart from its reliance on existing jurisprudence requiring human authorship.

The Office largely dismissed Thaler’s second argument, that the work should be registered as a work made for hire, as dependent on its resolution of the first: since the Creativity Machine was not a human being, it could not enter into a “work made for hire” agreement with Thaler. The Office also rejected the argument that, because corporations can be considered persons under the law, other non-humans such as AIs should likewise enjoy the rights that humans do, noting that corporations are composed of collections of human beings. Finally, the Office explained that the “work made for hire” doctrine speaks only to who owns a given work, not to whether the work is copyrightable in the first place.

Of course, both Dr. Abbott and the Copyright Office were bound in this administrative exercise by their respective roles. The Copyright Office must take the law as it finds it. Although Dr. Abbott criticized the Office for applying case law from “the Gilded Age,” the Office noted in its rejection that “[i]t is generally for Congress,” not the Board, “to decide how best to pursue the Copyright Clause’s objectives.” Eldred v. Ashcroft, 537 U.S. 186, 212 (2003). “The Board must apply the statute enacted by Congress; it cannot second-guess whether a different statutory scheme would better promote the progress of science and useful arts.”[v] Likewise, Dr. Abbott, acting on behalf of Dr. Thaler, was required to exhaust all administrative avenues of appeal before pursuing judicial review of the Office’s interpretation of constitutional and statutory directives and case law.

Our lively discussion begins with level-setting to ensure that listeners understand the goals of Dr. Thaler’s project, goals which encompass scientific innovation, artistic creation, and, apparently, legal and policy clarification of the IP space.

Dr. Abbott and I additionally investigate the constitutional rationales for copyright and how registering, or not registering, a copyright in an AI-created work is or is not in line with those goals. In particular, we debate utilitarian/incentive-based justifications, property-rights theories, and how the rights of artists whose works might be used to train an AI might (or might not) be accounted for in different scenarios.

Turning to Dr. Thaler’s second argument, that the work should be registered to him as a work made for hire, we discussed the difficulties of maintaining that argument separately from the copyrightability question. It seems to me that the Copyright Office is correct that the argument must rise or fall with the resolution of the baseline question of whether a copyrightable work can be authored by an AI in the first place. The other challenging question Dr. Abbott will face is how to overcome the statutory “work made for hire” requirements in the context of an AI-created work without corrupting what is intended to be a very narrow exception to the normal operation of copyright law and authorship. This is already a controversial area, one thought by many to be unfavorable to individual authors because it deems a corporation to be the author of the work, sometimes in circumstances where the human author is not in a bargaining position to adequately understand the copyright implications or to bargain for them differently. In the case of an AI, the ability to bargain for rights, or later to challenge the rights granted, particularly if they are granted on the basis of property ownership, seems dubious.

In closing the discussion, Dr. Abbott confirmed that his client intends to seek judicial review of the refusal to register. 

 

[i] Opinion Letter of Review Board Refusing Registration to Ryan Abbott (Feb. 14, 2022).

[ii] Id. (citing Initial Letter Refusing Registration from U.S. Copyright Office to Ryan Abbott (Aug. 12, 2019)).

[iii] Id. (citing Letter from Ryan Abbott to U.S. Copyright Office at 1 (Sept. 23, 2019) (“First Request”)).

[iv] Id. (citing Refusal of First Request for Reconsideration from U.S. Copyright Office to Ryan Abbott at 1 (Mar. 30, 2020)).

[v] Id. at 4.


In Opposition to Copyright Protection for AI Works

This response to Dr. Ryan Abbott comes from David Newhoff.

On February 14, the U.S. Copyright Office confirmed its rejection of an application for a claim of copyright in a 2D artwork called “A Recent Entrance to Paradise.” The image, created by an AI designed by Dr. Stephen Thaler, was rejected by the Office under the longstanding doctrine which holds that, for copyright to attach, a work must be the product of human authorship. Among the examples cited in the Copyright Office Compendium as ineligible for copyright protection is “a piece of driftwood shaped by the ocean,” a potentially instructive analog as the debate about copyright and AI grows louder.

What follows assumes that we are talking about autonomous AI machines producing creative works that no human envisions at the start of the process, other than perhaps the medium. So, the human programmers might know they are building a machine to produce music or visual works, but they do not engage in co-authorship with the AI to produce the expressive elements of the works themselves. Code and data go in, and something unpredictable comes out, much like nature forming the aesthetic piece of driftwood.

As a cultural question, I have argued many times that AI art is a contradiction in terms—not because an AI cannot produce something humans might enjoy, but because the purpose of art, at least in the human experience so far, would be obliterated in a world of machine-made works. It seems that what the AI would produce would be literally and metaphorically bloodless, and after some initial astonishment with the engineering, we may quickly become uninterested in most AI works that attempt to produce more than purely decorative accidents.

In this regard, I would argue that the question presented is not addressed by the “creative destruction” principle, which demands that we not stand in the way of machines doing things better than humans. “Better” is a meaningful concept if the job is microsurgery but meaningless in the creation or appreciation of art. Regardless, the copyrightability question does not need to delve too deeply into the nature or purpose of art because the human element in copyright is not just a paragraph about registration in the USCO Compendium but, in fact, runs throughout application of the law.

Doctrinal Oppositions to Copyright in AI Works

In the United States and elsewhere, copyright attaches automatically to the “mental conception” of a work the moment the conception is fixed in a tangible medium such that it can be perceived by an observer. So, even at this fundamental stage, separate from the Copyright Office approving an application, the AI is ineligible because it does not engage in “mental conception” by any reasonable definition of that term. We do not protect works made by animals, who possess consciousness that far exceeds anything that can be said to exist in the most sophisticated AI. (And if an AI attains true consciousness, we humans may have nothing to say about laws and policies on the other side of that event horizon.)

Next, the primary reason to register a claim of copyright with the USCO is to give the author the opportunity, if necessary, to file a claim of infringement in federal court. But to establish a basis for copying, a plaintiff must prove that the alleged infringer had access to the original work and that the secondary work is substantially or strikingly similar to the work allegedly copied. The inverse-ratio rule, applied by some courts, holds that the more clearly access can be proven, the less similarity weighs in the consideration, and vice versa. But in all claims of copying, independent creation (i.e., the principle that two authors might independently create nearly identical works) nullifies any complaint. These are considerations not just about two works, but about human conduct.

If AIs do not interact with the world, listen to music, read books, etc., in the sense that humans do these things, then, presumably, all AI works are works of independent creation. If multiple AIs are fed the same corpus of works (whether in or out of copyright) for the purpose of machine learning, and any two AIs produce works that are substantially, or even strikingly, similar to one another, the assumption should still be independent creation. Not just independent, but literally mindless, unless, again, the copyright question must first be answered by establishing AI consciousness.

In principle, AI Bob is not inspired by, or even aware of, the work of AI Betty. So, if AI Bob produces a work strikingly similar to a work made by AI Betty, any court would have to toss out BettyBot v. BobBot on a finding of independent creation. Alternatively, do we want human juries considering facts presented by human attorneys describing the alleged conduct of two machines?

If, on the other hand, an AI produces a work too similar to one of the in-copyright works fed into its database, this raises the question whether the AI designer has simply failed to achieve anything more than an elaborate Xerox machine. And hypothetical facts notwithstanding, there seems to be little need to ask new copyright questions in such a circumstance.

The factual copying complication raises two issues. One is that if there cannot be a basis for litigation between two AI creators, then there is perhaps little or no reason to register the works with the Copyright Office. But more profoundly, in a world of mixed human and AI works, we could create a bizarre imbalance whereby a human could infringe the rights of a machine while the machine could potentially never infringe the rights of either humans or other machines. And this is because the arguments for copyright in AI works unavoidably dissociate copyright from the underlying meaning of authorship.

Authorship, Not Market Value, is the Foundation of Copyright

Proponents of copyright in AI works will argue that the creativity applied in programming (which is separately protected by copyright) is coextensive with the works produced by the AIs they have programmed. But this would be like saying that I have a claim of co-authorship in a novel written by one of my children just because I taught them things when they were young. This does not negate the possibility of joint authorship between human and AI, but as stated above, the human must plausibly argue his own “mental conception” in the process as a foundation for his contribution.

Commercial interests vying for copyright in AI works will assert that the work-made-for-hire (WMFH) doctrine already implicates protection of machine-made works. When a human employee creates a protectable work in the course of his employment, the corporate entity, by operation of law, is automatically the author of that work. Thus, the argument will be made that if non-human entities called corporations may be legal authors of copyrightable works, then corporate entities may be the authors of works produced by the AIs they own. This analogizes copyrightable works to other salable property, like wines from a vineyard, but elides the fact that copyright attaches to certain products of labor, and not to others, because it is a fiction itself whose medium is the “personality of the author,” as Justice Holmes articulated in Bleistein.

The response to the WMFH argument should be that corporate-authored works are protected only because they are made by human employees who have agreed, under the terms of their employment, to provide authorship for the corporation. Authorship by the fictitious entity does not exist without human authorship, and I maintain that it would be folly to remove the human creator entirely from the equation. We already struggle with corporate personhood in other areas of law, and we should ask ourselves why we believe that any social benefit would outweigh the risk of allowing copyright law to exacerbate those tensions.

Alternatively, proponents of copyright for AI works may lobby for a sui generis revision to the Copyright Act with, perhaps, unique limitations for AI works. I will not speculate about the details of such a proposal, but it is hard to imagine one that would be worth the trouble, no matter how limited or narrow. If the purpose of copyright is to proscribe unlicensed copying (with certain limitations), we still run into the independent creation problem and the possible result that humans can infringe the rights of machines while machines cannot infringe the rights of humans. How does this produce a desirable outcome which does not expand the outsize role giant tech companies already play in society?

Moreover, copyright skeptics and critics, many with deep relationships with Big Tech, already advocate a rigidly utilitarian view of copyright law, which they then invoke to propose new limits on exclusive rights and protections. The utilitarian view generally rejects the notion that copyright protects any natural rights of the author beyond the right to be “paid something” for the exploitation of her works, and this cynical, mercenary view of authors would likely gain traction if we were to establish a new framework for machine authorship.

Registration Workaround (i.e., lying)

In the meantime, as Stephen Carlisle predicts in his post on this matter, we may see a lot of lying by humans registering works that were autonomously created by their machines. This is plausible, but if the primary purpose of registration is to establish a foundation for defending copyrights in federal court, the prospect of a discovery process could militate against rampant falsification of copyright applications. Knowing misrepresentation on an application is grounds for invalidating the registration and carries a fine of up to $2,500, and repeating the misrepresentation in court further implies perjury.

Of course, that’s only if the respondent can defend himself. A registration and threat of litigation can be enough to intimidate a party, especially if it is claimed by a big corporate tech company. So, instead of asking whether AI works should be protected, perhaps we should be asking exactly the opposite question: How do we protect human authorship against a technology experiment, which may have value in the world of data science, but which has nothing to do with the aim of copyright law?

About the IP Clause

And with that statement, I have just implicated a constitutional argument, because the purpose of copyright law, as stated in Article I, Section 8, Clause 8, is to “promote the progress of science.” Moreover, the first three subjects of protection in 1790 (maps, charts, and books) suggest a view at the founding that copyright’s purpose, twinned with the foundation for patent law, was more pragmatic than artistic.

Of course, nobody could reasonably argue that the American framers imagined authors as anything other than human or that copyright law has not evolved to encompass a great deal of art which does not promote the endeavor we ordinarily call “science.” So, we may see AI copyright proponents take this semantic argument out for a spin, but I do not believe it should withstand scrutiny for very long.

Perhaps, the more compelling question presented by the IP clause, with respect to this conversation, is what it means to “promote progress.” Both our imaginations and our experiences reveal technological results that fail to promote progress for humans. And if progress for people is not the goal of all law and policy, then what is? Surely, against the present backdrop in which algorithms are seducing humans to engage in rampant, self-destructive behavior, it does seem like a mistake to call these machines artists.


Privacy Law Considerations of AI and Big Data – In the U.S. & Abroad

By Kathleen Wills, Esq.*

Kathleen Wills is a graduate of Antonin Scalia Law School and former C-IP2 RA.

Artificial Intelligence and Big Data

While many of us have come to rely on biometric data when we open our phones with Apple’s “Face ID,” speak to Amazon’s Alexa, or scan our fingerprints to access something, it’s important to understand some of the legal implications of the big data feeding artificial intelligence (AI) algorithms. While “Big Data” refers to processing large-scale and complex data,[1] “biometrics data” refers to the physical characteristics of humans that can be extracted for recognition.[2] AI and biometrics work together in dynamics like those exemplified above, since AI is a data-driven technology and personal data has become propertised.[3] The type and sensitivity of the personal data used by AI depend on the application, and not all applications trace details back to a specific person.[4] The already-active field of Big Data analysis of biometrics working with AI continues to grow, promising to pose both challenges and opportunities for consumers, governments, and companies.

A. How AI Uses Big Data

AI works with Big Data to accomplish several different outcomes. For example, AI can use Big Data to recognize, categorize, and find relationships from the data.[5] AI can also work with Big Data to adapt to patterns and identify opportunities so that the data can be understood and put into context. For organizations looking to improve efficiency and effectiveness, AI can leverage Big Data to predict the impact of various decisions. In fact, AI can work with algorithms to suggest actions before they have been deployed, assess risk, and provide feedback in real time from the Big Data pools. When AI works with Big Data and biometrics, AI can perform various types of human recognition for applications in every industry.[6] In other words, the more data AI can process, the more it can learn. Thus, the two rely on each other in order to keep pushing the bounds of technological innovation and machine learning and development.

B. How AI relates to Privacy Laws

Since AI involves analyzing and understanding Big Data, often the type involving biometrics, or personal information, there are privacy considerations and interests to protect. Further, since businesses want access to consumer data in order to optimize the market, governments are placing limits on the use and retention of such data. For some sectors, the boundary between privacy and AI becomes an ethical one. One can immediately imagine the importance of keeping biometric health data private, calling to mind the purpose of HIPAA, the Health Insurance Portability and Accountability Act,[7] even though AI can help doctors better understand patterns in their patients’ health, diagnoses, and even surgeries.

I. United States Privacy Law

A. Federal Privacy Law

Despite growing concerns about the privacy and security of data used in AI, there is currently no comprehensive federal privacy law in the United States. Senators Jeff Merkley and Bernie Sanders proposed the National Biometric Information Privacy Act in 2020, which was not passed into law; it contained provisions such as requiring consent from individuals before collecting information, providing a private right of action for violations, and imposing an obligation to safeguard identifying information.[8] The act would also have required private entities to draft public policies and implement mechanisms for destroying information, limit collection of information to valid business reasons, inform individuals that their information is stored, and obtain written releases before disclosure.

B. State Privacy Laws

A few states have passed their own privacy laws or amended existing laws to include protections for biometric data, such as Illinois, California, Washington, New York, Arkansas, Louisiana, Oregon, and Colorado. Other states have pending bills or have tried, so far without success, to pass biometric protection regulation.

The first, and most comprehensive, biometric regulation was enacted in 2008: the Illinois Biometric Information Privacy Act (BIPA), which governs collecting and storing biometric information.[9] The law applies to all industries and private entities but exempts the State and any local government agency.[10] BIPA requires entities to inform individuals in writing that their information is being collected and stored, and why, and restricts selling, leasing, trading, or profiting from such information. There is a right of action for “any person aggrieved by a violation” in state circuit court, or as a supplemental claim in federal district court, that can yield $1,000 for negligent violations and $5,000 for intentional or reckless violations, as well as attorneys’ fees and equitable relief. In 2018–2019, over 200 lawsuits were reported under BIPA, usually class actions against employers.[11]

Texas’s regulation, Chapter 503: Biometric Identifiers, varies greatly from Illinois’s act.[12] Under this chapter, a person can’t commercialize another’s biometric identifier unless they inform the person and receive consent; once consent is obtained, one can’t sell, lease, or disclose that identifier to anyone else unless the individual consents to that financial transaction or such disclosure is permitted by a federal or state statute. The chapter suggests a timeframe for destroying identifiers, sets a maximum civil penalty of $25,000 per violation, and is enforced by the state attorney general. Washington’s legislation, Chapter 19.375: Biometric Identifiers, is similar to Texas’s regulation in that the attorney general enforces it; however, Washington carved out security purposes from the notice and consent procedures usually required before collecting, capturing, or enrolling identifiers.[13]

California enacted the CCPA, the California Consumer Privacy Act of 2018, which provides a broader definition of “biometric data” and gives consumers the right to know which information is collected and how it’s used, to delete that information, and to opt out of the sale of that information.[14] The law applies to entities even without a physical presence in the state, provided they (a) have a gross annual revenue of over $25 million, (b) buy, receive, or sell the personal information of 50,000 or more California residents, households, or devices, or (c) derive 50% or more of their annual revenue from selling California residents’ personal information.[15] The CCPA was amended and expanded by the CPRA (the California Privacy Rights Act), which becomes effective January 1, 2023.[16] One expansion under the CPRA is a new category of “sensitive personal information,” which encompasses government identifiers; financial information; geolocation; race; ethnicity; religious or philosophical beliefs; genetic, biometric, and health information; sexual orientation; nonpublic communications such as email and text messages; and union membership. The CPRA also adds new consumer privacy rights, including the right to restrict sensitive information, and creates a new enforcement authority. Thus, the CPRA brings California’s privacy law closer to the European Union’s General Data Protection Regulation.[17]

New York amended its existing data breach notification law to bring biometric information within the definition of “private information.”[18] Like California’s law, the SHIELD Act applies to all companies holding residents’ data; in addition, the SHIELD Act outlines various procedures companies should implement for administrative, technical, and physical safeguards. New York also passed limited biometric legislation for employers, though it provides no private right of action.[19] Similar to New York, Arkansas amended its Personal Information Protection Act so that “personal information” now includes biometric data. Louisiana amended its Data Breach Security Notification Law to do the same, and added data security and destruction requirements for entities.[20] Finally, Oregon amended its Information Consumer Protection Act to include protections for biometric data alongside consumer privacy and data rights.

Most recently, on July 8, 2021, Colorado enacted the Colorado Privacy Act (CPA) after the Governor signed the bill into law.[21] The state Attorney General explains that the law “creates personal data privacy rights” and applies to any person, commercial entity, or governmental entity that maintains personal identifying information. Like consumers in California, consumers in Colorado can opt out of certain provisions of the Act, but not all; residents cannot opt out of the unnecessary and irrelevant collection of information, and controllers must receive a resident’s consent before processing personal information. As for remedies, the CPA provides a 60-day period to cure noncompliance with the Act, after which controllers face civil penalties, but consumers do not have a private right of action under this law.

II. International Privacy Law 

Other countries have pioneered data privacy regulations, as exemplified by the European Union’s General Data Protection Regulation (GDPR).[22] Since 2018, this regulation has been enforced against companies that operate within any EU member state in order to protect “natural persons with regard to the processing of personal data and rules relating to the free movement of personal data.” The GDPR “protects fundamental rights and freedoms of natural persons,” particularly as to personal data. The regulation is quite comprehensive, with chapters on the rights of data subjects, transfers, remedies, and even provisions for particular processing situations such as freedom of expression and information. There are several carve-outs or “exceptions” to the regulation, such as where a citizen gives consent for a specific purpose or the data are necessary for preventative or occupational medicine. Citizens also have “the right to be forgotten,” may withdraw consent at any time, and can lodge a complaint for violations or seek judicial remedy, compensation, or administrative fines.

Since the GDPR protects the data of EU citizens and residents, it has an extraterritorial effect. In January 2021, the European Data Protection Board (EDPB), jointly with the European Data Protection Supervisor, adopted written opinions on new standard contractual clauses under the GDPR. One set of clauses addresses the transfer of personal data by processors to third countries outside of the EU.[23] The transfer of personal data to a third country or international organization may take place only if certain conditions are met, namely observing some of the safeguards of European data protection law. However, enforcement of the GDPR is taking time, and Ireland’s data protection commissioner, Helen Dixon, has explained that enforcement goes beyond issuing fines. Interestingly, as Apple, Facebook, Google, LinkedIn, and Twitter are based in Ireland, that country takes the lead in investigating those companies.[24]

The GDPR has influenced other countries’ privacy laws. For example, Canada has a federal privacy law, the Personal Information Protection and Electronic Documents Act, and provincial laws that protect personal information in the private sector, which were heavily influenced by the EU’s GDPR.[25] Argentina has begun the legislative process to update its National Data protection regime, and such resolution was passed in January 2019.[26] Further, Brazil’s General Data Protection Law replicates portions of the GDPR and includes extraterritoriality provisions, but it also allows for additional flexibility. The GDPR has also affected the Israeli regulatory enforcement, which has been recognized by the European Commission as an adequate jurisdiction for processing personal information. While the list of countries affected by, or taking notes from, the GDPR is quite extensive, it’s important to note that this is a global challenge and opportunity to protect the privacy of consumers when handling biometrics, Big Data, and using them in AI.

III. Why the Legal Considerations for AI Matter

AI and the use of Big Data and biometric information in everyday life affect a multitude of individuals and entities. AI can draw on a consumer’s personal, and often highly sensitive, information, and misappropriation of or violations involving that information are enforced against business entities. Governments all over the globe are working to determine which regulations, if any, to pass concerning AI and what the scope of such rules should be. In the U.S., some states task the Attorney General with enforcing state privacy laws, while other state laws provide individuals with a private right of action. Interestingly, given the role AI plays in innovation and technology, venture capital (VC) firms might also play a role as the law develops, since they can work with policymakers and lobbyists to assess potential market failures, risks, and the benefits of protecting AI and data.[27]

In addition to the individuals, governments, entities, and industries affected by AI and Big Data biometric analysis, there are broader legal implications. While this article discusses, at a high level, the international and national privacy law considerations raised by AI, other constitutional and consumer protection laws are implicated as well. AI and other uses of Big Data and biometric information have quickly become ingrained in our everyday lives in the decades since IBM created the first smartphone in 1992. As laws all over the world continue to be discussed, drafted, killed, adopted, or amended, it is important to understand AI and the data it uses.


* The information in this article does not, nor is it intended to, constitute legal advice, and has been made available for general information purposes only.

[1] Shafagat Mahmudova, Big Data Challenges in Biometric Technology, 5 J. Education and Management Engineering 15-23 (2016).

[2] Ryan N. Phelan, Data Privacy Law and Intellectual Property Considerations for Biometric-Based AI Innovations, Security Magazine (June 12, 2020).

[3] Gianclaudio Malgieri, Property and (Intellectual) Ownership of Consumers’ Information: A New Taxonomy for Personal Data, 4 Privacy in Germany 133 ff (April 20, 2016).

[4] Jan Grijpink, Privacy Law: Biometrics and privacy, 17 Computer Law & Security Review 154-160 (May 2001).

[5] Jim Sinur and Ed Peters, AI & Big Data; Better Together, Forbes, https://www.forbes.com/sites/cognitiveworld/2019/09/30/ai-big-data-better-together/?sh=5c8ed5f360b3 (Sept. 30, 2019).

[6] Joshua Yeung, What is Big Data and What Artificial Intelligence Can Do?, Towards Data Science, https://towardsdatascience.com/what-is-big-data-and-what-artificial-intelligence-can-do-d3f1d14b84ce (Jan. 29, 2020).

[7] David A. Teich, Artificial Intelligence and Data Privacy – Turning a Risk into a Benefit, Forbes, https://www.forbes.com/sites/davidteich/2020/08/10/artificial-intelligence-and-data-privacy–turning-a-risk-into-a-benefit/?sh=5c4959626a95 (Aug. 10, 2020).

[8] Joseph J. Lazzarotti, National Biometric Information Privacy Act, Proposed by Sens. Jeff Merkley and Bernie Sanders, National Law Review, https://www.natlawreview.com/article/national-biometric-information-privacy-act-proposed-sens-jeff-merkley-and-bernie (Aug. 5, 2020).

[9] Natalie A. Prescott, The Anatomy of Biometric Laws: What U.S. Companies Need to Know in 2020, National Law Review (Jan. 15, 2020).

[10] Biometric Information Privacy Act, 740 ILCS 14 (2008).

[11] Supra note 9.

[12] Tex. Bus. & Com. Code § 503.001 (2009).

[13] Wash. Rev. Code Ann. § 19.375.020 (2017).

[14] California Consumer Privacy Act (CCPA), State of California Department of Justice, https://oag.ca.gov/privacy/ccpa (last accessed May 22, 2021).

[15] Rosenthal et al., Analyzing the CCPA’s Impact on the Biometric Privacy Landscape, https://www.law.com/legaltechnews/2020/10/14/analyzing-the-ccpas-impact-on-the-biometric-privacy-landscape/ (Oct. 14, 2020).

[16] Brandon P. Reilly and Scott T. Lashway, Client Alert: The California Privacy Rights Act has Passed, Manatt, https://www.manatt.com/insights/newsletters/client-alert/the-california-privacy-rights-act-has-passed (Nov. 11, 2020).

[17] Peter Banyai et al., California Consumer Privacy Act 2.0 – What You Need to Know, JDSupra, https://www.jdsupra.com/legalnews/california-consumer-privacy-act-2-0-93257/ (Nov. 27, 2020).

[18] Samantha Ettari, New York SHIELD Act: What New Data Security Requirements Mean for Your Business, JDSupra (June 1, 2020).

[19] Supra note 9, referring to N.Y. Lab. Law §201-a.

[20] Kristine Argentine & Paul Yovanic, The Growing Number of Biometric Privacy Laws and the Post-COVID Consumer Class Action Risks for Businesses, JDSupra, https://www.jdsupra.com/legalnews/the-growing-number-of-biometric-privacy-2648/#:~:text=In%202019%2C%20Arkansas%20also%20jumped,of%20an%20individual’s%20biological%20characteristics.%E2%80%9D (June 9, 2020).

[21] The Colorado Privacy Act: Explained, Beckage, https://www.beckage.com/privacy-law/the-colorado-privacy-act-explained/ (last accessed July 13, 2021); see also Phil Weiser: Colorado Attorney General, Colorado’s Consumer Data Protection Laws: FAQ’s for Business and Government Agencies, https://coag.gov/resources/data-protection-laws/ (last accessed July 13, 2021).

[22] General Data Protection Regulation (GDPR), https://gdpr-info.eu/ (last accessed May 22, 2021).

[23] Update on European Data Protection Law, National Law Review, https://www.natlawreview.com/article/update-european-data-protection-law (Feb. 24, 2021).

[24] Adam Satariano, Europe’s Privacy Law Hasn’t Shown Its Teeth, Frustrating Advocates, New York Times, https://www.nytimes.com/2020/04/27/technology/GDPR-privacy-law-europe.html (April 28, 2020).

[25] Eduardo Soares et al., Regulation of Artificial Intelligence: The Americas and the Caribbean, Library of Congress Legal Reports, https://www.loc.gov/law/help/artificial-intelligence/americas.php (Jan. 2019).

[26] Ius Laboris, The Impact of the GDPR Outside the EU, Lexology, https://www.lexology.com/library/detail.aspx?g=872b3db5-45d3-4ba3-bda4-3166a075d02f (Sept. 17, 2019).

[27] Jacob Edler et al., The Intersection of Intellectual Property Rights and Innovation Policy Making – A Literature Review, WIPO (July 2015).

Categories
High Tech Industry Patents

Accenture Report Outlines How 5G Technology Accelerates Economic Growth

The following post comes from Wade Cribbs, a 2L at Scalia Law and a Research Assistant at CPIP.

By Wade Cribbs

Everyone in the technology industry knows that 5G is poised to revolutionize the world, but the finer points of 5G’s impact on the U.S. economy are detailed in a new report by Accenture entitled The Impact of 5G on the United States Economy. In the report, Accenture explains how 5G stands to add up to $1.5 trillion to the U.S. GDP and create or transform up to 16 million jobs from 2021 to 2025.

5G’s benefits include enabling the development of new industries, improving current industries, and accommodating the current, rapid growth of interconnected technologies. Autonomous vehicles are only achievable through 5G’s increased bandwidth, which can handle the large amounts of data transferred to and from the sensors on vehicles as they operate on the road. Furthermore, 5G is necessary to support the expected growth to 29.3 billion devices and 14.7 billion machine-to-machine connections by 2023. To get a better look at the specific impact 5G will have on the coming business and consumer landscape, Accenture focuses on five key business sectors: manufacturing, retail, healthcare, automotive and transportation, and utilities.

With 10,000 baby boomers retiring each day, the manufacturing industry is in dire need of a way to meet its labor shortage. Due in part to a lack of interest from younger generations, manufacturers are increasingly looking to automation. 5G will allow for an unprecedented level of control and synchronization across the warehouse floor. Examples of manufacturing improvements implementable with 5G include: AI-assisted asset management using video analytics and attached sensors; connected-worker experiences implementing augmented reality to provide workers with a safer work experience and reduced training times; and enhanced quality monitoring through a combination of AI inspection and UHD video streaming. Accenture estimates that 5G will provide a $349.9 billion increase in manufacturing sales for the equipment and products necessary to implement 5G in other business sectors.

In the retail sector, 5G can provide the data needed to support frictionless checkout experiences. AI used in combination with UHD video monitoring will allow customers to be charged when putting items in their basket, eliminating the long lines that 86% of customers say have caused them to leave a store, which in turn leads to $37.7 billion in missed sales annually. Furthermore, this same AI monitoring system can be used to personalize the shopping experience by alerting sales associates to a customer with a problem without the customer having to find and flag down an associate; the system can also monitor for theft, which costs the retail industry $25 million daily. Overall, Accenture estimates that the retail industry stands to see a $269.5 billion increase in sales due to 5G sales and cost savings.

Healthcare costs are expected to rise from $3.4 trillion to $6 trillion by 2027. As the need for healthcare professionals is expected to outstrip the labor supply, improvements in technology and treatment efficiency are essential to address the problems presented by an aging population. The good news is that 5G is suited to address just these issues by eliminating waste, which is estimated to make up as much as 30% of spending. 5G will expand medical professionals’ ability to monitor patients, extending the option of at-home care to a wider range of patients and lowering the number of doctors required to monitor intensive care patients. Doctors will also be able to reach previously unreachable patients for virtual consultations. No longer will rural Americans have to travel long distances to visit their doctor in the city. 5G will allow online consultants rapid access to vast amounts of data, such as MRI images, CAT scans, ultrasounds, ECGs, and stethoscope data. Accenture estimates that the healthcare industry stands to gain $192.3 billion in economic output and up to 1.7 million jobs.

As vehicles become smarter, safer, and more connected, 5G will enable automobiles to exchange data with other vehicles, the automotive infrastructure, and pedestrians. This will enhance vehicle safety, fleet management, and smart traffic management. The U.S. National Highway Traffic Safety Administration (NHTSA) estimates that the combined impact of vehicle-to-everything communication technology could reduce the severity of 80% of sober multi-vehicle crashes and 70% of crashes involving trucks. 5G video-based telematics will allow for automated vehicle fleets and fleet management capability, such as improved logistics security and goods-condition diagnostics to eliminate the up to 20% of empty cargo space in U.S. trucks. Through smart traffic managing by vehicle-to-vehicle communication and vehicle-to-infrastructure communication, traffic congestion, traffic accidents, and smog due to idling can all be reduced by an expected 15 to 30%. On the whole, Accenture estimates that $217.1 billion in revenue will be generated in the automotive and transportation industry by 5G.

5G will address multiple problems facing the utility industry, including vegetation and asset management, energy supply and resiliency, and next-generation workforces. 5G will allow smart grid technology to be implemented that can track and adapt to real-time disruptions to the power grid. In combination with smart grid technology, smart power plant technology will be able to map out peak power use and wear on equipment to determine optimal times for taking a machine offline for maintenance. Safer work environments can be created for the next generation workforce using augmented and virtual reality to train and eliminate manual methods with digital tools. Accenture estimates that the utility industry stands to grow by $36.9 billion in total sales from the implementation of 5G.

Accenture concludes that 5G is the necessary step towards achieving a new normal through AI, mass machine communications, and digital cloud technology. Every aspect of American life will be affected, and an unprecedented boost will be given to the economy.

To read the report, please click here.

Categories
Patent Law

Professor Tabrez Ebrahim on Artificial Intelligence Inventions

The following post comes from Associate Professor of Law Tabrez Ebrahim of California Western School of Law in San Diego, California.

By Tabrez Ebrahim

Artificial intelligence (AI) is a major concern for the United States Patent and Trademark Office (USPTO), for patent theory and policy, and for society. The USPTO requested comments from stakeholders about AI and released a report titled “Public Views on Artificial Intelligence and Intellectual Property Policy.” Patent law scholars have written about AI’s impact on inventorship and non-obviousness, and they have acknowledged that the patent system is vital for the development and use of AI. However, there is a prevailing gap in this literature: the effect of AI on patent disclosure remains understudied. The Center for the Protection of Intellectual Property (CPIP) supported my research in this vein through the Thomas Edison Innovation Fellowship.

In my new paper, Artificial Intelligence Inventions & Patent Disclosure, I claim that AI fundamentally challenges disclosure in patent law, which has not kept up with rapid advancements in AI, and I seek to invigorate the goals that patent law’s disclosure function is thought to serve for society. In so doing, I assess the role that AI plays in the inventive process, how AI can produce AI-generated output (that can be claimed in a patent application), and why it should matter for patent policy and for society. I introduce a taxonomy comprising AI-based tools and AI-generated output that I map with social-policy-related considerations, theoretical justifications and normative reasoning concerning disclosure for the use of AI in the inventive process, and proposals for enhancing disclosure and the impact on patent protection and trade secrecy.

AI refers to mathematical and statistical inference techniques that identify correlations within datasets to imitate decision making. An AI-based invention can be either: (1) an invention that is produced by AI; (2) an invention that applies AI to other fields; (3) an invention that embodies an advancement in the field of AI; or (4) some combination of the aforementioned. I focus on the first of these concerning the use of AI (what I term an “AI-based tool”) to produce output to be claimed as an invention in a patent application (what I term “AI-generated output”).

The use of AI in patent applications presents capabilities that were not envisioned for the U.S. patent system and allows for inventions based on AI-generated output that appear as if they were invented by a human. Inventors may not disclose the use of AI to the USPTO, but even if they were to do so, the lack of transparency and difficulty in replication with the use of AI presents challenges to the U.S. patent system and for the USPTO.

As a result of the use of AI-based tools in the inventive process, inventions may be fictitious or imaginary yet appear as if they had been created by humans (such as in the physical world) and still meet the enablement and written description requirements. These inventions may be imaginary, never achieved, or unworkable to the inventor, but may appear as if they were created, tested, or made workable to reasonable onlookers or to patent examiners.

The current standard for disclosure in patent law is the same for an invention produced with AI as for any invention generated by a human being without it. However, the use of AI in the inventive process should necessitate a reevaluation of patent law’s disclosure function because: (1) AI can produce a volume of fictitious or imaginary patent applications (that nonetheless meet the enablement and written description requirements) that would stress the USPTO and the patent system; and (2) advanced AI in the form of deep learning is not well understood (due to hidden layers whose weights evolve), so a disclosure may insufficiently describe how to make and use the invention (even when it includes diagrams representing the AI that was used).

Such AI capabilities challenge the current purposes of patent law, and for societal reasons they require assessing and answering the following questions: Should patent law embrace unreal, fictitious, and imaginary AI-generated output, and if so, how can patent examiners distinguish such disclosures from those describing inventions created by a human? Should inventors be required to disclose the use of AI in the inventive process, and should it matter for society?

Patents are conditioned on inventors describing their inventions, and patent law’s enablement doctrine focuses on the particular result of the inventive process. While patent doctrine thus focuses on the end state rather than the tool used in the process of inventing, I argue that the use of AI in inventing profoundly and fundamentally challenges disclosure theory in patent law.

AI transforms inventing for two reasons that track the aforementioned grounds for reevaluating patent law’s disclosure function: (1) the use of an AI-based tool in the invention process can make it appear as if the AI-generated output was produced by a human when in fact it was not; and (2) even if an inventor disclosed the use of an AI-based tool, others may be unable to make or use the invention, since the tool’s operation may not be transparent or replicable. These complexities require enhancing the disclosure requirement and, in so doing, present patent and trade secret considerations for society and for inventors.

The USPTO cannot reasonably expect patent examiners to confirm whether a patent application claims an invention that is fictitious or unexplainable in an era of increasing use of AI-based tools in the inventive process, and heightened disclosure provides society with a better verification mechanism. I argue that enhanced patent disclosure for AI has an important role to play in equilibrating an appropriate level of quid pro quo.

While there are trade-offs to explaining how the applied AI-based tools develop AI-generated output, I argue for: (1) a range of incentive options for enhanced AI patent disclosure, and (2) establishing a data deposit requirement as an alternative disclosure. My article’s theoretical contributions define a framework for subsequent empirical verification of whether an inventor will opt for trade secrecy or patent protection when there is use of AI-based tools in the inventive process, and if so, for which aspects of the invention.

There are a plethora of issues that the patent system and the USPTO should consider as inventors continue to use AI, and consideration should be given to disclosure as AI technology develops and is used even more in the inventive process.