
Privacy Law Considerations of AI and Big Data – In the U.S. & Abroad

By Kathleen Wills, Esq.*

Kathleen Wills is a graduate of Antonin Scalia Law School and former C-IP2 RA.

Artificial Intelligence and Big Data

While many of us have come to rely on biometric data when we open our phones with Apple’s “Face ID,” speak to Amazon’s Alexa, or scan our fingerprints to access something, it’s important to understand some of the legal implications of the big data feeding artificial intelligence (AI) algorithms. “Big Data” refers to the processing of large-scale and complex data,[1] while “biometric data” refers to the physical characteristics of humans that can be extracted for recognition.[2] AI and biometrics work together in scenarios like those above, since AI is a data-driven technology and personal data has become propertized.[3] The type and sensitivity of the personal data used by AI depend on the application, and not all applications trace details back to a specific person.[4] The already-active field of applying AI to Big Data analysis of biometrics continues to grow, promising to pose both challenges and opportunities for consumers, governments, and companies.

A. How AI Uses Big Data

AI works with Big Data to accomplish several different outcomes. For example, AI can use Big Data to recognize and categorize data and to find relationships within it.[5] AI can also adapt to patterns in Big Data and identify opportunities so that the data can be understood and put into context. For organizations looking to improve efficiency and effectiveness, AI can leverage Big Data to predict the impact of various decisions. AI-driven algorithms can also suggest actions before they are deployed, assess risk, and provide real-time feedback from Big Data pools. When AI works with Big Data and biometrics, it can perform various types of human recognition for applications in every industry.[6] In other words, the more data AI can process, the more it can learn. Thus, the two rely on each other to keep pushing the bounds of technological innovation and machine learning.
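To make the “recognize and categorize” step concrete, here is a minimal, hypothetical sketch (not drawn from any particular product) using the open-source scikit-learn library. The synthetic feature vectors merely stand in for the kind of numeric measurements a biometric system might extract; the variable names and data are invented for illustration.

```python
# A minimal, illustrative sketch only: a clustering model groups synthetic
# "biometric-style" feature vectors, showing how AI can categorize Big Data
# and then recognize which group a new sample belongs to.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)

# Synthetic stand-ins for extracted biometric features (e.g., face embeddings);
# real systems would use far larger and more sensitive data sets.
group_a = rng.normal(loc=0.0, scale=0.5, size=(100, 4))
group_b = rng.normal(loc=3.0, scale=0.5, size=(100, 4))
features = np.vstack([group_a, group_b])

# "Categorize": the model finds two clusters without being told which is which.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

# "Recognize": a new sample is assigned to the closest learned cluster.
new_sample = rng.normal(loc=3.0, scale=0.5, size=(1, 4))
print(model.predict(new_sample))  # e.g., [1] (cluster labels are arbitrary)
```

The point is not the particular algorithm but the dependence it illustrates: the model’s ability to recognize anything at all comes entirely from the data it is given.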

B. How AI Relates to Privacy Laws

Since AI involves analyzing and understanding Big Data, which often includes biometrics or other personal information, there are privacy considerations and interests to protect. Further, since businesses want access to consumer data in order to optimize the market, governments are placing limits on the use and retention of such data. For some sectors, the boundary between privacy and AI becomes an ethical one. One can immediately imagine the importance of keeping biometric health data private, calling to mind the purpose of HIPAA, the Health Insurance Portability and Accountability Act,[7] even though AI can help doctors better understand patterns in their patients’ health, diagnoses, and even surgeries.

I. United States Privacy Law

A. Federal Privacy Law

Even as concerns grow about the privacy and security of data used in AI, there is currently no comprehensive federal privacy law in the United States. Senators Jeff Merkley and Bernie Sanders proposed the National Biometric Information Privacy Act in 2020, which was not passed into law; it contained provisions such as requiring consent from individuals before collecting information, providing a private right of action for violations, and imposing an obligation to safeguard identifying information.[8] The bill also would have required private entities to draft public policies and implement mechanisms for destroying information, limit collection of information to valid business reasons, inform individuals that their information is stored, and obtain written releases before disclosure.

B. State Privacy Laws

A few states, such as Illinois, Texas, California, Washington, New York, Arkansas, Louisiana, Oregon, and Colorado, have passed their own privacy laws or amended existing laws to include protections for biometric data. Other states have pending bills or have tried, and so far failed, to pass biometric protection regulation.

The first, and most comprehensive, biometric regulation was enacted in 2008: the Illinois Biometric Information Privacy Act (BIPA), which governs the collection and storage of biometric information.[9] The law applies to all industries and private entities but exempts the State and any local government agency.[10] BIPA requires entities to inform individuals in writing that their information is being collected and stored and why, and it restricts selling, leasing, trading, or otherwise profiting from such information. There is a right of action for “any person aggrieved by a violation,” brought in state circuit court or as a supplemental claim in federal district court, that can yield $1,000 per negligent violation and $5,000 per intentional or reckless violation, as well as attorneys’ fees and equitable relief. In 2018-2019, over 200 lawsuits were reported under BIPA, usually class actions against employers.[11]
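Because BIPA’s statutory damages accrue per violation, class-wide exposure can grow quickly. The arithmetic below is a purely hypothetical back-of-the-envelope sketch, assuming one violation per class member and an invented class size; it does not describe any actual case.

```python
# Hypothetical BIPA exposure: statutory damages of $1,000 per negligent
# violation and $5,000 per intentional or reckless violation, assuming
# a class of 5,000 people and one violation each (figures are invented).
class_members = 5_000
negligent_damages = class_members * 1_000
reckless_damages = class_members * 5_000

print(f"Negligence theory: ${negligent_damages:,}")  # Negligence theory: $5,000,000
print(f"Reckless theory:   ${reckless_damages:,}")   # Reckless theory:   $25,000,000
```

Even these modest assumptions suggest why BIPA class actions have become attractive to plaintiffs and costly for employers.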

Texas’s regulation, Chapter 503: Biometric Identifiers, varies greatly from Illinois’s act.[12] Under this chapter, a person can’t capture another’s biometric identifier for a commercial purpose unless they inform the person and receive consent; once consent is obtained, one still can’t sell, lease, or disclose that identifier to anyone else unless the individual consents to that financial transaction or such disclosure is permitted by a federal or state statute. The chapter also sets a timeframe for destroying identifiers, provides a maximum civil penalty of $25,000 per violation, and is enforced by the state attorney general. Washington’s legislation, Chapter 19.375: Biometric Identifiers, is similar to Texas’s regulation in that the attorney general enforces it; however, Washington carved out a security-purposes exception to the notice and consent procedures usually required before collecting, capturing, or enrolling identifiers.[13]

California enacted the California Consumer Privacy Act of 2018 (CCPA), which provides a broader definition of “biometric data” and gives consumers the right to know which information is collected and how it’s used, to delete that information, and to opt out of the sale of that information.[14] The law applies even to entities without a physical presence in the state, so long as they (a) have a gross annual revenue of over $25 million, (b) buy, receive, or sell the personal information of 50,000 or more California residents, households, or devices, or (c) derive 50% or more of their annual revenue from selling California residents’ personal information.[15] The CCPA was amended by the California Privacy Rights Act (CPRA), which will become effective January 1, 2023, and expands the CCPA.[16] One expansion under the CPRA is a new category of “sensitive personal information,” which encompasses government identifiers; financial information; geolocation; race; ethnicity; religious or philosophical beliefs; genetic, biometric, and health information; sexual orientation; nonpublic communications such as emails and text messages; and union membership. The CPRA also adds new consumer privacy rights, including the right to restrict the use of sensitive information, and creates a new enforcement authority. Thus, the CPRA brings California’s privacy law closer to the European Union’s General Data Protection Regulation.[17]
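Read literally, the CCPA’s applicability thresholds are a disjunction: a business falls within the law if it meets any one of the three tests. The sketch below expresses that simplified reading as code; the function name and parameters are hypothetical, this is not legal advice, and a real applicability analysis turns on many more statutory details.

```python
# Illustrative only: a simplified reading of the CCPA's three applicability
# thresholds as boolean logic. The signature and names are hypothetical.
def ccpa_may_apply(gross_annual_revenue_usd: float,
                   ca_consumer_records: int,
                   revenue_share_from_selling_ca_data: float) -> bool:
    """Return True if any one of the three statutory thresholds is met."""
    return (
        gross_annual_revenue_usd > 25_000_000            # (a) revenue over $25M
        or ca_consumer_records >= 50_000                 # (b) 50,000+ residents, households, or devices
        or revenue_share_from_selling_ca_data >= 0.50    # (c) 50%+ of revenue from selling CA data
    )

# Example: $10M in revenue but 60,000 California records meets threshold (b).
print(ccpa_may_apply(10_000_000, 60_000, 0.10))  # True
```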

New York amended its existing data breach notification law to include biometric information in the definition of “private information.”[18] Like California’s law, the SHIELD Act applies to all companies holding residents’ data; it also outlines various administrative, technical, and physical safeguards that companies should implement. New York also passed limited biometric legislation covering employers, but it provides no private right of action.[19] Similar to New York, Arkansas amended its Personal Information Protection Act so that “personal information” now includes biometric data. Louisiana likewise amended its Data Breach Security Notification Law to do the same and added data security and destruction requirements for entities.[20] Finally, Oregon amended its Consumer Information Protection Act to extend its consumer privacy and data protections to biometric data.

Most recently, on July 8, 2021, Colorado enacted the Colorado Privacy Act (CPA) when the Governor signed the bill into law.[21] The state Attorney General explains that the law “creates personal data privacy rights” and applies to any person, commercial entity, or governmental entity that maintains personal identifying information. Like consumers in California, consumers in Colorado can opt out of certain data practices, but not all; residents cannot opt out of unnecessary and irrelevant collection of information, and controllers must receive a resident’s consent before processing sensitive personal information. As for remedies, the CPA provides a 60-day cure period to remedy noncompliance before controllers face civil penalties, but consumers do not have a private right of action under this law.

II. International Privacy Law 

Other countries have pioneered data privacy regulations, as exemplified by the European Union’s (EU’s) General Data Protection Regulation (GDPR).[22] Since 2018, this regulation has been enforced against companies that operate within any EU member state in order to protect “natural persons with regard to the processing of personal data” and to set “rules relating to the free movement of personal data.” The GDPR “protects fundamental rights and freedoms of natural persons,” particularly their personal data. The regulation is quite comprehensive, with chapters on the rights of data subjects, transfers, remedies, and even provisions for particular processing situations such as freedom of expression and information. There are several carve-outs or “exceptions” to the regulation, such as where a citizen gives consent for a specific purpose or the data are necessary for preventive or occupational medicine. Data subjects also have “the right to be forgotten,” may withdraw consent at any time, and can lodge a complaint for violations or seek a judicial remedy or compensation, while supervisory authorities may impose administrative fines.

Since the GDPR protects the data of EU citizens and residents, it has extraterritorial effect. In January 2021, the European Data Protection Board (EDPB), jointly with the European Data Protection Supervisor, adopted written opinions on new standard contractual clauses under the GDPR. One set of clauses governs the transfer of personal data by processors to third countries outside of the EU.[23] The transfer of personal data to a third country or international organization may take place only if certain conditions are met, namely that the data continue to receive safeguards consistent with European data protection law. However, enforcement of the GDPR takes time, and Ireland’s data protection commissioner, Helen Dixon, has explained that enforcement goes beyond issuing fines. Interestingly, because the European headquarters of Apple, Facebook, Google, LinkedIn, and Twitter are in Ireland, that country takes the lead in investigating those companies.[24]

The GDPR has influenced other countries’ privacy laws. For example, Canada has a federal privacy law, the Personal Information Protection and Electronic Documents Act, and provincial laws that protect personal information in the private sector, which were heavily influenced by the EU’s GDPR.[25] Argentina has begun the legislative process to update its national data protection regime, and a resolution to that effect was passed in January 2019.[26] Further, Brazil’s General Data Protection Law replicates portions of the GDPR and includes extraterritoriality provisions, but it also allows for additional flexibility. The GDPR has also affected regulatory enforcement in Israel, which the European Commission has recognized as an adequate jurisdiction for processing personal information. While the list of countries affected by, or taking notes from, the GDPR is quite extensive, it’s important to note that protecting consumer privacy when handling biometrics and Big Data, and when using them in AI, is a global challenge and opportunity.

III. Why the Legal Considerations for AI Matter

AI and the use of Big Data and biometric information in everyday life affect a multitude of individuals and entities. AI can use a consumer’s personal information and, often, highly sensitive information. Claims arising from the misappropriation or misuse of that information are enforced against business entities. Governments all over the globe are working to determine which regulations, if any, to pass concerning AI and what the scope of such rules should be. In the U.S., some states task the Attorney General with enforcing state privacy laws, while other state laws provide individuals with a private right of action. Interestingly, given the role AI plays in innovation and technology, venture capitalists (VCs) might also play a role as the law develops, since VC firms can work with policymakers and lobbyists to assess potential market failures, risks, and the benefits of protecting AI and data.[27]

Beyond the individuals, governments, entities, and industries affected by AI and Big Data biometric analysis, there are also broader legal implications. While this article discusses, at a high level, the international and national privacy law considerations raised by AI, other constitutional and consumer protection laws are implicated as well. AI and other uses of Big Data and biometric information have quickly become ingrained in our everyday lives since IBM created the first smartphone in 1992. As laws around the world continue to be discussed, drafted, killed, adopted, or amended, it’s important to understand AI and the data it uses.


* The information in this article does not, nor is it intended to, constitute legal advice, and has been made available for general information purposes only.

[1] Shafagat Mahmudova, Big Data Challenges in Biometric Technology, 5 J. Education and Management Engineering 15-23 (2016).

[2] Ryan N. Phelan, Data Privacy Law and Intellectual Property Considerations for Biometric-Based AI Innovations, Security Magazine (June 12, 2020).

[3] Gianclaudio Malgieri, Property and (Intellectual) Ownership of Consumers’ Information: A New Taxonomy for Personal Data, 4 Privacy in Germany 133 ff (April 20, 2016).

[4] Jan Grijpink, Privacy Law: Biometrics and privacy, 17 Computer Law & Security Review 154-160 (May 2001).

[5] Jim Sinur and Ed Peters, AI & Big Data; Better Together, Forbes, https://www.forbes.com/sites/cognitiveworld/2019/09/30/ai-big-data-better-together/?sh=5c8ed5f360b3 (Sept. 30, 2019).

[6] Joshua Yeung, What is Big Data and What Artificial Intelligence Can Do?, Towards Data Science, https://towardsdatascience.com/what-is-big-data-and-what-artificial-intelligence-can-do-d3f1d14b84ce (Jan. 29, 2020).

[7] David A. Teich, Artificial Intelligence and Data Privacy – Turning a Risk into a Benefit, Forbes, https://www.forbes.com/sites/davidteich/2020/08/10/artificial-intelligence-and-data-privacy–turning-a-risk-into-a-benefit/?sh=5c4959626a95 (Aug. 10, 2020).

[8] Joseph J. Lazzarotti, National Biometric Information Privacy Act, Proposed by Sens. Jeff Merkley and Bernie Sanders, National Law Review, https://www.natlawreview.com/article/national-biometric-information-privacy-act-proposed-sens-jeff-merkley-and-bernie (Aug. 5, 2020).

[9] Natalie A. Prescott, The Anatomy of Biometric Laws: What U.S. Companies Need to Know in 2020, National Law Review (Jan. 15, 2020).

[10] Biometric Information Privacy Act, 740 ILCS 14 (2008).

[11] Supra note 9.

[12] Tex. Bus. & Com. Code § 503.001 (2009).

[13] Wash. Rev. Code Ann. § 19.375.020 (2017).

[14] California Consumer Privacy Act (CCPA), State of California Department of Justice, https://oag.ca.gov/privacy/ccpa (last accessed May 22, 2021).

[15] Rosenthal et al., Analyzing the CCPA’s Impact on the Biometric Privacy Landscape, Legaltech News, https://www.law.com/legaltechnews/2020/10/14/analyzing-the-ccpas-impact-on-the-biometric-privacy-landscape/ (Oct. 14, 2020).

[16] Brandon P. Reilly and Scott T. Lashway, Client Alert: The California Privacy Rights Act has Passed, Manatt, https://www.manatt.com/insights/newsletters/client-alert/the-california-privacy-rights-act-has-passed (Nov. 11, 2020).

[17] Peter Banyai et al., California Consumer Privacy Act 2.0 – What You Need to Know, JDSupra, https://www.jdsupra.com/legalnews/california-consumer-privacy-act-2-0-93257/ (Nov. 27, 2020).

[18] Samantha Ettari, New York SHIELD Act: What New Data Security Requirements Mean for Your Business, JDSupra (June 1, 2020).

[19] Supra note 9, referring to N.Y. Lab. Law §201-a.

[20] Kristine Argentine & Paul Yovanic, The Growing Number of Biometric Privacy Laws and the Post-COVID Consumer Class Action Risks for Businesses, JDSupra,  https://www.jdsupra.com/legalnews/the-growing-number-of-biometric-privacy-2648/#:~:text=In%202019%2C%20Arkansas%20also%20jumped,of%20an%20individual’s%20biological%20characteristics.%E2%80%9D (June 9, 2020).

[21] The Colorado Privacy Act: Explained, Beckage, https://www.beckage.com/privacy-law/the-colorado-privacy-act-explained/ (last accessed July 13, 2021); see also Phil Weiser: Colorado Attorney General, Colorado’s Consumer Data Protection Laws: FAQ’s for Business and Government Agencies, https://coag.gov/resources/data-protection-laws/ (last accessed July 13, 2021).

[22] General Data Protection Regulation (GDPR), https://gdpr-info.eu/ (last accessed May 22, 2021).

[23] Update on European Data Protection Law, National Law Review, https://www.natlawreview.com/article/update-european-data-protection-law (Feb. 24, 2021).

[24] Adam Satariano, Europe’s Privacy Law Hasn’t Shown Its Teeth, Frustrating Advocates, New York Times, https://www.nytimes.com/2020/04/27/technology/GDPR-privacy-law-europe.html (April 28, 2020).

[25] Eduardo Soares et al., Regulation of Artificial Intelligence: The Americas and the Caribbean, Library of Congress Legal Reports, https://www.loc.gov/law/help/artificial-intelligence/americas.php (Jan. 2019).

[26] Ius Laboris, The Impact of the GDPR Outside the EU, Lexology, https://www.lexology.com/library/detail.aspx?g=872b3db5-45d3-4ba3-bda4-3166a075d02f (Sept. 17, 2019).

[27] Jacob Edler et al., The Intersection of Intellectual Property Rights and Innovation Policy Making – A Literature Review, WIPO (July 2015).


Proposed Misuse of Section 1498 Relies on the False Claim that Patents Are Not Property

By Kathleen Wills*

The question whether patents are property rights is a continuing and hotly debated topic in IP law. Despite an abundance of scholarship (see here, here, here, here, and here) detailing how intellectual property (“IP”) rights have long been equated with property rights in land and other tangible assets, critics often claim that this “propertarian” view of IP is a recent development. Misconceptions and false claims about patents as property rights have been perpetuated in an echo chamber of recent scholarship, despite a lack of evidentiary support.

Unfortunately, these misleading arguments are now influencing important pharmaceutical patent debates. Specifically, a push to devalue patent rights through the misapplication of an allegedly obscure and misunderstood statute, Section 1498 of Title 28 of the U.S. Code (“Section 1498”), is being used to promote price controls. Arguments for this push have gained traction through a recent article whose flawed analysis has subsequently been promoted by popular media outlets. A better understanding of the nature of patents as property reveals the problems in this argument.

The history of Section 1498 clearly contemplates that patents are property subject to the Takings Clause, which reflects a long-standing foundation of patent law as a whole: Patents are private property. In an influential paper, Professor Adam Mossoff established that from the founding of the United States, patents have been grounded in property law theories. While some scholars today argue that patents were originally perceived as monopoly privileges, this is only partially correct.

The arguments usually revolve around certain stated views of Thomas Jefferson, but they ignore that his position was actually a minority view at the time. Even when the term “privilege” was used, it reflected the natural rights theory of property that a person owns those things in which he invests labor to create, including labors of the mind. The term did not reflect a discretionary grant revocable at the will of the government. Thus, an issued patent was a person’s property, as good against the government as against anyone else.

To understand the majority perspective of courts in the nineteenth century, it is important to note that James Madison, the author of the Takings Clause, wrote that the “[g]overnment is instituted to protect property of every sort.” What types of property? Courts often used real property rhetoric in patent infringement cases, as seen in Gray v. James. By 1831, the Supreme Court treated patent rights as protected just as real property in land was protected. More recently, in Festo Corp. v. Shoketsu Kinzoku Kogyo Kabushiki Co., the Court recognized that patent rights represent legitimate expectations similar to property rights in land, which, in turn, are rights secured under the Takings Clause of the Constitution.

This understanding of patents reflected a stark break from the traditions in English law from which American law developed. In England, the “crown-right” granted the government the right to practice a patented invention wherever and however it pleased. In 1843, the Supreme Court in McClurg v. Kingsland explained that while England viewed a patent as “a grant” issued as a “royal favor,” which could not be excluded from the Crown’s use, the American system was intentionally different and patent rights were good against the government. This meant that Congress had to treat patents as vested property rights in the patent owner.

Justice Bradley articulated this difference between the United States and England in James v. Campbell:

The United States has no such prerogative as that which is claimed by the sovereigns of England, by which it can reserve to itself, either expressly or by implication, a superior dominion and use in that which it grants by letters-patent to those who entitle themselves to such grants. The government of the United States, as well as the citizen, is subject to the Constitution; and when it grants a patent the grantee is entitled to it as a matter of right, and does not receive it, as was originally supposed to be the case in England, as a matter of grace and favor.

As an article by Professor Sean O’Connor explains, this change occasionally caused confusion in American courts when it came to patent owners seeking redress against unauthorized government use. The problem was that there was no single clear mechanism for suing the federal government for injunctive or monetary relief—in fact, under sovereign immunity principles, in many cases the plaintiff could not sue the government at all. Various mechanisms such as implied or quasi-contracts were used, but the varying nature of patentees—had they received some government funding leading to their invention, or developed it purely outside of government support—complicated things further.

To provide a venue where citizens could sue the government for patent infringement and other claims, Congress created the Court of Claims in 1855. In 1878, the Court of Claims in McKeever v. United States explained that in the United States, patent rights secured the “mind-work which we term inventions,” authorized under the Copyright and Patent Clause in the Constitution. By explaining that patent rights derived from Article I in the Constitution, the Court of Claims suggested that patents were as important as other property rights and thus different from grants. Prof. O’Connor shows that the status of patents as property, and the recognition of this fact by the courts, solved much of the confusion over the history of American patent law.

The Supreme Court went on to affirm the Court of Claims’ decision to award damages to a patentee for an unauthorized governmental use of his patented invention. In United States v. Burns, the Court said that “[t]he government cannot, after the patent is issued, make use of the improvement any more than a private individual, without license of the inventor or making compensation to him.” In James v. Campbell, the Supreme Court echoed this idea when it held that patents confer on owners an exclusive property in their invention, and that the government cannot use such an invention without just compensation any more than it could appropriate land without compensation.

By 1881, it was clear that the courts recognized patents as property rights under constitutional protection from government takings, just like real property. With a strong historical record showing that the Supreme Court equated patents with protected property rights, a question remains: Where does the confusion today stem from?

As Prof. Mossoff explains, the confusion could come from misconstrued inferences of legislative intent regarding the Tucker Act (“Act”). The 1887 version of the Act did not address patents when giving the Court of Claims jurisdiction to hear claims arising under the Constitution. This was used by the Federal Circuit in Zoltek Corp. v. United States to deny patents security under the Takings Clause. The Federal Circuit reasoned that patents weren’t constitutional private property. Judge Newman, however, dissented from the denial of the petition for rehearing en banc. She highlighted that “[a]lmost a century of precedent has implemented the right of patentees to the remedies afforded to private property taken for public use. There is no basis today to reject this principle.” (The Takings Clause analysis was subsequently vacated when the Federal Circuit eventually took the case en banc.)

An investigation of the Act’s legislative history also leads to a 1910 committee report (H.R. Rep. No. 61-1288) stating that the government’s unauthorized use of patents qualified as a taking. A few years later, a 1918 amendment adjusted the Act’s language to specifically allow patentees to sue the government for unauthorized uses of their property. Thus, the Tucker Act included patent claims among the kinds of suits in which the government’s unauthorized use was a constitutional issue, appropriately within the Court of Claims’ jurisdiction. Toward the end of the twentieth century, courts continued to hold that patents were constitutionally protected private property.

Modern cases have also confirmed that patents are property protected by the Takings Clause. Chief Justice Roberts, in Horne v. Department of Agriculture, used a patent case for the proposition that the Takings Clause extends to all forms of property, not just real property. Even in Oil States v. Greene’s Energy, Justice Thomas went out of his way to assert that the Takings Clause still applies to patents, citing the same case cited by the Chief Justice in Horne.

There has been a continuous understanding that patents are property and, thus, that Section 1498 is the eminent domain mechanism for the government’s use of patents for its own purposes. Popular media has recently misunderstood Section 1498, but, as detailed in a previous post in this series, the statute is not a price control statute. Forthcoming posts in this series will address other such misconceptions surrounding Section 1498.

*Kathleen Wills is a 2L at Antonin Scalia Law School, and she works as a Research Assistant at CPIP.