Categories
Patent Law

Professor Tabrez Ebrahim on Artificial Intelligence Inventions

The following post comes from Associate Professor of Law Tabrez Ebrahim of California Western School of Law in San Diego, California.

By Tabrez Ebrahim

Artificial intelligence (AI) is a major concern for the United States Patent and Trademark Office (USPTO), for patent theory and policy, and for society. The USPTO requested comments from stakeholders about AI and released a report titled “Public Views on Artificial Intelligence and Intellectual Property Policy.” Patent law scholars have written about AI’s impact on inventorship and non-obviousness, and they have acknowledged that the patent system is vital for the development and use of AI. However, there is a prevailing gap in this literature: AI’s effect on patent disclosure remains understudied. The Center for the Protection of Intellectual Property (CPIP) supported my research in this vein through the Thomas Edison Innovation Fellowship.

In my new paper, Artificial Intelligence Inventions & Patent Disclosure, I claim that AI fundamentally challenges disclosure in patent law, which has not kept up with rapid advancements in AI, and I seek to invigorate the goals that patent law’s disclosure function is thought to serve for society. In so doing, I assess the role that AI plays in the inventive process, how AI can produce AI-generated output (that can be claimed in a patent application), and why it should matter for patent policy and for society. I introduce a taxonomy comprising AI-based tools and AI-generated output, which I map to social-policy considerations, theoretical justifications and normative reasoning concerning disclosure for the use of AI in the inventive process, and proposals for enhancing disclosure and their impact on patent protection and trade secrecy.

AI refers to mathematical and statistical inference techniques that identify correlations within datasets to imitate decision making. An AI-based invention can be either: (1) an invention that is produced by AI; (2) an invention that applies AI to other fields; (3) an invention that embodies an advancement in the field of AI; or (4) some combination of the aforementioned. I focus on the first of these concerning the use of AI (what I term an “AI-based tool”) to produce output to be claimed as an invention in a patent application (what I term “AI-generated output”).

The use of AI in patent applications presents capabilities that were not envisioned for the U.S. patent system and allows for inventions based on AI-generated output that appear as if they were invented by a human. Inventors may not disclose the use of AI to the USPTO, but even if they were to do so, the lack of transparency and the difficulty of replication that accompany the use of AI present challenges to the U.S. patent system and to the USPTO.

As a result of the use of AI-based tools in the inventive process, inventions may be fictitious or imaginary yet appear as if they had been created by humans (such as in the physical world) and still meet the enablement and written description requirements. These inventions may be imaginary, never achieved, or unworkable from the inventor’s perspective, yet appear created, tested, or workable to reasonable onlookers or to patent examiners.

The current standard for disclosure in patent law is the same for an invention produced by the use of AI as for any invention generated by a human being without the use of AI. However, the use of AI in the inventive process should necessitate a reevaluation of patent law’s disclosure function because: (1) AI can produce a volume of such fictitious or imaginary patent applications (that meet the enablement and written description requirements) that would stress the USPTO and the patent system; and (2) advanced AI in the form of deep learning, which is not well understood (due to hidden layers with weights that evolve), may make it impossible to sufficiently describe the making and using of the invention (even with disclosure of diagrams showing a representation of the utilized AI).

Such AI capabilities challenge the current purposes of patent law and require assessing and answering the following questions for societal reasons: Should patent law embrace unreal (fictitious and imaginary) AI-generated output, and if so, how can such unreal output be detected in patent examination and distinguished from disclosure of inventions created by a human? Should inventors be required to disclose the use of AI in the inventive process, and should it matter for society?

Patents are conditioned on inventors describing their inventions, and patent law’s enablement doctrine focuses on the particular result of the inventive process. Patent doctrine thus focuses on the end state, not the tool used in the process of inventing; in contrast, I argue that the use of AI in inventing profoundly and fundamentally challenges disclosure theory in patent law.

AI transforms inventing for two reasons that address the aforementioned grounds for reevaluating patent law’s disclosure function: (1) the use of an AI-based tool in the invention process can make it appear as if the AI-generated output was produced by a human, when in fact it was not; and (2) even if an inventor disclosed the use of an AI-based tool, others may not be able to make or use the invention, since the AI-based tool’s operation may not be transparent or replicable. These complexities require enhancing the disclosure requirement and, in so doing, present patent and trade secret considerations for society and for inventors.

The USPTO cannot reasonably expect patent examiners to confirm whether the patent application is for an invention that is fictitious or unexplainable in an era of increasing use of AI-based tools in the inventive process, and heightened disclosure provides a better verification mechanism for society. I argue that enhanced patent disclosure for AI has an important role to play in equilibrating an appropriate level of quid pro quo.

While there are trade-offs to explaining how the applied AI-based tools develop AI-generated output, I argue for: (1) a range of incentive options for enhanced AI patent disclosure, and (2) establishing a data deposit requirement as an alternative disclosure. My article’s theoretical contributions define a framework for subsequent empirical verification of whether an inventor will opt for trade secrecy or patent protection when there is use of AI-based tools in the inventive process, and if so, for which aspects of the invention.

There is a plethora of issues that the patent system and the USPTO should consider as inventors continue to use AI, and disclosure deserves particular attention as AI technology develops and is used even more in the inventive process.

Categories
Copyright

CPIP’s Sandra Aistars and Scalia Law Arts & Entertainment Advocacy Clinic Co-Host Virtual Copyright Event on Arts and the Pandemic

The following post comes from Chris Wolfsen, a recent graduate of Scalia Law and a Research Assistant at CPIP.

By Chris Wolfsen

On October 27, 2020, CPIP Director of Copyright Research and Policy Sandra Aistars and students from her Arts & Entertainment Advocacy Clinic at Scalia Law School co-hosted a virtual event with Washington Area Lawyers for the Arts (WALA) and the Copyright Alliance. This virtual copyright clinic, focused on Arts and the Pandemic, provided information to artists wishing to protect their works as well as educators incorporating creative works into distance learning and museums wishing to make work available during the pandemic.

The event kicked off with a testimonial from the Rock Creek Kings’ Evan Moses, who explained how WALA paired the band with an attorney who advised them in securing the proper permits and asserting their First Amendment rights to perform outdoors during the COVID-19 quarantine for the Save Our Stages Movement. The band continues to raise awareness of the need for legislation to support local music venues during COVID-19 shutdowns. The Rock Creek Kings are calling on both the D.C. City Council and the U.S. Congress to assist these small businesses until they can reopen safely again.

Professor Aistars then turned the event over so her clinic students could answer questions submitted by attendees, with former clinic student, now adjunct professor, Dr. Stephanie Semler moderating. First came an explanation from David Ward on how artists can copyright their artwork. He explained that artists have a copyright as soon as their work is fixed in a tangible medium, but that registering that copyright with the United States Copyright Office will benefit artists. Registration allows people seeking to license works to find artists and allows artists to recover statutory damages on a successful infringement claim. David explained that there are six categories of protectable works and shared a quick walkthrough of the process on https://copyright.gov/.

Bernard Horowitz then fielded a question on the minds of many artists: should I sign a COVID waiver in the studio or for the live event I participate in? Bernard explained that this question is a moving target—COVID waivers have not yet been litigated, and Congress has not acted on the matter. He noted that we can look to current personal injury laws to draw conclusions about what may happen with this type of waiver and went on to explain the sharp distinction between the laws in Maryland and D.C. and those in Virginia.

Another topic that has been brought to the forefront during the pandemic is collaboration between artists and teachers. Yumi Oda walked attendees through the concept of fair use—a defense to copyright infringement—explaining that teachers using creative works during a live stream of their lesson can take certain measures to improve the likelihood that they will have a good argument that their use of a work in a classroom setting is a fair use. She explained that fair use is a fact-specific inquiry, but that educational use in a classroom setting is a classic example. Yumi also recommended teachers take additional steps such as protecting the lectures, and the creative works within them, with a password and removing student access at the end of the semester.

In a similar vein, Emily Gunberg discussed how museums are adapting to life in a pandemic. Curators have to consider whether they have the right to display a visual work online, which requires making a reproduction. This is a right that does not automatically attach when a museum acquires the physical work for display within the museum building. Emily noted that some types of online postings are usually acceptable, such as thumbnail images due to their lower resolution and limited use. For general availability to the public, such as a feature in a museum’s virtual tour, curators should revisit licensing agreements to ensure they have permission from rights owners.

Heather Uzer was given a more holistic and open-ended question: is the pandemic a catalyst for reform, or a death knell for independent creators? She emphasized that it is within our capacity to use this moment as an opportunity and as a catalyst for change. Artists’ right to protect their works still remains, and there are many resources available to help with that protection. WALA, the Copyright Alliance, and many others are committed to protecting independent creators. David Ward also noted that several legislative efforts have gained attention, and that some record labels are holding special days to acknowledge creator efforts and supplement revenue for broadcasting artists by returning all revenues to performing artists on those days.

The event concluded with a performance by the Rock Creek Kings, live but socially distanced, from Crescendo Studios in Falls Church, Virginia. You can watch the video of their performance here on YouTube. One of the owners of Crescendo Studios, Eddie Fuentes, spoke about the importance of art and music in a time when we cannot be physically connected, and Evan, the lead singer of the band, reinforced the message during the performance: this has been a year in which people have missed weddings, graduations, funerals, and birthdays. Our communities rely on creators now more than ever to bring us together through shared experiences when we are six feet, or even oceans, apart.

Categories
Patent Law, Patent Theory

Rethinking § 101: Professor Talha Syed Takes a Different Look at Subject Matter Eligibility

The following post comes from Colin Kreutzer, a 2E at Scalia Law and a Research Assistant at CPIP.

By Colin Kreutzer

When most people think of patentability requirements, they think of whether an invention has been “done before.” Novelty and non-obviousness under 35 U.S.C. §§ 102 and 103 are certainly key hurdles to obtaining a patent. But courts are often tied up over the more fundamental issues presented in § 101. That provision addresses patent eligibility itself—whether an idea, new or not, falls within some class of inventions that qualify for patent protection.

Section 101 of the Patent Act of 1952 says that a patent may be issued to “[w]hoever invents or discovers any new and useful process, machine, manufacture, or composition of matter.” It seems fairly straightforward. But the common law decisions, both pre- and post-1952, are riddled with exceptions and questions that the statute hasn’t resolved on its own. How do we treat intangible notions, such as scientific principles or other abstract ideas? Is a business method a patentable process? What property rights can be granted to an inventor who discovers a fundamental law of nature? And should we treat the realm of pure mathematics as being fundamental to nature itself? As we move deeper into the information age, these questions aren’t getting any simpler.

In a new paper from CPIP’s Thomas Edison Innovation Fellowship program entitled Owning Knowledge: A Unified Theory of Patent Eligibility, Professor Talha Syed of Berkeley Law argues that the confusion surrounding this issue is two-fold. First, it results from our failure to develop a functionality doctrine that can clearly distinguish technological applications of knowledge from other forms of knowledge. Second, he offers a root cause of this failure. There is a distracting preoccupation in patent law with “physicalism,” that is, the notion that a patent is awarded for a thing (tangible or not) rather than for knowledge of that thing.

In order to move forward, Prof. Syed states that we must first unwind the physicalist assumptions that are tangled up in our § 101 analyses. Only then can we develop a functionality doctrine free of those encumbrances. As he puts it, “[r]econstructing eligibility doctrine requires, then, extricating incipient functionality concerns from the physicalist thicket, developing their independent basis, and properly following through on their doctrinal implications.”

On its face, when someone proposes that we fundamentally rethink a subject, it sounds as if they are calling for some kind of seismic shift. But Prof. Syed argues that his proposal merely provides a clearer perspective on a subject that has long needed clarity. Rather than requiring any major statutory reforms or total abandonment of our jurisprudence, he says that the “dephysicalization” of patent eligibility serves to better integrate our existing statutory and common law frameworks.

How We Got Here

The questions mentioned above are not new. To understand the current state of things, Prof. Syed first details some of the cases that led us here, beginning with Neilson v. Harford, Le Roy v. Tatham, and Boulton & Watt v. Bull, among others.

Neilson is instructive. It was an 1841 English case that dealt with an improvement to an industrial process. The applicant’s invention involved pre-heating air before it was injected into an iron furnace. The Court of Exchequer expressed some doubts as to whether this intangible aspect of an industrial process was truly patentable. In the end, it sustained the patent only because the inventive principle—that the pre-heated air provides better ignition efficiency than cold air—was embodied in a physical machine designed for that purpose. The patentee “does not merely claim a principle, but a machine embodying a principle.”

Neilson and the others influenced a number of cases in the 20th century American courts. Prof. Syed points to three variants of the general rationale. While a principle on its own is not patent eligible, it becomes so: (1) in an embodied application of the principle, (2) if it is delimited to a specific zone of industry, or (3) in an “inventive” mode of application.

These variants illustrate a schism already appearing by the end of the 19th century as to why an “abstract” idea may be unpatentable. If “abstract” is taken in the dephysicalized sense, then the problem may only be cured by applying the idea in some physical form. Alternatively, it could be abstract in the sense that it is a pure unapplied principle. In this case, patent eligibility may not require a tangible form, but instead some delimitation to a specific industrial zone of applicability. Prof. Syed attributes what came next at least partly to a failure to recognize and fully develop these distinctions.

Principles, in the abstract, were but one concern. The eligibility problem was compounded by similar questions about patent rights over natural phenomena. In the 1948 case of Funk Brothers Seed Co. v. Kalo Inoculant Co., the Supreme Court considered whether a set of naturally occurring bacterial strains—unmodified except by their selective combination—was a patentable invention. It answered “no.” Citing Le Roy, the Funk Bros. Court ruled that “patents cannot issue for the discovery of the phenomena of nature. . . . The qualities of these bacteria, like the heat of the sun, . . . are part of the storehouse of knowledge of all men.” Therefore, an invention based on a natural phenomenon must do more than rely upon “the handiwork of nature” in order to become patentable.

Then the waters got even muddier. At the crossroads of abstract principles and natural phenomena lies the fundamental concept of mathematics itself. In Gottschalk v. Benson, the Court addressed an algorithm for converting signals from one numerical form into a computer-friendly binary form. In Parker v. Flook, it was an algorithm for updating threshold alarm limits while monitoring certain chemical processes. In both cases, the Court held that the claimed invention was not a patentable process, but merely a mathematical formula which belonged to the “basic tools of scientific and technological work.”

However, each case offered a different way of distinguishing these abstract concepts from patentable inventions. Benson mentioned, but did not require, that an eligible process should transform a particular article or require a particular machine (much later, the Federal Circuit imposed the “machine-or-transformation test” as the sole test for eligibility, only to have it demoted to a “useful tool” by the Supreme Court). Flook, on the other hand, looked for some “inventive” aspect that went beyond the algorithm itself.

The whole story is too convoluted to do it justice here. The strength of these various eligibility bars has fluctuated over the years. But through these cases and more, we arrived at the modern two-part test under Mayo Collaborative Services v. Prometheus Laboratories, Association for Molecular Pathology v. Myriad Genetics, and Alice Corp. v. CLS Bank.

Quoting Alice, Prof. Syed lays out the test as:

(1) Step 1: First, we ask “whether the claims at issue are directed to” a law of nature, natural phenomenon, or abstract idea.

(2) Step 2: If so, “we then ask, ‘what else is there in the claims before us,’” considering “the elements of each claim both individually and ‘as an ordered combination’ to determine whether the additional elements ‘transform the nature of the claim’ into a patent-eligible application.” Step two of this analysis is “a search for an ‘inventive concept.’”

Critique of the Modern Test, and a New Proposal

Prof. Syed cites a fundamental disconnect in this test between its “why” and “how,” that is, why an invention is ineligible at all (claiming “basic building blocks” of scientific and technological work) and how it may become eligible (by containing an “inventive concept in application”). He asks, “[i]f the reason certain spaces of knowledge are ineligible is because they are ‘basic,’ then why doesn’t delimiting the claims to a zone of ‘application’ suffice for eligibility?”

Prof. Syed says that at each stage of our § 101 development, courts have touched on proper functionality issues only to leave them obscured by physicalist notions. He moves to recast the three exceptions as “laws of nature, products of nature, and abstract formulas.” More importantly, he insists that when discussing eligible categories or their exceptions, we preface them with the phrase “knowledge of.” So, knowledge of a law of nature can include discovered knowledge of some natural process. To analyze whether that knowledge should be patent-eligible, we should focus on the functional application of that knowledge. The inventor must go from “knowing that” to “knowing how.”

Prof. Syed uses Neilson as an example to distinguish between understanding (having basic knowledge) and intervening (having applied knowledge). A claim to “[k]nowledge of the principle that heated air promotes ignition better than cold” is not patent eligible. But why? Not because it is an abstract idea, but because it claims the sheer knowledge that some process exists rather than knowledge of how to do something with it. This may not seem like a huge difference from the analysis we have now in the Alice/Mayo test. It isn’t. But it might arguably be a clearer way of looking at things.

Prof. Syed’s version of the Alice framework would ask, at step one, whether the claim seeks to cover knowledge of a law of nature, product of nature, or abstract formula. If so, then step two asks whether the claim delimits the ambit of its coverage to a zone of practical application. Finally, he urges (as others have) that the § 101 analysis be kept separate from analyses of substantive issues under §§ 102 and 103: “claims should be truly evaluated ‘as a whole’ so that, for instance, a claim whose delimited application is obvious, but which also involves a nonobvious basic contribution, should pass muster under § 103 just as easily as one whose delimited application is itself nonobvious.”

Conclusion: Reconstructing § 101

Taking these considerations, Prof. Syed reconstructs § 101 into three patentability requirements for “inventions” or “discoveries” in the “useful arts”:

(1) Such “inventions” or “discoveries” must be conceived in a thoroughly dephysicalized way, as spaces of “knowledge of” a “process, machine, manufacture, or composition of matter.”

(2) Such spaces of knowledge must be spaces of “useful”—i.e., applied or functional—knowledge.

(3) Finally, such candidate functional spaces of knowledge still need to satisfy the substantive requirements laid out in the rest of the statute to qualify for protection.

Prof. Syed further recasts the statutory categories of subject matter eligibility. Where the “process, machine, manufacture, or composition of matter” comprises one intangible process category and three tangible product categories, his version is the opposite. There are three distinct process categories and one category for knowledge of physical things:

(1) “Knowledge of” a way of doing something (so-called “functional” claims)

(2) “Knowledge of” some thing, its structure, and at least one property (“product” claims)

(3) “Knowledge of” a way of making something (method-of-making process patents)

(4) “Knowledge of” a way of using something (method-of-use process patents)

This reconstruction reflects the two fundamental concepts of Prof. Syed’s paper—physicalism and functionality. In order to see the latter clearly, one must abandon the former. But Prof. Syed says that’s easier said than done: “What gives physicalism its lasting power in patents—what makes the spell linger—is, in fact, precisely functionality: it is precisely because patents properly obtain only in spaces of functional knowledge, or knowing how to do something, that it becomes easy to forget that they nevertheless still only obtain in spaces of functional knowledge, or knowing how to do something.” That could help explain why courts have long struggled with this deceptively complicated subject.

To read the paper, please click here.

Categories
Patent Law

Christa Laser on Patent Law’s Equitable Defenses

The following post comes from Wade Cribbs, a 2L at Scalia Law and a Research Assistant at CPIP.

By Wade Cribbs

In patent law, equitable defenses can play an essential role in multi-million-dollar patent infringement cases. Unclean hands, misuse, or estoppel can render a potential verdict unenforceable. CPIP Edison Fellow and Assistant Professor of Law at Cleveland-Marshall College of Law Christa J. Laser dives into the unique and unsettled role of equity in her Edison Fellowship paper, Equitable Defenses in Patent Law, which is forthcoming in the University of Miami Law Review.

Professor Laser compares two theories to determine how courts might interpret undefined language governing equitable defenses in patent statutes, given that the Supreme Court has repeatedly dodged the issue. One interpretive method, the traditional faithful-agent approach, holds that courts should interpret the statutes’ scope only from the legislature’s intent when crafting them. The alternative approach, referred to as dynamic statutory interpretation, suggests that courts should determine what the law ought to be given the vague nature of patent statutes. The unique relationship between patent law and equity and the historically broad language of patent statutes frame the discussion about how courts should interpret equitable patent defenses.

Professor Laser sets the stage by discussing the historical distinction between law and equity before their merger in 1938. Patent law is distinctive in that most patent claims had been brought in equity since 1870. In the Patent Act of 1870, Congress granted equity courts the ability to award actual damages in addition to equitable remedies, effectively giving equity courts the powers of both law and equity regarding patents.

This early delegation of patent law to equity courts fits with Congress’s lack of specific rules for patents. Equity has historically been more flexible than law. A common argument is that Congress’s lack of specific rules for patents is to enable a common law approach to patents. Congress establishes the rough outlines of the law while leaving the finer contours for the courts, and flexibility is a necessary trait in establishing these contours. However, even if that was once the case, equity has been constrained to standards that resemble the law. In 1952, Congress amended the Patent Act in an attempt to stabilize the common law approach by codifying it.

The question now is: did Congress codify preexisting decisional law or expand it? And what methods should courts use to determine the answer? The confusion, and its specific relation to equity, arose out of the 1952 amendment removing the specifically delineated equitable defenses established under the 1870 Act and replacing them with a defense of unenforceability. Further confusion arose because Congress eliminated the statement that the listed “defenses may be pleaded in any suit in equity for relief against alleged infringement,” stating only that “the following shall be a defense in any action.” While committee notes clarified that unenforceability covered the previously recognized equitable defenses, there was no comment on what “defenses in any action” meant. Previously, not all equitable defenses were understood to bar claims under both law and equity. For example, estoppel barred claims in both law and equity, but laches was limited to suits in equity.

Professor Laser proposes three possible interpretations of the “defenses in any action” language. The phrase could mean that the defenses would apply equally in law and equity, that the equitable defenses would only bar claims in equity, or that courts should adopt a case-by-case approach, drawing guidance from prior case law. Professor Laser tosses out the first two alternatives as too stark a change from established law, given the lack of legislative comment. The case-by-case approach is unproblematic when it comes to laches and estoppel because their applications to law and equity claims are well delineated. However, for unclean hands, inequitable conduct, and patent misuse, the case law is much less clear, and resolving the disagreement over statutory interpretation becomes necessary.

As a case-by-case approach is necessary, Professor Laser outlines the case law before 1952 and the impact the amendment had on unclean hands, inequitable conduct, and patent misuse defenses.

For prior case law on unclean hands, Professor Laser looks at Keystone, Hazel-Atlas, and Precision Instruments. Keystone highlights that unclean hands can serve as a bar to equitable relief when plaintiffs commit acts such as bribery or suppression of evidence. Hazel-Atlas allowed unclean hands to bar a judgment at law in an infringement suit because the plaintiff based both the patent itself and the infringement case upon bribery and fraud. In Precision Instruments, the court dismissed the case because blackmail by the plaintiff was an act related to the cause of action that transgressed equitable principles. The case law before 1952 illustrates that equitable claims are barred when a party commits an unconscionable act related to the cause of action, and legal claims are barred if fraud leads to a legal judgment.

After the 1952 amendment, the Federal Circuit has expanded unclean hands to bar both legal and equitable relief without the related-cause-of-action requirement. In Gilead v. Merck, the Federal Circuit held that unclean hands constituted grounds for summary judgment on both legal and equitable claims where an attorney had given false testimony in support of unethically obtained patent strategy information. Pre-1952 case law did not bar legal relief when the conduct was not inequitable, but in Gilead, the court held that conduct that amounted to unclean hands, though not inequitable, still barred legal relief.

Some courts believe that inequitable conduct arose out of the unclean hands doctrine. However, inequitable conduct was a defense against patent claims in the first Patent Act, passed in 1790. Inequitable conduct and invalidity overlap heavily. Inequitable conduct requires deceptive intent paired with misleading information to render a patent invalid; invalidity requires no such intent. Previously, the reasoning for pleading inequitable conduct instead of invalidity was that inequitable conduct carried infectious invalidity. If a claim was invalid, only that claim was invalid, and the finding did not affect any other claim in the patent. However, inequitable conduct in a claim would infect the entire patent, making it invalid. Congress removed this distinction in 2011 when it passed the America Invents Act and eliminated the infectious invalidity result from inequitable conduct. The doctrine is now no longer distinguishable from invalidity.

Patent misuse is improperly using a patent outside of the scope of the patent grant. The Supreme Court, in Continental Paper Bag, held that an unused patent was not a misused patent. On the other hand, in Morton Salt and B.B. Chemical, the Supreme Court found that product tying did qualify as patent misuse, and selling a patented machine cannot be tied to the sale of an unpatented dependent product. Before 1952, patent misuse was only an equitable defense to prevent infringing injunctions. There has not been a patent misuse Supreme Court case since 1952, so how the amendment affected the doctrine has yet to be seen.

The policy arguments supporting a dynamic statutory interpretation are that courts more readily see the impact of policy decisions and therefore are better suited to craft policy. Also, Congress acts too slowly to handle the rapid advancement of the latest technology.

The counterpoint, Professor Laser explains, is that slow and deliberate policymaking is advantageous in patent law. Much of a patent’s value comes from predictability. A single case changing the landscape of an industry or invalidating hundreds or thousands of patents would undermine faith in the patent system. Congress is slow because it takes time to gather information in ways that courts cannot. This ability to obtain the necessary knowledge to craft patent legislation is essential in the highly technical application of broad policy. Finally, courts are not policymakers, and judges who are not practiced in crafting policy may prove hesitant or rash in their decisions. While rash decisions have apparent consequences, even inaction will lead to unintentional policy. Judges are accustomed to making decisions based on precedent, not broad policy implications.

Professor Laser closes by suggesting a third strategy. Congress could delegate its authority to an agency to handle the ever-changing patent landscape. This would have the positives of superior access to knowledge and practiced policymakers without Congress’s gridlock and combative lobbying forces. The fact is, until the Supreme Court or Congress provides guidance on how to handle the vague nature of patent statutes, lower courts are left on their own to consider an ever-growing list of factors when determining equity’s impact on patent litigation.

To read Professor Laser’s paper, please click here.