
Sean O’Connor’s Historical Take on Different Types of Intellectual Property

The following post comes from Professor Camilla Hrdy of Akron Law. It originally appeared on Written Description, and it is reposted here with permission.

By Camilla Hrdy

I truly enjoyed Sean O’Connor’s new paper, forthcoming in the George Mason Law Review, called “Distinguishing Different Kinds of Property in Patents and Copyrights.” It is somewhat sui generis. But I guess I would describe it as a ‘legal-historical take on how people perceived intellectual property in the past, with theoretical implications for IP today.’

In O’Connor’s overarching framework, there are two kinds of IP. On one side are preexisting state-of-nature rights in secret technical know-how and unpublished expression, maintained through labor and attempts at secrecy. (I’ll call this type 1 IP.) On the other side are state-sanctioned patents and copyrights, obtained in exchange for revealing those first rights to the public. (I’ll call this type 2 IP.)

These two forms of IP are separated from one another by the act of “publication” on the part of the author or creator, which transfers the rights from the private to the public sphere; and by the involvement of government, which protects exclusivity in order to encourage the act of publication and transfer to the public sphere.

O’Connor paints this distinction between type 1 and type 2 IP in a historical light, suggesting that the first came first, and the second came second. People have always had “de jure or de facto rights to maintain secrecy and exclusivity of private knowledge and skills[,]” he writes. The second type of property came only later, when “states formalized ad hoc exclusive patent and copyright grants into roughly standardized, deeded, and assignable property.” (3-4)

There is way too much good in this article to do it justice in a summary. It is full of insights on medieval publishing practices and Venetian patents, and is beautifully written. I highly recommend the full article for anyone looking for “something completely different.”

On my reading, the big upshot for current IP theory is O’Connor’s view that the historic purpose of patents and copyright was “not to incentivize the authorship or invention of new things. Such creation had been taking place, often quite prodigiously, throughout human history.” (2). Instead, it was to encourage sharing those things by transferring them from the private to the public sphere. “From at least Greco-Roman antiquity,” O’Connor writes, “an important divide was acknowledged between the private and public spheres. An intentional act of publicare was required to transfer something from the private to the public.” (2). He argues that this transfer would not occur as frequently as desirable without patents and copyrights.

The lesson from this historical account, suggests O’Connor, is that if patents and copyrights were abolished or weakened, we might get more secrecy and less publicness.

“[W]e could inadvertently recreate the excessive use of secrecy that arguably hindered progress—in the sense of building off of existing knowledge available in the public sphere—in the time before proto-patents and proto-copyrights emerged in the Renaissance. Robust IP protections, together with appropriate limits on abuses of state-granted exclusive rights, will encourage more creators and innovators to choose the public disclosure and commercialization route.” (54).

 

This normative account seems similar to the oft-stated “exchange for secrets”/disclosure function of patents, as well as the Kitchian commercialization/coordination function. But O’Connor suggests that the “exchange for secrets” premise and its brethren are more central than is appreciated. The message I get is: lest we return to the Dark Ages, we must retain incentives to make secret knowledge and expression public.

Another upshot, albeit not much emphasized in the paper, seems to be that the “monopoly” concern many have with intellectual property is generated primarily when the state takes back what people know or already have access to — in other words, where one person’s patent or copyright impinges on other people’s preexisting rights to what they previously knew or used commercially. O’Connor refers to this briefly, writing (in his discussion of post-sale restrictions on chattels and the like):

[t]his view adopted Lord Coke’s sense of “monopoly”—as a legal term of art—as meaning only instances where the state took something back from the public that it previously had (most relevantly, when exclusive rights were given to a few individuals for a commodity or commercial trade that the public freely used or practiced before). (47).

 

I thought this interesting tension between type 1 IP and type 2 IP could have been drawn out more.

The main quibble some people might have with the article is at odds with why I personally like it. I like it because it’s original and out-of-the-box, and isn’t shy about reaching bold normative conclusions and engaging in some speculation. O’Connor sometimes provides extensive insights into what people thought about IP, going back as far as ancient and even hunter-gatherer times. (See, e.g., p. 22.) But of course we can’t know that much about what people thought about IP, especially not at the theoretical depth O’Connor engages in, except what we can glean from the sources where certain individuals discussed it. I am often amazed at how many relatively rich accounts we have from Jefferson, Washington, and others during the creation of the American patent and copyright regimes in the late eighteenth century, and it still seems like it’s not enough, with conflicting versions coming out all the time. On the other hand, maybe O’Connor’s willingness to take creative leaps is better than saying “well, I can’t tell you what I think they thought because I don’t have the data.”


In Defense of an Inclusive IP Conversation

In a recent essay responding to a divisive critique of his book, Justifying Intellectual Property, Robert Merges makes clear from the start that he won’t be pulling any punches. He explains that the purpose of his essay, Against Utilitarian Fundamentalism, is to address the misleading and polarizing conclusions of Mark Lemley’s 2015 article, Faith-Based Intellectual Property, recapitulate the arguments he makes in Justifying IP, and show that those who approach intellectual property theory through a nonstrict empirical lens can still make meaningful contributions to the debate.

Merges exposes the key hypocrisy of Lemley’s article: By flippantly dismissing theories that deviate from his own, Lemley ultimately champions the same inflexible exclusivity he purports to condemn.

The underlying dispute between Lemley and Merges turns on what place nonstrict empirical research has in theoretical IP debates. Philosophical empiricism is the idea that experiment-based evidence is the best path to knowledge. Strict empiricists, such as Lemley, argue that hard data-driven evidence is the only evidence capable of supporting reasonable theoretical conclusions. (As an aside, Lemley’s insistence that strict empirical data is the only evidence worthy of scholarly discussion is curious given his vigorous promotion of theories that lack any empirical support. I should also note that his conclusions regarding the current state of the empirical evidence are controversial in their own right, particularly when combined with a willingness to make broad policy recommendations that ignore important empirical criticisms of the data supporting those recommendations.)

While nonstrict empiricism recognizes the importance of data-driven evidence, it allows for the inclusion of evidence-based investigations into human nature and the way people distinguish right from wrong. This research into the nature of duty and obligation—also known as deontology—is seen by nonstrict empiricists in the IP field as a significant supplement to data-driven evidence in reaching meaningful conclusions on theories of ownership and property.

Strict versus nonstrict empiricism debates—sometimes referred to as utilitarian v. deontological—are not new and are not confined to discussions of ownership and intellectual property. What makes the dispute between Lemley and Merges notable is that Lemley, in disparaging Merges’ approach to research, adopts a rigid position on what evidence merits consideration by IP theorists, making the claim that only hard, data-driven empirical research should influence scholarship. It’s a bold assertion that has led many to offer critical responses to Lemley’s essay (see here, here, here, here, and here), especially considering that the exclusive and narrow-minded approach to IP theory he accuses Merges of supporting is exactly what his essay ultimately promotes.

Lemley Misrepresents Alternatives to Strict Empiricism

Merges first takes issue with Lemley’s critique of nonstrict empirical IP theories as inherently suspect due to their basis “on fundamental commitments that are resistant to counterarguments, particularly empirically-based counterarguments.” Lemley contends that these theories operate with a bias similar to blind religious faith, have an intrinsic disregard for reason, and have no place in scholarly discourse. According to Lemley, they are unscientific and unpersuasive views that cannot be seriously considered in discussions of IP law and policy. In his provocative essay, Lemley uses highly charged rhetoric to associate nonstrict empirical theorists with religious fundamentalists, resorting to ad hominem attacks and scare tactics to argue that his foundational beliefs are the “one true path.”

Merges points out that by relegating all non-empirical theories into a single, derogatory category, and raising his preferred empirical/utilitarian theory to a “true path to enlightenment” status, Lemley commits the fatal error of promoting an exclusive approach to scholarly discourse. Lemley’s argument has roots in the works of Oliver Wendell Holmes and Richard Posner, who dismissed non-empirical foundations as incapable of being influenced by reason. But Merges repeats that he is “not rejecting empirical evidence of all kinds, but expressing honest doubts about the adequacy of the available evidence,” and that Lemley’s mischaracterization of this skepticism is “more in the way of propaganda than scholarship.”

Research on People’s Moral Intuitions Reveals Significant Shared Judgments About IP

Lemley conflates that which is empirical with that which is rational, assuming there is no empiricism in the study of people’s moral judgments. But Merges notes that deontologists, or those who study the nature of duty and moral obligation, are “interested in shared judgments about right and wrong, rather than a strict and exclusive interest in empirical data about the consequences of different courses of action,” and that studies in this field can reveal important theoretical foundations of IP.

Discussing the importance of empirical evidence derived from studies on moral obligations, Merges presents a hypothetical ethical dilemma that has revealed “universal morals” shared across cultures, age groups, and other demographic categories. In the “trolley problem,” a person is presented with the scenario of an out-of-control trolley headed for a group of five bystanders, and the only way to avoid the trolley killing the entire group is to activate a switch that will divert the trolley to a path that will result in the death of one person. Another variation involves pushing one person in front of the trolley to save the group of five. Merges notes that researchers found widespread agreement across ethnicities, ages, and backgrounds about which actions were right and wrong, and that rather than being based on “irrational intuitions,” these judgments are based on a “universal moral grammar” hard-wired in all human beings.

These inherent principles have been directly tied to shared ideas regarding both tangible and intellectual property. Merges describes studies in which children are exposed to a person handling an object, then putting the object down, at which point someone else picks up the object. The children not only routinely associate ownership of tangible goods with first possession, but have also been found to associate creative labor with ownership. As Merges explains, “there are strong regularities in people’s thinking about ownership, fairness, and the importance of creative labor,” and these regularities are “less due to socialization in a particular culture and more due to a basic shared moral sense.”

Though strict empiricists criticize these findings as non-empirical forms of evidence, Merges rejects the idea that strictly empirical studies are the only form of rational and practical investigation. He confronts this argument by showing that empirical evidence focusing solely on consequentialism, or the final net consequence of an action, is not only largely impractical, but often leads to morally reprehensible outcomes.

Illustrating the impractical nature of strict consequentialism, Merges fills nearly an entire page with the myriad potential consequences of both strict and liberal liability standards for internet service providers in identifying copyright infringement. What’s clear is that the vast and complex consequences cannot support a “net grand total” conclusion of any kind, and that exercises in attempting to draw causal connections are hopeless. Merges also invokes a hypothetical example of enslaving writers and forcing them to produce original content to show that, while it may be the most efficient way to deliver works into the market, it’s a utilitarian approach to IP that’s contrary to the shared moral ideas of right and wrong that societies invoke to reject such operations.

IP Theory Should Be Inclusionary and Diverse

Merges dedicates the end of his essay to a summary and defense of his arguments in Justifying Intellectual Property, including the explicit assertions that his ideas do not “have any claim to exclusivity,” and that his book promotes a “public space” in which differing opinions are not only welcome, but essential to a meaningful debate. For Faith-Based Intellectual Property to insist otherwise leads Merges to question whether Lemley has any real awareness of the work he so vehemently criticizes.

Discussing his pluralistic approach, Merges identifies four “midlevel principles” that serve as a bridge between those with differing foundational commitments to IP. According to Merges, proportionality, efficiency, nonremoval from the public domain, and dignity make up these midlevel principles and create an “overlapping consensus” among scholars and theorists with otherwise conflicting views. Offering an example of this consensus, Merges discusses a recent amicus brief on which he and Mark Lemley cooperated and agreed. Merges explains that the fact that the two authors could discuss and agree on case outcomes and underlying policy rationales directly refutes Lemley’s conclusion that the sides have “nothing to say to each other.”

Concluding his essay, Merges maintains that Justifying Intellectual Property seeks to formulate a liberal theory of IP while respecting all manner of approaches, and that the book “neither predicts nor expects universal agreement.” And while Merges accepts that some may accuse the book of being wrong, naïve, boring, or didactic, to accuse it of being resistant to inclusive discourse and reasoned arguments reveals an unawareness of its actual content. Lemley’s response is an unfortunate reaction to an open assessment of the ways those with differing views can come together. Ironically, Lemley promotes the very same exclusive approach to scholarly debate that he claims to reject.

 


Despite What You Hear, Notice and Takedown is Failing Creators and Copyright Owners

In a recent op-ed in the LA Times, Professors Chris Sprigman and Mark Lemley praise the notice and takedown provisions of the Digital Millennium Copyright Act (DMCA) as “a bit of copyright law worth saving.” They argue that Section 512 of the DMCA continues to serve its purpose of balancing the rights of copyright owners and creators with those of Internet service providers (ISPs), while leaving both sides only “slightly disappointed.” Satisfying these two groups is indeed a difficult charge, but it’s simply disingenuous to suggest that creators and copyright owners are satisfied with a system so clearly in need of an overhaul.

As the Copyright Office embarks on its review of the DMCA, supporters and critics of the nearly twenty-year-old doctrine are weighing in on its effectiveness in addressing online infringement. Sprigman and Lemley claim that the “process has worked well for years,” and that the result of shifting more enforcement burden to ISPs “could be a broken Internet.” But for those creators and copyright owners who have their works resurface online just minutes after they are taken down, the Internet is already “broken.” The fact that piracy continues to intensify, despite incredible efforts to have infringing content taken down, shows that notice and takedown is largely ineffective.

As CPIP Senior Scholar Sean O’Connor testified before Congress, the notice and takedown system is not working for any of its intended beneficiaries. The constant game of whack-a-mole renders the system essentially futile for copyright owners and creators, and it creates significant burdens for ISPs that want to comply—especially small to mid-level companies that can’t afford compliance staff. Worse still, by shielding service providers from liability, the DMCA creates perverse incentives where there’s little downside to ignoring infringing content. In fact, reviewing content could lead to an ISP having knowledge of infringement and losing its safe harbor.

Now that the Copyright Office’s review is underway, it’s somewhat strange to see some supporters claim that all is well. But has anything actually changed since the Office announced its study? Of course not. The whack-a-mole problem remains, and the knowledge standards are still interpreted broadly to disproportionately favor ISPs. When one side says the system is working and the other side says it’s broken, the truth is that the system is not working well for everyone. Sprigman and Lemley can claim that the DMCA is “worth saving” only by downplaying the true plight of creators and copyright owners.

A concrete example of this struggle comes from the comments filed by Universal Music Group (UMG) as part of the Copyright Office’s study. UMG describes the painstaking efforts devoted to protecting just one artist’s creative work. In October of 2014, UMG and Big Machine Records launched a joint offensive to protect Taylor Swift’s “1989.” A staff of UMG employees dedicated 100% of their time and resources to manually searching for infringements on YouTube, SoundCloud, and Tumblr, and through March of 2016, they had sent over 66,000 DMCA takedown notices. Despite their considerable efforts, over 500,000 links to the album were identified, and “1989” was illegally downloaded nearly 1.4 million times from torrent sites.

Of course, this type of effort would be impossible to replicate for any works other than those that attract such massive attention. For most artists, the burden of monitoring the Internet and sending takedown notices would fall entirely on their shoulders, with no guarantee of putting a stop to the theft of their works. Sprigman and Lemley ignore these problems, instead claiming that since copyright owners sent “more than 500 million takedown requests just to Google last year,” we know that the “system is a powerful tool against pirated content.” That would be great, if true, but the reality is that those notices barely made a dent.

Sprigman and Lemley claim that the “genius of the DMCA” is that it “enables entertainment companies to turn piracy into legitimate revenue.” They give the example of “YouTube’s Content ID system,” which “gives copyright owners the opportunity to ‘claim’ their work and share in any advertising revenue rather than pull it off the site.” From the perspective of creators and copyright owners, the only “genius” of this system is that YouTube can legally present them with an unfair choice—suffer infringement and get nothing or monetize and get next to nothing.

While Sprigman and Lemley praise the “more than $1 billion” paid out by YouTube, the real question is how much more copyright owners and creators would have been paid in a properly functioning market. YouTube is consistently teeming with infringing videos—one recent report revealed that over 180 million infringing videos had been removed in 2014 alone. And the artists that YouTube’s largess supposedly benefits are loudly complaining about their exploitation. If Content ID is so great, why are so many creators and copyright owners upset with the arrangement? The monetization Google offers to copyright owners and artists is less than half of the royalties paid out by streaming services like Pandora, an amount that artists have denounced as already inequitable.

In her excellent piece on the fictions of the Content ID system, Grammy-winning artist Maria Schneider exposes Content ID as a way for Google to cash in by actually legitimizing and perpetuating piracy. She explains that a majority of creators who opt for monetization realize miserable percentages of ad revenue, while the continued illegal uploading of their music and content drives billions of users to YouTube’s platform. YouTube has turned the weakness of the DMCA into a system that exploits artists while offering royalty rates embarrassingly lower than what would be negotiated in a free market.

The current situation is untenable, and if change means “breaking” the Internet, then we should pull out the pickaxes and get to work. A system of notice and staydown, rather than just takedown, would help alleviate the constant and seemingly ineffectual vigilance required by the current system. By removing all copies of a protected work and blocking inevitable re-postings, ISPs would honor the original purpose of the DMCA while actually doing their part to earn the protection of the safe harbor provisions. Only by ensuring that targeted works do not resurface will ISPs respect the rights of those without whose content they would cease to exist.
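To make the staydown idea concrete, here is a minimal, hypothetical sketch of how an upload filter might work. The StaydownRegistry class and its exact-match fingerprint are invented for illustration; nothing here comes from the op-ed, the DMCA, or any real ISP’s implementation, and a production system would need robust perceptual fingerprinting (as Content ID uses) so that re-encoded copies still match.

```python
import hashlib

class StaydownRegistry:
    """Toy registry of works subject to takedown notices (hypothetical sketch)."""

    def __init__(self):
        self._noticed = set()  # fingerprints of works with valid notices

    def _fingerprint(self, data: bytes) -> str:
        # Placeholder: an exact-match hash stands in for a perceptual fingerprint.
        return hashlib.sha256(data).hexdigest()

    def register_takedown(self, work: bytes) -> None:
        # Runs once, when a valid notice is received (the "takedown").
        self._noticed.add(self._fingerprint(work))

    def allow_upload(self, upload: bytes) -> bool:
        # Runs on every later upload (the "staydown"): block known re-posts.
        return self._fingerprint(upload) not in self._noticed

registry = StaydownRegistry()
registry.register_takedown(b"bytes of a noticed sound recording")
print(registry.allow_upload(b"bytes of a noticed sound recording"))  # False: re-post blocked
print(registry.allow_upload(b"bytes of some unrelated upload"))      # True: allowed
```

The point of the sketch is only the shape of the obligation: match each upload once against a registry of noticed works, rather than requiring a fresh notice for every re-post.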

How anyone can honestly say that the current notice and takedown system is working for copyright owners and creators is mystifying given the constant calls for reform from creators and the numerous critical comments filed with the Copyright Office. The incredible magnitude of takedown notices sent and the seemingly unstoppable reappearance of infringing works online are a clear signal that the system is completely failing those it was meant to protect. Creators and copyright owners deserve a better chance at protecting the fruits of their labors, and the DMCA needs to be changed so that it truly is a system “worth saving.”


Federal Circuit Brings Some Clarity and Sanity Back to Patent Eligibility Doctrine

By Adam Mossoff and Kevin Madigan

Following the Supreme Court’s four decisions on patent eligibility for inventions under § 101 of the Patent Act, there has been much disruption and uncertainty in the patent system. The patent bar and most stakeholders in the innovation industries have found the Supreme Court’s decisions in Alice Corp. v. CLS Bank (2014), AMP v. Myriad (2013), Mayo Labs v. Prometheus (2012), and Bilski v. Kappos (2010) to be vague and doctrinally indeterminate. Given the moral panic about the patent system that has been created as a result of ten years of excessive lobbying in D.C. for legislation that weakens patent rights, judges have responded to the excessive discretion they have under these cases by invalidating whole swaths of patented innovation in the high-tech, biotech, and pharmaceutical industries. The Patent Office is also rejecting patent applications at record levels, even for traditional inventions outside of high-tech and life sciences directly affected by the recent § 101 case law.

In Sequenom v. Ariosa, the Supreme Court had the opportunity to bring some clarity to the law of patent eligibility and to rein in some of the judicial and Patent Office excesses, but unfortunately it rejected this opportunity when it denied Sequenom’s cert petition this past June. Fortunately, the Court of Appeals for the Federal Circuit is now taking the lead in providing some much-needed legal guidance on patent eligibility to the inventors and companies working in the innovation industries. In two recent decisions, Enfish v. Microsoft and Rapid Litigation Management v. CellzDirect, the Federal Circuit has set forth some important doctrinal guideposts for defining what counts as a patent-eligible invention. Not only do these two decisions bring some reason and clarity back to the law of patent eligibility under § 101, they provide important doctrinal insights on how stakeholders may wish to address this problem if they ultimately choose to seek relief in Congress.

Enfish and the Patentability of Computer-Implemented Inventions (a/k/a “Software Patents”)

At the time it was decided, some commentators believed that the Alice decision was a directive from on high that most, if not all, computer software programs were not patentable inventions. This was a surprising claim if only because the Alice Court did not once use the word “software” in its entire opinion. Of course, “software patent” is not a legal term in patent law; the proper term is “computer-implemented invention,” as used by the Alice Court, and so the Court may simply have been avoiding vague rhetoric from the patent policy debates. More important, though, this claim about Alice contradicts the Court’s opinion in Bilski just four years earlier, when the Court warned the Federal Circuit not to adopt a bright-line rule that limited § 101 to only physical inventions of the “Industrial Age,” because this created unnecessary and innovation-killing “uncertainty as to the patentability of software.”

Unfortunately, the ambiguities in Alice and in the Court’s prior patentable subject matter decisions, such as Mayo, left enough discretionary wiggle room in applying the generalized patent-eligibility test to permit judges and patent examiners to wage war on computer-implemented inventions. They thus made real again in the twenty-first century Justice Robert Jackson’s famous observation in 1949 that “the only patent that is valid is one which this Court has not been able to get its hands on.” Jungersen v. Ostby & Barton Co., 335 U.S. 560, 572 (1949) (Jackson, J., dissenting). As one commentator remarked several months after Alice was decided, “It’s open season on software patents.” The data over the next several years has borne out the truth of this statement.

The key argument against patents on computer-implemented inventions, such as key components of word processors, programs that run internet searches (like the patented innovation that started Google), and encryption software, is that such inventions are inherently “abstract.” The judicial interpretation of § 101 has long maintained that abstract ideas, laws of nature, and natural phenomena are unpatentable discoveries. In Alice, for instance, the Court held that a complex software program for international financial transactions was an “abstract idea” and thus unpatentable under § 101. But beyond claims that something long known is “abstract,” the Court has failed to define with precision what it means for a discovery to be abstract. With little to no specific guidance from the Alice Court, it is no wonder that judges and examiners have succumbed to the recent moral panic about patents and declared “open season” on patents covering computer-implemented inventions.

In this context, the Federal Circuit’s decision in Enfish v. Microsoft is extremely important because it ends the unreasoned, conclusory “I know it when I see it” rejections of patents as “abstract” by judges and examiners.

In Enfish, the Federal Circuit reversed a trial court’s summary judgment that a patent on a computer-implemented invention was an unpatentable abstract idea. The patent covered a type of database management system on computers, a classic incremental innovation in today’s digital world. In its decision, the trial court dissected the invention down into the basic ideas that all inventions can be reframed as representing; for example, methods of using internal combustion engines can easily be reframed in terms of the basic laws of thermodynamics. In this case, the trial court asserted that this patent on a computer-implemented invention covered merely the “abstract purpose of storing, organizing, and retrieving” information. The trial court thus easily concluded that the invention was merely “abstract” and thus unpatentable.

The Federal Circuit rejected the trial court’s conclusory assertion about the invention being “abstract” and further held that such assertions by courts are a legally improper application of § 101. With respect to the patent at issue in this case, Judge Todd Hughes’ opinion for the unanimous panel found that

the plain focus of the claims is on an improvement to computer functionality itself, not on economic or other tasks for which a computer is used in its ordinary capacity. Accordingly, we find that the claims at issue in this appeal are not directed to an abstract idea within the meaning of Alice.

More important, the Enfish court cautioned courts against the methodological approach adopted by the trial court in this case, in which “describing the claims at such a high level of abstraction and untethered from the language of the claims all but ensures that the exceptions to § 101 swallow the rule.” The court recognized that adopting a “bright-line” rule that computer-implemented inventions—the “software patents” decried by critics today—are necessarily “abstract” runs counter to both § 101 and the recent Supreme Court cases interpreting and applying this provision: “We do not see in Bilski or Alice, or our cases, an exclusion to patenting this large field of technological progress.”

Further confirming that Enfish represents an important step forward in how courts properly secure technological innovation in the high-tech industry, the Federal Circuit relied on Enfish in its recent decision in BASCOM Global Internet Services, Inc. v. AT&T Mobility LLC. Here, the Federal Circuit again rejected the trial court’s dissection of a patent claim covering a software program used on the internet into an “abstract” idea of merely “filtering content.” The BASCOM court emphasized that courts must assess a claim as a whole—following the Alice Court’s injunction that courts must assess a patent claim as “an ordered combination of elements”—in determining whether it is a patentable invention under § 101. As numerous patent scholars explained in an amicus brief filed in support of Sequenom in its failed cert petition before the Supreme Court, requiring a court to construe a “claim as a whole” or “the invention as a whole” is a basic doctrinal requirement that runs throughout patent law, as it is essential to ensuring that patents are properly evaluated both as to their validity and in their assertion against infringers.

CellzDirect and the Patentability of Discoveries in the Bio-Pharmaceutical Industry

The high-tech industry is not the only sector of the innovation industries that has been hit particularly hard by the recent § 101 jurisprudence. The biotech and pharmaceutical industries have also seen a collapse in the proper legal protection for their innovative discoveries of new therapeutic treatments. One recent study found that the examination unit at the Patent Office responsible for reviewing personalized medicine inventions (art unit 1634) has rejected 86.4% of all patent applications since the Supreme Court’s decision in Mayo. Anecdotal evidence abounds of numerous rejections of patent applications on innovative medical treatments arising from extensive R&D, and the most prominent one was the invalidation of Sequenom’s patent on its groundbreaking innovation in prenatal diagnostic tests.

In this light, the decision on July 5, 2016 in Rapid Litigation Management v. CellzDirect is an extremely important legal development for an industry that relies on stable and effective patent rights to justify investing billions in R&D to produce the miracles that comprise basic medical care today. In CellzDirect, the trial court found unpatentable under § 101 a patent claiming new methods for freezing liver cells for use in “testing, diagnostic, and treating purposes.” The trial court asserted that such a patent was “directed to an ineligible law of nature,” because scientists have long known that these types of liver cells (hepatocytes) could be subjected to multiple freeze-thaw cycles.

In her opinion for a unanimous panel, Chief Judge Sharon Prost held that the method in this case is exactly the type of innovative process that should be secured in a patent. Reflecting the same methodological concern as in Enfish and BASCOM, the CellzDirect court rejected the trial court’s dissection of the patent into its foundational “laws of nature” and conventional ideas long known in the scientific field:

The claims are simply not directed to the ability of hepatocytes to survive multiple freeze-thaw cycles. Rather, the claims of the ’929 patent are directed to a new and useful laboratory technique for preserving hepatocytes. This type of constructive process, carried out by an artisan to achieve “a new and useful end,” is precisely the type of claim that is eligible for patenting.

In other words, merely because a patentable process operates on a subject matter that constitutes natural phenomena does not mean the patent improperly claims either those natural phenomena or the laws of nature that govern them. To hold otherwise fails to heed the Mayo Court’s warning that “all inventions at some level embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas,” and thus to dissect all patents down into these unpatentable foundations would “eviscerate patent law.” The CellzDirect court was explicit about this key methodological point in evaluating patents under § 101: “Just as in [the industrial process held valid by the Supreme Court in] Diehr, it is the particular ‘combination of steps’ that is patentable here”—the invention as a whole.

Conclusion

The U.S. has long prided itself as having a “gold standard” patent system—securing to innovators stable and effective property rights in their inventions and discoveries. As scholars and economic historians have long recognized, the patent system has been a key driver of America’s innovation economy for more than two hundred years. This is now threatened under the Supreme Court’s § 101 decisions and the “too broad” application of the Court’s highly generalized patent-eligibility tests to inventions in the high-tech and bio-pharmaceutical sectors. The shockingly high numbers of rejected applications at the Patent Office and of invalidation of patents by courts, as well as the general sense of legal uncertainty, are threatening the “gold standard” designation for the U.S. patent system. This threatens the startups, new jobs, and economic growth that the patent system has been proven to support. Hopefully, the recent Enfish and CellzDirect decisions are the first steps in bringing back to patent-eligibility doctrine both reason and clarity, two key requirements in the law that have been sorely lacking for inventors and companies working in the innovation economy.


The One Year Anniversary: The Aftermath of #AliceStorm

The following post, by Robert R. Sachs, first appeared on the Bilski Blog, and it is reposted here with permission.

It’s been one year since the Supreme Court’s decision in Alice Corp. v. CLS Bank. On its face the opinion was relatively conservative, cautioning courts to “tread carefully” before invalidating patents, and emphasizing that the primary concern was to avoid preemption of “fundamental building blocks” of human ingenuity. The Court specifically avoided any suggestion that software or business methods were presumptively invalid. But those concerns seem to have gone unheeded. The Court’s attempt to sidestep the tricky problem of defining the boundary of an exception to patent eligibility—“we need not labor to delimit the precise contours of the ‘abstract ideas’ category in this case”—has turned into the very mechanism that is quickly “swallow[ing] all of patent law.” The federal courts, the Patent Trial and Appeal Board, and the USPTO are using the very lack of a definition to liberally expand the contours of abstract ideas to cover everything from computer animation to database architecture to digital photograph management and even to safety systems for automobiles.

Let’s look at the numbers to get an accurate picture of the implications of the Supreme Court’s decision. My analysis is a data-driven attempt to assess the effects of Alice one year out. It is with an understanding of how the Supreme Court’s decision is actually playing out in the theater of innovation that we can better project and position ourselves for what the future holds.

Alice at Court

[Table: § 101 decisions in the Federal Circuit and district courts]

As of June 19, 2015, there have been 106 Federal Circuit and district court decisions on § 101 grounds, with 76 decisions invalidating the patents at issue in whole or in part. In terms of patents and claims, 65% of challenged patents have been found invalid, along with 76.2% of the challenged claims.

The success rate of motions on the pleadings (including motions to dismiss and judgments on the pleadings) is extremely impressive: 67% of defense motions granted, invalidating 54% of asserted patents. There has never been a Supreme Court ruling that the presumption of validity does not apply to § 101—only the Court’s use of the originally metaphorical notion that eligibility is a “threshold” condition. Given that, and the general rule that to survive a motion to dismiss the patentee (historically) need only show a plausible basis that the complaint states a cause of action—here, a plausible basis that the patent claim is not directed to an abstract idea, law of nature, or natural phenomenon—one would be forgiven for thinking, as did former Chief Judge Rader in Ultramercial, LLC v. Hulu, LLC, that a “Rule 12(b)(6) dismissal for lack of eligible subject matter will be the exception, not the rule.” Apparently the rules change in the middle of the game.

Turning specifically to the Federal Circuit, the numbers are stark:

[Table: Federal Circuit § 101 decisions]

Of the 13 decisions, 11 are in software or e-commerce and only two are in biotech. The one case where the court held in favor of the patentee, DDR Holdings, LLC v. Hotels.com, L.P., appeared to offer a narrow avenue for patentees to avoid invalidation. However, only nine district court opinions have relied upon DDR to find patent eligibility, while over 30 court opinions have distinguished DDR as inapplicable. Even more interesting is the fact that in DDR the Federal Circuit essentially held that creating a website that copies the look and feel of another website is patent eligible. In Silicon Valley, that’s called phishing, and it’s not a technology in which most reputable companies invest.

Alice at the Office

Alice is similarly affecting practitioners before the USPTO. In December 2014, the Office issued its Interim Guidance on Patent Subject Matter Eligibility, providing guidance to patent examiners on how to apply the Alice, Mayo, and Myriad decisions, along with various Federal Circuit decisions, to claims during prosecution. Importantly, the Guidance noted that “the Supreme Court did not create a per se excluded category of subject matter, such as software or business methods, nor did it impose any special requirements for eligibility of software or business methods,” and it reminded examiners that “Courts tread carefully in scrutinizing such claims because at some level all inventions embody, use, reflect, rest upon, or apply a law of nature, natural phenomenon, or abstract idea.” Alas, most patent examiners are acting as if the patent applications before them are the exceptions to these cautionary instructions.

With the assistance of Patent Advisor, I compiled a dataset of almost 300,000 office actions and notices of allowance, sampled in two-week periods during 2012, 2013, 2014, and early 2015, plus all actions during March, April, and May 2015, across all technology centers:

[Table: number of sampled actions by period and technology center]

About 100,000 actions were notices of allowance, leaving about 200,000 office actions. Each office action was coded as to whether it included rejections under §§ 101, 102, and 103. For each office action, the art unit and examiner were identified as well, along with the status of the application (abandoned, pending, or patented) as of the date the data was obtained. I then analyzed the data for § 101 rejections, allowance rates, and examiner rejection rates. Here’s what I found.
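As a rough illustration of the kind of tabulation involved, here is a minimal sketch in Python. The file name and column names are hypothetical, and this is not the actual Patent Advisor export format or my analysis pipeline:

```python
import pandas as pd

# Hypothetical layout: one row per action, with columns for tech center,
# sample period, whether the action was a notice of allowance, and whether
# it contained a § 101 rejection.
actions = pd.read_csv("office_actions.csv")  # columns: tech_center, period, is_noa, has_101

# Share of ALL actions (office actions + NOAs) in each tech-center/period
# cell that were office actions containing a § 101 rejection.
rej_101 = actions["has_101"].astype(bool) & ~actions["is_noa"].astype(bool)
rates = (
    actions.assign(rej_101=rej_101)
           .groupby(["tech_center", "period"])["rej_101"]
           .mean()              # fraction of actions with a § 101 rejection
           .unstack("period")   # one row per tech center, one column per period
           .mul(100)
           .round(2)
)
print(rates)  # e.g., TC 1600: 6.81 in Jan. 2012, 11.86 in May 2015 (figures from the post)
```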

Percent of all Actions with § 101 Rejections

[Table: percent of all actions with § 101 rejections, by technology center]

Here, we have the percentage of all actions in each period that received a § 101 rejection, considering both rejections issued and notices of allowance. The black line separates pre-Alice from post-Alice data. For example, in TC 1600, the biotech area, 6.81% of all actions issued in January 2012 (counting both office actions and notices of allowance) were office actions with § 101 rejections; by May 2015 that percentage had almost doubled, to 11.86%.

Overall, the data show that in 2012 subject matter rejections were mainly in the computer-related Tech Centers (2100, 2400) and began declining thereafter, while escalating in biotechnology (1600) and the so-called “business methods” Tech Center, TC 3600, following Mayo and Alice. Other technology centers, such as semiconductors and mechanical engineering, had essentially low and constant rejection rates. But that’s not because there are no software patents in these technology centers: you find plenty of software patents in these groups. Rather, my view is that it is because examiners in these groups treat software patents as they do any other technology.

The rejection rates in Tech Center 3600, in the 30–40% range, are higher than in any other group, but they also mask what’s really going on, since TC 3600 covers more than business methods. Tech Center 3600 has nine work groups:

Percent of all Actions with § 101 Rejections in TC 3600 Work Groups

[Table: § 101 rejection rates in TC 3600 work groups]

In TC 3600 most of the work groups handle good old-fashioned machines and processes, such as transportation (3610), structures like chairs and ladders (3630), airplanes, agriculture, and weapons (3640), wells and earth-moving equipment (3670), etc. Three work groups handle e-commerce applications: specifically, 3620, 3680, and 3690. Here we see that these groups have significantly higher § 101 rejection rates than the rest of TC 3600. But let’s drill down further.

Each of work groups 3620, 3680, and 3690 has between five and ten individual art units that handle specific types of e-commerce technologies, but they are not all under the same work group. For example, business-related cryptography is handled by both art units 3621 and 3685; healthcare and insurance is handled by art units 3626 and 3686; operations research is handled in 3623, 3624, 3682, and 3684. If we consolidate the data according to technology type and then look at rates of § 101 rejections, we get the following:

Percent of all Actions with § 101 Rejections in E-Commerce Art Units by Technology Type

[Table: § 101 rejection rates in e-commerce art units by technology type]
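Continuing the sketch above, the consolidation step might look like the following. The art-unit-to-technology mapping is built only from the examples just given and is deliberately incomplete:

```python
# Hypothetical mapping from e-commerce art units to technology types,
# based only on the examples in the paragraph above.
ART_UNIT_TO_TECH = {
    3621: "business cryptography", 3685: "business cryptography",
    3626: "healthcare/insurance",  3686: "healthcare/insurance",
    3623: "operations research",   3624: "operations research",
    3682: "operations research",   3684: "operations research",
}

# Assumes the same `actions` frame as before, plus an art_unit column.
ecomm = actions[actions["art_unit"].isin(ART_UNIT_TO_TECH)].copy()
ecomm["tech_type"] = ecomm["art_unit"].map(ART_UNIT_TO_TECH)
ecomm_rates = (
    ecomm.assign(rej_101=ecomm["has_101"].astype(bool) & ~ecomm["is_noa"].astype(bool))
         .groupby(["tech_type", "period"])["rej_101"]
         .mean()
         .unstack("period")
         .mul(100)
         .round(1)
)
print(ecomm_rates)
```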

What’s going on? After Bilski in 2010, § 101 rejections were running between 17% and 50%. Not great, but tolerable, since these rejections were mostly formal and could be overcome with amendments adding hardware elements (“processor,” “memory”) to method claims or inserting “non-transitory” into Beauregard claims.

But after Alice, everything changed, and § 101 rejections started issuing like paper money in a hyperinflation economy. If your perception as a patent prosecutor was that every application was getting rejected under § 101, this explains your pain. Here’s another view of this data, in terms of the actual number of § 101 rejections per sample period:

Number of Office Actions with § 101 Rejections in E-Commerce Art Units by Technology Type

[Table: number of office actions with § 101 rejections in e-commerce art units by technology type]

Notice here that the number of office actions in March 2015 fell dramatically, and then in April the floodgates opened and hundreds of actions issued with § 101 rejections. This is consistent with the Office’s statements in January 2015 that it was training examiners in view of the 2014 Interim Guidance, so office actions were being held until the training was completed. Apparently, the training skipped the part about no per se exclusions of business methods.

Now let’s consider notice of allowance rates. First with respect to all Tech Centers.

Percent of Actions that Are Notices of Allowance

[Table: notice of allowance rates, all technology centers]

This data reflects, of all the actions issued in a given period, the percentage that were notices of allowance. (Note here that, contrary to the preceding tables, red cells are low percentages and green cells are high, since notices of allowance are good things, not bad things.) The numbers look good, with a general increasing trend over time.

Now consider what’s happening in TC 3600’s business methods art units.

Percent of Actions that Are Notices of Allowance in Business Methods

[Table: notice of allowance rates in business methods art units]

Now the picture is quite different. The rate of NOAs drops dramatically after Alice, especially in finance and banking and in operations research. If it seemed that you were no longer getting NOAs, this is why. The zero percent rate in March 2015 is a result of the Office holding up actions and NOAs in view of the Interim Guidance training, as mentioned above.

Patents issued in the business methods art units typically are classified in Class 705 for “Data Processing.” I identified all patents with a primary classification in Class 705 since January 2011, on a month-by-month basis, to identify year-over-year trends. Again, the black line separates pre-Alice from post-Alice data.

[Table: Class 705 patents issued per month]

This table shows a precipitous decline in the number of business method patents issued following Alice, especially year over year. The lag between the June, 2014 Alice decision and the drop off in October 2014 is an artifact of the delay between allowance and issuance, as well as the USPTO’s unprecedented decision to withdraw an unknown number of applications for which the issue fee had already been paid, and issue § 101 rejections. It’s an interesting artifact, as well, that the number of Class 705 patents issued peaked in the month after Alice: you have to remember that these patents were allowed at least three months, and as much as a year, before the Alice decision; it just took a long time to actually get printed as a patent.
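A month-by-month count and year-over-year comparison of this kind might be computed as follows; the input file and column names are hypothetical stand-ins, not the actual dataset:

```python
import pandas as pd

# Hypothetical input: one row per issued patent with a primary
# classification in Class 705, with its issue date.
patents = pd.read_csv("class_705_patents.csv", parse_dates=["issue_date"])

monthly = patents.set_index("issue_date").resample("MS").size()  # patents issued per month
yoy_change = monthly.pct_change(12).mul(100).round(1)            # year-over-year % change
print(monthly.tail(12))
print(yoy_change.tail(12))
```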

Next, we’ll consider abandonment rates on a comparative basis, looking at the percentages of applications that were ultimately abandoned in relation to whether or not they received a § 101 rejection. We’ll compare the data from January 2012 to July 2014. Again, consider the entire examiner corps:

Percent of Abandoned Applications with Prior § 101 Rejection

[Table: abandonment rates by prior § 101 rejection, all technology centers]

Here we see that of the applications that were abandoned during the respective sample periods, the vast majority did not have a prior § 101 rejection. Only in TC 3600 did the majority shift after Alice, with 51.83% of the applications that received § 101 rejections in July 2014 subsequently abandoned by May 31, 2015. Again, let’s drill down into the business method art units in TC 3600:

Percent of Abandoned Applications with Prior § 101 Rejection

[Table: abandonment rates by prior § 101 rejection, business methods art units]

First, prior to Alice, abandonments in the business method units appeared to result more frequently from rejections other than § 101 rejections, typically prior art rejections. This is shown by the fact that the Jan. 2012 “No” column (no prior § 101 rejection) is greater than the Jan. 2012 “Yes” column. Then, after Alice, there is a huge shift, with the vast majority of abandoned applications having had § 101 rejections, as shown by the July 2014 “Yes” column. The vast majority of abandonments, upwards of 90%, followed a § 101 rejection. That’s applicants essentially giving up over what only a few years ago was a relatively minor hurdle. That’s what happens when you change the rules in the middle of the game. Second, there is also significant differential behavior in the business method areas as compared to the rest of the technology centers after Alice.

Here’s my personal favorite.

Rates of Examiner § 101 Rejections in TC 3600

[Table: examiner § 101 rejection rates in TC 3600]

This table shows the numbers of examiners in the business method art units with respect to the percentage of applications in which they issued § 101 rejections after Alice. The first row shows that during the sampled periods since Alice, 58 business methods examiners issued § 101 rejections in 100% of their applications, for a total of 443 applications. Twenty examiners issued § 101 rejections for between 90% and 99% of their cases, covering 370 applications. In short, 199 examiners issued § 101 rejections more than 70% of the time, covering 3,304 applications or about 70.6% of all applications. This is not “treading carefully.”
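A back-of-the-envelope check of those figures, using only the numbers quoted above (the implied total is my inference, not a number reported here):

```python
# Figures quoted above for post-Alice business methods examiners.
apps_highest = 443       # before the 58 examiners rejecting 100% of the time
apps_90_to_99 = 370      # before the 20 examiners rejecting 90-99% of the time
apps_over_70 = 3304      # before the 199 examiners rejecting more than 70% of the time
share_over_70 = 0.706    # "about 70.6% of all applications"

print(apps_highest + apps_90_to_99)         # 813 apps before the two most extreme tiers
print(round(apps_over_70 / share_over_70))  # ~4680 sampled business-methods applications implied
```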

We find similar, though less dramatic, trends and variations in TC 1600, which handles biotechnology, pharma, and chemistry.

Percent of all Actions with § 101 Rejections in TC 1600 Work Groups

[Table: § 101 rejection rates in TC 1600 work groups]

The red line separates pre-Mayo/Myriad data from post-Mayo/Myriad data, and the increase in the post-period is significant. Here too, the various work groups mask the more significant rejection rates in specific technology areas, with the rejection rate in microbiology first jumping to 34.6% post-Mayo and then steadily climbing to the current 53.2%.

Percent of all Actions with § 101 Rejections in TC 1600 by Technology

[Table: § 101 rejection rates in TC 1600 by technology type]

This table breaks down the work groups into technology types, sorted by average rejection rate over the past four months. Following Alice, we see a significant increase in eligibility rejections in bioinformatics-related applications—inventions that rely on the analysis and identification of biological and genetic information, and which are frequently used in diagnostics and drug discovery. This is especially disconcerting because bioinformatics is critical to the development of new diagnostics, therapies, and drugs.

Note as well the enormous spike in rejections for plant-related applications: from 0% between July 2014 and April 2015 to 50% in May 2015. This is likely a result, again, of the USPTO’s Interim Guidance, which essentially instructed examiners to reject any claim that included any form of a natural product.

At least pesticides and herbicides are safe from Alice, since we definitely need more of those. The irony is that the more pesticides and herbicides that come to market, the more we need bioinformatics inventions to identify and treat conditions potentially resulting from these products.

Alice at the Board

The Patent Trial and Appeal Board has been even more hostile to software and business methods patents under the Covered Business Method review program:

                                 Total Petitions    Petitions Granted    Percent Invalid
PTAB CBM Institution on § 101          72                  64                 89%
PTAB Final Decisions on § 101          27                  27                100%
Covered Business Method review is available for patents that claim “a method, apparatus, or operation used in the practice, administration, or management of a financial product or service.” The Board takes a very broad view of what constitutes a financial product or service: if the patent specification happens to mention that the invention may be used in a financial context such as banking, finance, shopping or the like, then that’s sufficient. The Board has found CBM standing in 91% of petitions, and instituted trial in 89% of petitions asserting § 101 invalidity. Once a CBM trial has been instituted, the odds are heavily in the petitioner’s favor: of the 27 final CBM decisions addressing § 101, the Board has found for the petitioner 100% of the time.

Finally, we look at the Board’s activity in handling ex parte appeals from § 101 rejections for the period of March 1, 2015 to May 30, 2015:

  • 32 Ex Parte Decisions on § 101, with 15 in TC 3600
  • 28 Affirmances overall, 13 in TC 3600
  • Two Reversals on § 101, both in TC 3600
  • Four New Grounds of Rejection for § 101

Following suit with how it is handling CBMs, the Board is also heavily supporting examiners by affirming § 101 rejections. More disconcerting is the trend of new grounds of rejection under § 101. While only four were issued in this period, there have been several dozen since Alice. In this situation, the applicant has appealed, for example, a § 103 rejection. The Board can reverse the examiner on that rejection, but then sua sponte reject all of the claims under § 101. What are the odds that the examiner will ever allow the case? Close to zero. What are the odds that an appeal back to the Board on the examiner’s next § 101 rejection will be reversed? If the Board’s 100% invalidation rate in final CBM decisions on § 101 is any indication, then you know the answer.

Conclusions

Looking at the overall context of the Alice decision, it’s my view that the Supreme Court did not intend this landslide effect. While the Justices were certainly aware of the concerns over patent trolls and bad patents, they framed their decision not as a broadside against these perceived evils, but as a simple extension of Bilski and the question of whether computer implementation of an abstract idea imparts eligibility. At oral argument, the members of the Court specifically asked if they needed to rule on the eligibility of software, and they were told by CLS and the Solicitor General that they did not. To the extent that there is broad language in the opinion, it is the cautionary instruction to the courts to avoid disemboweling the patent law from the inside, and the emphasis on preemption of fundamental ideas—not just any ideas—as the core concern of the exclusionary rule. The evidence above shows that these guideposts have been rushed past quite quickly on the way to some goal other than the preservation of intellectual property rights.

If the present trends hold, and I see no reason to suggest that they will not, we will continue to see the zone of patent eligibility curtailed in software (not to mention biotechnology after Mayo and Myriad). Indeed, the more advanced the software technology—the more it takes over the cognitive work once done exclusively by humans, the more seamless it becomes in the fabric of our daily lives—the less patent eligible it is deemed to be by the courts and the USPTO. What technologies will not be funded, what discoveries will not be made, what products will never come to market we do not know. What we do know is this: there is only one law that governs human affairs, and that is the law of unintended consequences.


Federal Circuit Threatens Innovation: Dissecting the Ariosa v. Sequenom Opinion

By Patent Publius

Earlier this month, the Federal Circuit issued its opinion in Ariosa v. Sequenom, a closely-watched biotechnology case with significant repercussions for patent-eligibility analysis generally. Unfortunately, the Federal Circuit misapplies the Supreme Court’s analytical framework from Mayo v. Prometheus, striking down Sequenom’s patent on an important innovation in the prenatal diagnosis of fetal abnormalities. The shame here is that the Mayo opinion itself was unnecessarily broad, and the Federal Circuit has now interpreted it to be even broader.

Section 101 of the Patent Act provides that “[w]hoever invents or discovers any new and useful process, machine, manufacture, or composition of matter . . . may obtain a patent therefor,” but there are judicial exceptions for “laws of nature, natural phenomena, and abstract ideas.” Those exceptions are relevant here, where the Federal Circuit considers whether the claimed method of using cell-free fetal DNA (“cffDNA”) to make diagnoses is patentable subject matter.

In the Mayo opinion, the Supreme Court established a two-step analysis for determining whether method claims merely “set forth laws of nature” or instead apply those natural laws with “additional features” so as to become patent-eligible processes. The first step looks at whether the claims are directed to a patent-ineligible law of nature, and the second step looks at whether additional elements “transform the nature of the claim” into something that amounts to more than a claim on the law of nature itself.

Applying Mayo to the case at hand, the Federal Circuit’s analysis of the first step is perfunctory:

In this case, the asserted claims of the ‘540 patent are directed to a multistep method that starts with cffDNA taken from a sample of maternal plasma or serum—a naturally occurring non-cellular fetal DNA that circulates freely in the blood stream of a pregnant woman. . . . It is undisputed that the existence of cffDNA in maternal blood is a natural phenomenon. . . . The method ends with paternally inherited cffDNA, which is also a natural phenomenon. The method therefore begins and ends with a natural phenomenon. Thus, the claims are directed to matter that is naturally occurring.

The Federal Circuit’s conclusion that the method “begins and ends with a natural phenomenon” tells us very little about how this principle is to be applied generally. Certainly, the method begins with a biological sample of maternal plasma or serum that contains paternally-inherited cffDNA, and it makes sense to say that it begins with a natural phenomenon. Of course, everything begins with a natural phenomenon, so this is hardly instructive.

But it’s inaccurate to say that the method simply ends with cffDNA. The method takes the minuscule amount of cffDNA found in the sample and amplifies it exponentially to detectable levels. What comes out of the process is an artificially-enriched substance that, unlike the maternal plasma or serum fed into it, gains significant new utility and can be used for many diagnostic purposes. That is, the method ends with a substance that is anything but a natural phenomenon.

Applying the second step of the Mayo framework, the Federal Circuit finds that Sequenom’s claimed methods are not significantly transformative:

Like the patentee in Mayo, Sequenom contends that the claimed methods are patent eligible applications of a natural phenomenon, specifically a method for detecting paternally inherited cffDNA. Using methods like PCR to amplify and detect cffDNA was well-understood, routine, and conventional activity in 1997. The method at issue here amounts to a general instruction to doctors to apply routine, conventional techniques when seeking to detect cffDNA. Because the method steps were well-understood, conventional and routine, the method of detecting paternally inherited cffDNA is not new and useful. The only subject matter new and useful as of the date of the application was the discovery of the presence of cffDNA in maternal plasma or serum.

The last sentence is the most perplexing: The “discovery of the presence of cffDNA in maternal plasma or serum” is what sets Sequenom’s method apart from that which was “well-understood, routine, and conventional activity in 1997.” The problem here stems from the Federal Circuit’s failure to consider the claimed method as a whole, as it purportedly sets out to do: “[W]e next consider the elements of each claim both individually and ‘as an ordered combination’ to determine whether additional elements ‘transform the nature of the claim’ into a patent-eligible application.”

Undoubtedly, some parts of Sequenom’s method were already well-known. No one denies, for example, that some of the techniques involved in amplifying and then detecting cffDNA were, in their general features, already conventional activity in the field (e.g., PCR). What makes the Sequenom method patentable is the sum of its parts, that is, the method as a whole that the Federal Circuit acknowledges to contain the new and useful discovery of cffDNA in the maternal plasma or serum.

This is the principal feature of Sequenom’s claimed invention and its central argument throughout the litigation. Yet, the Federal Circuit relegates it to one of “Sequenom’s remaining arguments” and addresses it in a brief paragraph near the end of the opinion, where it inexplicably claims: “This argument implies that the inventive concept lies in the discovery of cffDNA in plasma or serum. Even if so, this is not the invention claimed by the ’540 patent.” On the contrary, this discovery is anything but conventional, and the method as a whole transforms a natural phenomenon into something both artificial and patentable.

Overbroad (and Dangerous) Principles

The overbreadth of the Federal Circuit’s analysis threatens diagnostic methods across the board. If a method of detecting a natural phenomenon is always “directed to” that natural phenomenon, as the Federal Circuit suggests, then all such methods are prima facie patent-ineligible under the first step of the Mayo framework and must fight the uphill battle under its second step. This is particularly troubling since virtually all diagnostic tests detect natural phenomena. Moreover, the Federal Circuit’s application of the second step of the Mayo framework looks at each part of the method individually, ignoring the claimed method as a whole.

Not only is this principle breathtakingly broad in the damage it could cause to the diagnostics industry, it is neither required by, nor even consistent with, the controlling case law. Only claims to natural phenomena are per se patent-ineligible; however, applications of natural phenomena are generally patentable. Detecting a natural phenomenon is not the same thing as the phenomenon itself. It is instead a specific application of that phenomenon. While the Federal Circuit states that applications of natural phenomena are patent-eligible, it quickly proceeds to categorically suggest a principle under which all diagnostic inventions may have one foot in the Section 101 grave.

Another overly-broad principle appears in this statement from the Federal Circuit’s opinion: “For process claims that encompass natural phenomenon, the process steps are the additional features that must be new and useful.” This may at first seem obvious and uncontroversial, but in the context of the rest of the opinion, it proves quite problematic. The Federal Circuit cites Parker v. Flook as support: “The process itself, not merely the mathematical algorithm, must be new and useful.” But note the subtle distinction between the two quotes. The Supreme Court discussed the “process itself,” while the Federal Circuit discusses the “process steps.”

This distinction has two important effects. First, it is one of many signals in the opinion that demonstrate the Federal Circuit’s improper dissection of the claimed method into its component parts. Rather than consider whether the “process itself” is “new and useful,” as the Flook opinion had done, the Federal Circuit analyzes each step individually. There’s no consideration of how the steps integrate into the process as a whole, and there’s no mention of whether that entire process claims something other than the natural phenomenon itself.

Second, the Federal Circuit looks at each step in a very general way and ignores the details of the steps that confer patent eligibility. For example, the opinion spends much time discussing how routine the PCR method was at the time of filing. But Sequenom never claimed the PCR method itself. The Federal Circuit fails to address Sequenom’s central argument: The claimed method is a new process of detecting cffDNA by devising a novel sample source from which to extract it, namely, maternal plasma or serum. The application and adaptation of known techniques in this inventive way to a newly-discovered sample source is not conventional.

Finally, the most problematic new principle that may emerge from this opinion is a subtle, yet very significant, extension of Mayo to invalidate claims directed to routine and conventional applications of natural laws. Mayo teaches that the mere addition of what is purely routine and conventional at the time of filing cannot save a claim directed to a law of nature: “In particular, the steps in the claimed processes (apart from the natural laws themselves) involve well-understood, routine, conventional activity previously engaged in by researchers in the field.”

The Federal Circuit appears to exclude from the patent system a routine application of a law of nature, rather than, as Mayo requires, a law of nature to which merely routine activities have been appended. That is, if one skilled in the art could, after being informed of a newly-discovered law of nature, use routine skill to arrive at the claimed invention, then that claimed invention may be invalidated under the Federal Circuit’s reasoning.

This is contrary to Mayo, and it could conceivably invalidate huge swaths of meritorious inventions. Once the principles underlying a new method are known, application of those principles to devise that method will very often be obvious. The Supreme Court has been very consistent in saying that applications of laws of nature are patent-eligible, including those applications that would have been obvious in view of newly-discovered laws of nature. It is a subtle, but important, point to recognize that Mayo did not say the opposite, as the Federal Circuit now interprets it.

The Preemption Question

One potential bright spot in the Federal Circuit’s opinion is its treatment of preemption. Instead of being a test for patent eligibility, preemption is properly understood as being solely a policy underlying eligibility exclusions. It can at most serve as an after-the-fact check on whether an already-reached conclusion of eligibility is consistent with this policy. The Federal Circuit here mostly validates this position:

The Supreme Court has made clear that the principle of preemption is the basis for the judicial exceptions to patentability. Alice, 134 S. Ct. at 2354 (“We have described the concern that drives this exclusionary principle as one of pre-emption”). For this reason, questions on preemption are inherent in and resolved by the § 101 analysis. . . . Where a patent’s claims are deemed only to disclose patent ineligible subject matter under the Mayo framework, as they are in this case, preemption concerns are fully addressed and made moot.

This may ultimately be a hollow victory, however. The Federal Circuit also says: “While preemption may signal patent ineligible subject matter, the absence of complete preemption does not demonstrate patent eligibility.” The problem here is that it is impossible to ever show complete preemption because it is impossible to know at the time of filing whether something outside the claims could also be conceived. Inventions are, by definition, unforeseeable.

Moreover, allowing anything less than complete preemption to be sufficient to invalidate a claim threatens to invalidate far too much subject matter. By their very nature, patents are preemptive. Allowing courts and patent examiners to freely draw the line between allowable and prohibited levels of preemption invites unpredictable and arbitrary decisions based on personal value judgments. That very problem arose here, where the district court held the claims invalid, at least in part, because they covered what the judge deemed to be “the only commercially viable way of detecting” the embodiment of the law of nature.

The Promising Potential in Judge Linn’s Concurrence

Judge Linn’s concurrence is promising, but it falls short of its full potential. Judge Linn does a better job than the majority in recognizing and understanding the legal significance of the important facts of this case:

[N]o one was amplifying and detecting paternally-inherited cffDNA using the plasma or serum of pregnant mothers. Indeed, the maternal plasma used to be “routinely discarded,” . . . because, as Dr. Evans testified, “nobody thought that fetal cell-free DNA would be present.”

It is encouraging to see that a Federal Circuit judge has finally gone on record to point out the problems caused by ever-broadening applications of Mayo:

I join the court’s opinion invalidating the claims of the ‘540 patent only because I am bound by the sweeping language of the test set out in Mayo Collaborative Services v. Prometheus Laboratories, Inc. . . . In my view, the breadth of the second part of the test was unnecessary to the decision reached in Mayo. This case represents the consequence—perhaps unintended—of that broad language in excluding a meritorious invention from the patent protection it deserves and should have been entitled to retain.

Judge Linn errs, however, in acquiescing to the view that Mayo requires the majority’s conclusion. His concurrence generally reads more like a dissent, but he undercuts his own criticism of Mayo and its effects by calling his opinion a “concurrence.” As he laments:

The Supreme Court’s blanket dismissal of conventional post-solution steps leaves no room to distinguish Mayo from this case, even though here no one was amplifying and detecting paternally-inherited cffDNA using the plasma or serum of pregnant mothers.

But the second half of this sentence shows the critical distinction that makes Sequenom’s claims patent-eligible, even in view of Mayo. Unlike the claims analyzed in Mayo, Sequenom’s process is new and not routinely engaged in by researchers in the field. Judge Linn even states the point better elsewhere in his own concurrence:

Unlike in Mayo, the ‘540 patent claims a new method that should be patent eligible. While the instructions in the claims at issue in Mayo had been widely used by doctors—they had been measuring metabolites and recalculating dosages based on toxicity/inefficacy limits for years—here, the amplification and detection of cffDNA had never before been done.

Judge Linn should be praised for critiquing Mayo as bad law that has led to the invalidation of untold meritorious patent claims. Unfortunately, however, he may have unintentionally contributed to the expansive scope of Mayo about which he complains by failing to factually distinguish (and hence cabin) the Supreme Court’s opinion when presented with such a good opportunity to do so.

All told, the Federal Circuit’s opinion in Ariosa v. Sequenom is a predictable, yet unfortunate, application of the Supreme Court’s disastrous reasoning in Mayo. The unintended consequences of the Supreme Court’s opinion have been further realized in the Federal Circuit’s denial of Sequenom’s innovative claimed method for diagnosing fetal abnormalities. Only time will tell how many other innovations will suffer under the Supreme Court’s careless expansion of Section 101’s patent eligibility analysis.

Categories
Innovation Intellectual Property Theory Inventors Legislation Patent Law Patent Litigation Patent Theory Uncategorized

Unintended Consequences of “Patent Reform”: The Customer Suit Exception

In the last two weeks, the House and Senate Judiciary Committees marked up wide-ranging patent legislation ostensibly aimed at combating frivolous litigation by so-called “patent trolls.” But while the stated purpose of the House and Senate bills—H.R. 9 (the “Innovation Act”) and S. 1137 (the “PATENT Act”), respectively—is to combat abusive litigation, a closer look at the actual language of the bills reveals broad provisions that go far beyond deterring frivolous lawsuits. This far-reaching language has raised concerns in the innovation industries that, instead of curbing ambulance-chasing patentees, Congress is preparing to fundamentally weaken the property rights of all inventors, emboldening patent infringers in the process.

The “customer suit exception” or “customer stay” provisions that appear in both bills are particularly troubling. These provisions direct courts to stay patent infringement suits against “retailers” and “end users” in favor of suits involving manufacturers higher up the supply chain. While the basic idea makes sense—we’ve all heard stories of coffee shops being sued for patent infringement because of the Wi-Fi routers they used—the provisions are drafted so broadly and inflexibly that they invite abuse and gamesmanship by infringers at the expense of legitimate patent owners.

Both the Innovation Act and the PATENT Act provide that “the court shall grant a motion to stay at least the portion of the action against a covered customer” that relates “to infringement of a patent involving a covered product or covered process” if certain conditions are met. The first condition in both bills is that the “covered manufacturer” must be a party to the same action or to a separate action “involving the same patent or patents” related to “the same covered product or covered process.” In other words, so long as the manufacturer is challenging the patentholder, the customer is off the hook.

The two main problems here are that (1) the definition of “covered customer” in both bills is exceedingly broad, such that almost any party can claim to be a “customer,” and (2) the provisions leave the courts no discretion in deciding whether to grant a stay, forcing them to halt proceedings even when it’s not warranted.

Both bills define “covered customer” as “a retailer or end user that is accused of infringing a patent or patents in dispute.” “Retailer,” in turn, is defined as “an entity that generates” its “revenues predominantly through the sale to the public of consumer goods and services,” and it explicitly excludes “an entity that manufactures” a “covered product or covered process” or “a relevant part thereof.” Thus, a “retailer” is a “customer,” but a “manufacturer” is not.

This language is far broader than necessary to achieve the stated purpose of protecting downstream retailers and end users. The Senate’s section-by-section breakdown of the PATENT Act claims that the “customer stay is available only to those at the end of the supply chain.” But the actual definitions in both bills are so broad that almost any entity in the supply chain would be eligible for a mandatory stay. This is so because almost all manufacturers are also retailers of other manufacturers’ products; that is, almost any manufacturer could claim to be a “customer.”

Take, for example, a smartphone company that sources its components from a third-party manufacturer. If the smartphone company were sued for patent infringement over a component, it could claim to be a “covered customer” under both bills. Many smartphone companies generate “revenues predominantly through the sale to the public of consumer goods and services,” and they would not be considered “an entity that manufactures” the component. As a “retailer,” the smartphone company would be entitled to a mandatory stay, even though it’s nothing like the mom-and-pop coffee shop the customer stay provisions are designed to help. A district court would be forced to grant the stay, even if doing so hampered a legitimate patentholder’s ability to enforce its property right.

Against this backdrop, it’s important to keep in mind that the decision to stay proceedings has historically been left to the discretion of judges. Sometimes there are indeed good reasons to grant a stay, but each case is unique, and courts frequently weigh many factors in deciding whether a stay is appropriate. Instead of recognizing this dynamic, the Innovation Act and the PATENT Act mandate a one-size-fits-all solution to an issue that is best determined on a case-by-case basis. In effect, the bills tie the hands of district court judges, forcing them to stay suits even when the equities dictate otherwise.

While in some cases a manufacturer may be the more appropriate party to litigate a patent suit, it is not always true that efficiency or justice dictates staying a suit against a customer in favor of litigation involving the manufacturer. Courts generally balance several factors, such as convenience, availability of witnesses, jurisdiction over other parties, and the possibility of consolidation, when deciding whether to grant a stay. Courts consider whether the stay will lead to undue prejudice or tactical disadvantage, and they examine whether it will simplify the issues and streamline the trial. The decision to stay involves an extensive cost-benefit analysis for both the court itself and the litigants.

The Supreme Court has often emphasized the importance of judicial discretion in deciding whether a stay is warranted. As Justice Cardozo wrote for the Court in 1936, the decision to stay “calls for the exercise of judgment, which must weigh competing interests and maintain an even balance.” Justice Cardozo warned that the judiciary “must be on our guard against depriving the processes of justice of their suppleness of adaptation to varying conditions.” In the patent law context, Justice Frankfurter, writing for the Court in 1952, declared: “Necessarily, an ample degree of discretion, appropriate for disciplined and experienced judges, must be left to the lower courts.”

The problem with the House and the Senate bills is that they take away this important “exercise of judgment” and threaten to remove much-needed flexibility and adaptation from the litigation process. The customer stay provisions take the “ample degree of discretion,” which is “appropriate for disciplined and experienced judges,” and place it into the hands of the alleged infringers. Infringers are not likely to be motivated by important notions of efficiency or justice; they’re likely to be motivated by self-interested gamesmanship of the system to their own advantage.

The proponents of the customer stay provisions claim that they’re necessary to help the little guy, but the provisions in both bills just aren’t drafted that way. Instead, they’re drafted to tie the hands of judges in countless cases that have nothing to do with small-time retailers and end users. The courts already have the power to stay proceedings when the equities tip in that direction, but these bills disrupt the judicial discretion on which the patent system has long depended. Customer stays certainly have their place, and that place is in the hands of judges who can take into account the totality of the circumstances. Judges should not be forced to make the important decision of whether to grant a stay based on overbroad and inflexible statutory language that goes far beyond its stated purpose.

Categories
Commercialization High Tech Industry Innovation Intellectual Property Theory Internet Inventors Law and Economics Patent Law Patent Licensing Patent Theory Software Patent Uncategorized

The Commercial Value of Software Patents in the High-Tech Industry

In CPIP’s newest policy brief, Professor Saurabh Vishnubhakat examines the important role patents play in commercializing software innovation and supporting technology markets. He explains how a proper understanding of this commercial role requires a broader view of patents in software innovation than the all-too-common focus on a small handful of litigated patents and legal questions of patentability and patent quality. He concludes that the flexibility and efficiency of the patent system foster the emergence of new markets for the exchange of technology and knowledge.

Read the full brief: The Commercial Value of Software Patents in the High-Tech Industry

Categories
Commercialization Copyright Copyright Licensing Copyright Theory History of Intellectual Property Innovation Intellectual Property Theory Internet Law and Economics Uncategorized

Copyright’s Republic: Promoting an Independent and Professional Class of Creators and Creative Businesses

By Mark Schultz and Devlin Hartline

The following essay is the first in a series of CPIP essays celebrating the 225th anniversary of the first U.S. Copyright Act by recognizing the rich purposes, benefits, and contributions of copyright. This series of essays will be published together in a forthcoming collection entitled “Copyright’s Republic: Copyright for the Last and the Next 225 Years.”

The current academic and policy discussion of copyright focuses on balancing the gross economic benefits and harms of copyright. But a more complete understanding of copyright accounts for both the needs and rights of individuals and the public good. Copyright is important because it helps creators make an independent living and allows them to pursue and perfect their craft. In short, it enables a professional class of creators.

The creative industries benefit from this independence too. They must find a market, but they are not beholden to anybody but their customers and shareholders in choosing what creative works to promote. This enables a richly diverse cultural landscape, with movie studios, television channels, record labels, radio stations, and publishers specializing in vastly different types of material.

To understand the importance of a professional class of creators, it’s helpful to consider the paradoxical role of money in creativity. While some are quick to say, “It’s not about the money,” in some essential ways, it really is about the money. Certainly, for some creators, the proposition is straightforward. As the eighteenth-century writer Samuel Johnson famously and cynically proclaimed: “No man but a blockhead ever wrote, except for money.” For countless others, however, creative endeavors hardly bring riches. And even commercial creators frequently leave money on the table rather than do something they find distasteful. Nevertheless, money is important.

This seeming paradox can be resolved by considering the role of money overall in creative work. We can take creators at their word: There are many nonmonetary factors that influence and incentivize creativity, such as love, independence, curiosity, and passion. In fact, thinking about the money can hurt the creative process. But while creators may not “do it for the money,” the money is what makes it possible for them to spend their time honing skills and creating high-quality works. The money endows a professional class of creators and the various creative industries and channel partners that support them. This vibrant ecosystem – empowered by copyright – generates a rich diversity of cultural works.

Creative individuals, like every other human being, need to eat, and, like most of us, they need to work to eat. The real question is, what kind of work are they able to do? Some notable creators have worked in their spare time, but many of the greats thrive most when they can merge their avocation with their vocation. They get better at creating when their work is creation.

There is, of course, more than one way to fund professional creation – patronage, tenured university teaching, and commercial markets founded on copyright are notable ways to do it. One of the virtues of a commercial property rights system is that it fosters creative independence.

The independence afforded by a commercial system based on property rights is highlighted by contrasting it with the greater constraints of other systems. Before the first modern copyright statute passed nearly three centuries ago, many creators depended heavily on the patronage system. Wealthy patrons funded creative efforts by either commissioning works directly or employing creators in staff positions where they were given time to develop new works. To be sure, many great works were produced under this system – the musical compositions of Johann Sebastian Bach and Joseph Haydn stand as testimony to this fact.

However, the economic benefits of patronage often came at the expense of the personal autonomy and integrity of these creators. As the old adage goes, “he who pays the piper calls the tune.” Sometimes these constraints were quite direct. When Johann Sebastian Bach attempted to leave the service of one of his patrons to go work for another, the former patron refused to accept his resignation and briefly had him arrested.

More important, patrons had tremendous say in the work of composers. They could decide what the composers wrote and when. They might not appreciate the value of the works created for them. For example, Bach’s Brandenburg Concertos are now recognized as works of genius. Unfortunately, the noble to whom they were dedicated, Christian Ludwig, the Margrave of Brandenburg, was apparently indifferent. The score sat on his shelf, unperformed and unappreciated, for decades. The concertos were not published until nearly 130 years later, after being rediscovered in an archive.

For these reasons, many composers dreamed of financial independence. For example, the composer Joseph Haydn once celebrated leaving behind the patronage of the Esterhazys, which was rather secure and relatively undemanding. Haydn moved to London, where he became the eighteenth-century equivalent of a successful rock star – in demand for his services and making lots of money. London had a private market – not yet so much supported by copyright and publishing as by private commissions and paid performances. In any event, Haydn prospered. In fact, at one point he wrote letters urging his friend Mozart to join him in London as soon as possible, unabashedly rhapsodizing over the money to be made there.

Still, he was now on his own, earning his own pay rather than being kept by a patron. For Haydn, artistic independence trumped economic security:

How sweet this bit of freedom really is! I had a kind Prince, but sometimes I was forced to be dependent on base souls. I often sighed for release, and now I have it in some measure. I appreciate the good sides of all this, too, though my mind is burdened with far more work. The realization that I am no bondservant makes ample amend for all my toils.

Haydn, Letter to Maria Anna von Genzinger, September 17, 1791

The modern copyright system, beginning with the English Statute of Anne in the early eighteenth century, freed creators from the restrictive patronage system. Like patronage, copyright offered creators the financial support they needed so that they could devote themselves to their craft. Unlike patronage, however, it gave them much-needed personal autonomy and artistic independence.

Beethoven, a young contemporary and student of Haydn working at the end of the patronage era, was able to support himself. His facility at performing his own difficult work helped him make a living. But he also used and supported copyright. He would often publish his works first in England to ensure that they received copyright there. He also lobbied the German states for a copyright law.

For Beethoven, too, money was important for the artistic independence it provided:

I do not aim at being a musical usurer, as you think, who composes only in order to get rich, by no means, but I love a life of independence and cannot achieve this without a little fortune, and then the honorarium must, like everything else that he undertakes, bring some honor to the artist.

Ludwig van Beethoven, Letter to publisher, August 21, 1810

The era of patronage was long ago, but human nature has not changed in the decades and centuries since. Creators still face the dilemma of trying to support themselves while maintaining independence. Every economic arrangement imposes some constraints, but some impose more than others.

The career of photographer Michael Stern provides a good example of how modern copyright enables individual creators to support themselves while maintaining their independence. Stern is a hard-working creative entrepreneur – one 30-minute video he made required 103,937 photographs and 900 hours to produce. Stern doesn’t depend on subsidies or grants; rather, he values the independence he gets from being self-employed. He explains:

“The real benefit of being a self-employed photographer,” he says, “is that I can move through life on my terms and do what I want in the way I want to do it. That freedom drives me.” But, it’s not for everybody, he warns. “Nobody loves you like your mother, and even sometimes not even her. So ya gotta do it for yourself. If you don’t, you won’t have the drive needed to reach your goals.”

Instead of creating works that conform to the limited demands of their patrons, creators supply their works to the marketplace, where the demands of consumers are far more diverse. This proves beneficial to creators and society alike. Creators from all walks of life and with all sorts of interests can find the market that will support them, and this fosters a rich cultural landscape encompassing multiple political and social views.

Copyright fulfills its constitutional purpose of promoting progress by incentivizing creators through the grant of marketable rights to their works, but these rights do more than simply lure creators with the hope of economic benefits. Just as crucially, these rights endow creators with substantial personal autonomy while respecting their individuality and dignity. This fosters a creative environment conducive to the creation of high-quality works with enduring social value.

Copyright is a market-based system that supports a professional class of creators who rely on the value of their rights in order to make a living. These marketable rights have also given rise to entire creative industries that lend critical support to professional creators, and through the division of labor these industries enable professional creators to accomplish great feats that would be impossible if they worked alone.

The numbers testify to copyright’s success in helping to create a professional class of creators in the United States. As a recent report on the creative industries enabled by copyright found, 2.9 million people are employed by over 700,000 businesses in the United States involved in the creation or distribution of the arts. Those businesses account for 3.9 percent of all businesses, and the people they employ account for 1.9 percent of all employees.

This creative ecosystem enables professional creators to produce the sorts of high-quality works that society values most. The popularity of these works in the marketplace makes them commercially valuable, and this in turn compensates professional creators and the creative industries that support them for creating the works that society finds so valuable.

This virtuous circle benefits creators and the public alike – just as the Framers had envisioned it. Copyright is not only doing its job, it is doing it well. The number of works available in the market is incredible – certainly more than anyone could ever possibly consume. And the diversity of voices able to connect with audiences in the marketplace makes our cultural lives all the more fulfilling.

Categories
Commercialization Copyright Copyright Licensing Copyright Theory History of Intellectual Property Innovation Intellectual Property Theory Internet Law and Economics Uncategorized

Copyright’s Republic: Copyright for the Last and the Next 225 Years

By Mark Schultz and Devlin Hartline

This past Sunday marked the 225th anniversary of the first U.S. Copyright Act. As we move well into the twenty-first century, a claim that copyright no longer “works” in the “digital age” has become commonplace – so commonplace, in fact, that it’s arguably the dominant cliché in modern copyright discussions. Like many clichés, it contains a tiny grain of truth wrapped in a huge ball of glib, unhelpful, and even harmful generalizations.

Before one can understand what the future of copyright and the creative industries could and should look like, one should first appreciate what the first 225 years of copyright has given to the United States. Copyright laid the foundation for, and continues to support, the largest, most enduring, and most influential commercial culture in human history. That commercial culture is uniquely democratic, progressive, and accessible to both creators and audiences.

Could the Copyright Act profitably be revised? In theory, perhaps, and thus there is a grain of truth in the clichés about modernizing copyright. The 1976 Copyright Act and many of its subsequent amendments are overburdened with detailed regulatory provisions contingent on outdated assumptions about technology and business. They also sometimes embody political compromises that reflect circumstances that have long since passed. However, we should pause before hastening to replace yesterday’s contingencies with those of today. And we should also pause – indefinitely – before overturning the entire enterprise on the grandiose assumption that the Internet has changed everything.

Before we can understand what the future of the creative industries could and should look like, we need to appreciate what we have achieved and how we achieved it. The American creative industries are everything the Founding generation that drafted the 1790 Copyright Act could have dreamed – and so much more. Through its press, news media, and publishing industries, the U.S. has perpetuated the spirit of the Enlightenment’s Republic of Letters, with lively, reasoned, and sustained public discussions and debates about values, science, and politics.

The U.S. has produced a creative industry that enlightens and edifies while also diverting and distracting billions of people with its cultural products. This vast commercial creative marketplace allows professional writers, artists, musicians, actors, filmmakers, game designers, and others to make a living doing something that fulfills them and their audience. The U.S. has achieved much based on the twin foundations of free expression and copyright, securing the right to express oneself freely while securing the fruits of the labors of those who craft expressions.

The past thus has much to teach the future, while inevitably yielding to change and progress. Copyright should continue to secure the many values it supports, while being flexible enough to support innovation in creativity and business models.

On this occasion of the 225th anniversary of the first U.S. Copyright Act, the Center for the Protection of Intellectual Property (CPIP) is recognizing the essential contribution of copyright and commercial culture to the United States. To that end, CPIP will be publishing a series of essays highlighting the fact that, contrary to the facile narratives about copyright that dominate modern discussions, copyright isn’t simply a law designed to incentivize the creation of more creative stuff. It has much richer purposes and benefits. Copyright:

  • Supports a professional class of creators.
  • Enables a commercial culture that contributes to human flourishing.
  • Serves as a platform for innovation in both the arts and sciences.
  • Promotes a free republic.

U.S. copyright law has achieved these lofty goals for the last 225 years, and it will continue to do so—but only if we let it and help it. In many important ways, U.S. culture and politics have been so shaped by the commercial culture created by copyright that the nation rightly can be called Copyright’s Republic.

Part I: Copyright Promotes an Independent and Professional Class of Creators and Creative Businesses