
The One Year Anniversary: The Aftermath of #AliceStorm

The following post, by Robert R. Sachs, first appeared on the Bilski Blog, and it is reposted here with permission.

It’s been one year since the Supreme Court’s decision in Alice Corp. v. CLS Bank. On its face the opinion was relatively conservative, cautioning courts to “tread carefully” before invalidating patents and emphasizing that the primary concern was to avoid preemption of “fundamental building blocks” of human ingenuity. The Court specifically avoided any suggestion that software or business methods were presumptively invalid. But those concerns seem to have gone unheeded. The Court’s attempt to sidestep the tricky problem of defining the boundary of an exception to patent eligibility—“we need not labor to delimit the precise contours of the ‘abstract ideas’ category in this case”—has turned into the very mechanism that is quickly “swallow[ing] all of patent law.” The federal courts, the Patent Trial and Appeal Board, and the USPTO are using the very lack of a definition to liberally expand the contours of abstract ideas to cover everything from computer animation to database architecture to digital photograph management, and even to safety systems for automobiles.

Let’s look at the numbers to get an accurate picture of the implications of the Supreme Court’s decision. My analysis is a data-driven attempt to assess where things stand one year after Alice. Only by understanding how the decision is actually playing out in the theater of innovation can we project and position ourselves for what the future holds.

Alice at Court

[Table: § 101 decisions in the federal courts]

As of June 19, 2015, there have been 106 Federal Circuit and district court decisions on § 101 grounds, with 76 decisions invalidating the patents at issue in whole or in part. In terms of patents and claims, 65% of the challenged patents have been found invalid, along with 76.2% of the challenged claims.

The success rate of motions on the pleadings (including motions to dismiss and judgments on the pleadings) is extremely impressive: 67% of defense motions have been granted, invalidating 54% of asserted patents. The Supreme Court has never ruled that the presumption of validity does not apply to § 101—there is only the Court’s use of the originally metaphorical notion that eligibility is a “threshold” condition. Given that, and given the general rule that to survive a motion to dismiss the patentee historically needed only to show a plausible basis that the complaint states a cause of action—here, a plausible basis that the patent claim is not directed to an abstract idea, law of nature, or natural phenomenon—one would be forgiven for thinking, as did former Chief Judge Rader in Ultramercial, LLC v. Hulu, LLC, that a “Rule 12(b)(6) dismissal for lack of eligible subject matter will be the exception, not the rule.” Apparently the rules change in the middle of the game.

Turning specifically to the Federal Circuit, the numbers are stark:

[Table: Federal Circuit § 101 decisions]

Of the 13 decisions, 11 are in software or e-commerce and only two are in biotech. The one case in which the court held for the patentee, DDR Holdings, LLC v. Hotels.com, L.P., appeared to offer a narrow avenue for patentees to avoid invalidation. However, only nine district court opinions have relied upon DDR to find patent eligibility, while over 30 court opinions have distinguished DDR as inapplicable. Even more interesting is the fact that in DDR the Federal Circuit essentially held that creating a website that copies the look and feel of another website is patent eligible. In Silicon Valley, that’s called phishing, and it’s not a technology in which most reputable companies invest.

Alice at the Office

Alice is having a similar impact on practitioners before the USPTO. In December 2014, the Office issued its Interim Guidance on Patent Subject Matter Eligibility, instructing patent examiners on how to apply the Alice, Mayo, and Myriad decisions, along with various Federal Circuit decisions, to claims during prosecution. Importantly, the Guidance noted that “the Supreme Court did not create a per se excluded category of subject matter, such as software or business methods, nor did it impose any special requirements for eligibility of software or business methods,” and it reminded examiners that “Courts tread carefully in scrutinizing such claims because at some level all inventions embody, use, reflect, rest upon, or apply a law of nature, natural phenomenon, or abstract idea.” Alas, most patent examiners are acting as if the applications before them are exceptions to these cautionary instructions.

With the assistance of Patent Advisor, I compiled a dataset of almost 300,000 office actions and notices of allowance, sampled in two-week periods during 2012, 2013, 2014, and early 2015, plus all actions during March, April, and May 2015, across all technology centers:

[Table: Number of actions sampled per period]

About 100,000 of the actions were notices of allowance, leaving about 200,000 office actions. Each office action was coded as to whether it included rejections under §§ 101, 102, and 103. For each office action, the art unit and examiner were identified as well, along with the status of the application (abandoned, pending, or patented) as of the date the data was obtained. I then analyzed the data for office action rejections based on § 101, allowance rates, and examiner rejection rates. Here’s what I found.

Percent of all Actions with § 101 Rejections

[Table: Percent of all actions with § 101 rejections, by Tech Center]

Here we have the percentage of all actions in each period that included a § 101 rejection, counting both office actions and notices of allowance in the total. The black line separates pre-Alice from post-Alice data. For example, in TC 1600, the biotech area, 6.81% of all actions issued in January 2012 (counting both office actions and notices of allowance) were office actions with § 101 rejections; by May 2015 that percentage had almost doubled, to 11.86% of actions.
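For readers who want to replicate this kind of breakdown, here is a minimal sketch of the computation. It assumes a hypothetical per-action export in which each row carries the sample period, tech center, art unit, examiner, application ID, final application status, action type, and 0/1 flags for §§ 101, 102, and 103 rejections; the column names below are my own, and Patent Advisor's actual schema will differ.

```python
import pandas as pd

# Hypothetical per-action export of the coded dataset: one row per action, with
# the sample period ("YYYY-MM"), tech_center, art_unit, examiner, application_id,
# status, action_type ("office_action" or "notice_of_allowance"), and 0/1 flags
# has_101, has_102, has_103 for the rejection types.
actions = pd.read_csv("office_actions.csv")

# Share of all actions in each period (office actions plus notices of allowance)
# that are office actions containing a § 101 rejection.
is_101_rejection = actions["action_type"].eq("office_action") & actions["has_101"].eq(1)
rejection_rate = (
    is_101_rejection
    .groupby([actions["tech_center"], actions["period"]])
    .mean()              # mean of a boolean series = fraction of actions
    .mul(100)
    .unstack("period")   # tech centers as rows, sample periods as columns
)
print(rejection_rate.round(2))
```

The same grouping works for the other rate tables below; only the grouping keys change.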

Overall, the data show that in 2012 subject matter rejections were concentrated in the computer-related Tech Centers (2100, 2400) and began declining thereafter, while escalating in biotechnology (TC 1600) and the so-called “business methods” Tech Center, TC 3600, following Mayo and Alice. Other technology centers, such as semiconductors and mechanical engineering, had low and essentially constant rejection rates. But that’s not because there are no software patents in those technology centers: you find plenty of software patents in these groups. Rather, my view is that examiners in these groups treat software patents as they do any other technology.

The rejection rates in Tech Center 3600, in the 30-40% range, are higher than in any other group, but they also mask what’s really going on, since TC 3600 covers more than business methods. Tech Center 3600 has nine work groups:

Percent of all Actions with § 101 Rejections in TC 3600 Work Groups

[Table: § 101 rejection rates in TC 3600 work groups]

In TC 3600, most of the work groups handle good old-fashioned machines and processes, such as transportation (3610), structures like chairs and ladders (3630), airplanes, agriculture, and weapons (3640), and wells and earth-moving equipment (3670). Three work groups handle e-commerce applications: 3620, 3680, and 3690. Here we see that these groups have significantly higher § 101 rejection rates than the rest of TC 3600. But let’s drill down further.

Each of work groups 3620, 3680, and 3690 has between five and ten individual art units that handle specific types of e-commerce technologies, but art units covering the same technology are not all under the same work group. For example, business-related cryptography is handled by both art units 3621 and 3685; healthcare and insurance is handled by art units 3626 and 3686; operations research is handled in 3623, 3624, 3682, and 3684. If we consolidate the data by technology type and then look at the rates of § 101 rejections, we get the following:

Percent of all Actions with § 101 Rejections in E-Commerce Art Units by Technology Type

[Table: § 101 rejection rates in e-commerce art units, by technology type]
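The consolidation step itself is simple to sketch, continuing the same hypothetical dataframe from above. The art-unit-to-technology mapping shown here is deliberately partial (only the pairings mentioned in the text); the real assignment covers every art unit in work groups 3620, 3680, and 3690.

```python
# Illustrative, partial mapping of e-commerce art units to technology types,
# based only on the pairings described above.
TECH_TYPE = {
    3621: "cryptography",          3685: "cryptography",
    3626: "healthcare/insurance",  3686: "healthcare/insurance",
    3623: "operations research",   3624: "operations research",
    3682: "operations research",   3684: "operations research",
}

# Keep only actions from the mapped e-commerce art units, tag each with its
# technology type, and recompute the § 101 rejection rate per period.
ecom = actions[actions["art_unit"].isin(TECH_TYPE)].copy()
ecom["tech_type"] = ecom["art_unit"].map(TECH_TYPE)

is_101 = ecom["action_type"].eq("office_action") & ecom["has_101"].eq(1)
rate_by_type = (
    is_101.groupby([ecom["tech_type"], ecom["period"]])
    .mean()
    .mul(100)
    .unstack("period")
)
print(rate_by_type.round(1))
```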

What’s going on? After Bilski in 2010, § 101 rejections in these units were running between 17% and 50%. Not great, but tolerable, since those rejections were mostly formal and were overcome with amendments adding hardware elements (“processor,” “memory”) to method claims or inserting “non-transitory” into Beauregard claims.

But after Alice everything changed, and § 101 rejections started issuing like paper money in a hyperinflationary economy. If your perception as a patent prosecutor was that every application was getting rejected under § 101, this explains your pain. Here’s another view of the data, in terms of the actual number of § 101 rejections per sample period:

Number of Office Actions with § 101 Rejections in E-Commerce Art Units by Technology Type

[Table: Number of office actions with § 101 rejections in e-commerce art units, by technology type]

Notice here that the number of office actions in March 2015 fell dramatically, and then in April the floodgates opened and hundreds of actions issued with § 101 rejections. This is consistent with the Office’s statements in January 2015 that it was training examiners on the 2014 Interim Guidance, so office actions were being held until the training was completed. Apparently the training skipped the part about there being no per se exclusion of business methods.

Now let’s consider notice of allowance rates, first with respect to all Tech Centers.

Percent of Actions that Are Notices of Allowance

[Table: Notice of allowance rates, all Tech Centers]

This data reflects, of all the actions issued in a given period, the percentage that were notices of allowance. (Note that, contrary to the preceding tables, red cells indicate low percentages and green cells high ones, since notices of allowance are good things, not bad things.) The numbers look good, with a generally increasing trend over time.

Now consider what’s happening in TC 3600’s business methods art units.

Percent of Actions that Are Notices of Allowance in Business Methods

[Table: Notice of allowance rates in business methods art units]

Now the picture is quite different. The rate of NOAs drops dramatically after Alice, especially in finance and banking and in operations research. If it seemed that you were no longer getting NOAs, this is why. The zero percent rate in March 2015 is a result of the Office holding up actions and NOAs pending the Interim Guidance training mentioned above.

Patents issued in the business methods art units are typically classified in Class 705 for “Data Processing.” I identified all patents with a primary classification in Class 705 issued since January 2011, on a month-by-month basis, to identify year-over-year trends. Again, the black line separates pre-Alice from post-Alice data.

[Table: Class 705 patents issued per month]

This table shows a precipitous decline in the number of business method patents issued following Alice, especially year over year. The lag between the June 2014 Alice decision and the drop-off in October 2014 is an artifact of the delay between allowance and issuance, as well as of the USPTO’s unprecedented decision to withdraw an unknown number of applications for which the issue fee had already been paid and to issue § 101 rejections. It is also an interesting artifact that the number of Class 705 patents issued peaked in the month after Alice: remember that those patents were allowed at least three months, and as much as a year, before the Alice decision; it simply took that long for them to be printed as patents.
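The month-by-month counts behind this table can be reproduced from any bulk dataset of issued patents. Here is a sketch under the assumption of a hypothetical file with an issue date and a primary classification column (again, the file and column names are my own).

```python
import pandas as pd

# Hypothetical bulk export of issued patents with issue dates and primary
# US classification.
patents = pd.read_csv("issued_patents.csv", parse_dates=["issue_date"])

# Patents whose primary classification is Class 705, counted per issue month,
# with a year-over-year comparison of the monthly totals.
cls705 = patents[patents["primary_class"] == 705]
monthly = cls705.set_index("issue_date").resample("MS").size()
yoy = monthly.pct_change(12).mul(100)   # percent change vs. the same month a year earlier

print(pd.DataFrame({"issued": monthly, "yoy_pct_change": yoy.round(1)}))
```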

Next, we’ll consider abandonment rates on a comparative basis, looking at the percentage of ultimately abandoned applications that had, or had not, received a § 101 rejection. We’ll compare the January 2012 data with the July 2014 data. Again, consider the entire patent corps first:

Percent of Abandoned Applications with Prior § 101 Rejection

[Table: Abandoned applications with prior § 101 rejection, all Tech Centers]

Here we see that, of the applications that were abandoned during the respective sample periods, the vast majority did not have a prior § 101 rejection. Only in TC 3600 did the majority shift after Alice, with 51.83% of the applications that received § 101 rejections in July 2014 subsequently abandoned by May 31, 2015. Again, let’s drill down into the business method art units in TC 3600:

Percent of Abandoned Applications with Prior § 101 Rejection

[Table: Abandoned applications with prior § 101 rejection, business methods art units]

First, prior to Alice, abandonments in the business method units appeared to result more frequently from something other than § 101 rejections, typically prior art rejections. This is shown by the fact that the January 2012 “No” column (no prior § 101 rejection) is greater than the January 2012 “Yes” column. After Alice, there is a huge shift: the vast majority of abandoned applications, upwards of 90%, had received a § 101 rejection, as shown by the July 2014 “Yes” column. That is applicants essentially giving up over what only a few years ago was a relatively minor hurdle. That’s what happens when you change the rules in the middle of the game. Second, there is also significant differential behavior in the business method areas as compared to the rest of the technology centers after Alice.
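A sketch of how that conditional breakdown can be computed from the same hypothetical action-level data, rolled up to one row per application (the application ID and status fields are the assumed columns described in the methodology above):

```python
# Roll the action-level data up to one row per application: whether it was
# ultimately abandoned, whether any of its office actions carried a § 101
# rejection, and its tech center.
apps = actions.groupby("application_id").agg(
    abandoned=("status", lambda s: s.eq("abandoned").any()),
    had_101=("has_101", "max"),
    tech_center=("tech_center", "first"),
)

# Of the abandoned applications, what share had a prior § 101 rejection?
abandoned = apps[apps["abandoned"]]
share_with_101 = abandoned.groupby("tech_center")["had_101"].mean().mul(100)
print(share_with_101.round(2))
```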

Here’s my personal favorite.

Rates of Examiner § 101 Rejections in TC 3600

[Table: Examiner § 101 rejection rates in TC 3600 business methods art units]

This table shows the number of examiners in the business method art units, grouped by the percentage of their applications in which they issued § 101 rejections after Alice. The first row shows that, during the sampled periods since Alice, 58 business methods examiners issued § 101 rejections in 100% of their applications, for a total of 443 applications. Twenty examiners issued § 101 rejections in between 90% and 99% of their cases, covering 370 applications. In short, 199 examiners issued § 101 rejections more than 70% of the time, covering 3,304 applications, or about 70.6% of all applications. This is not “treading carefully.”
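That distribution can be approximated by computing a per-examiner § 101 rejection rate over the post-Alice applications and bucketing the results. Again, this is only a sketch over the hypothetical dataset, with the sample period assumed to be stored as a "YYYY-MM" string.

```python
# Post-Alice slice (the decision issued in June 2014).
post_alice = actions[actions["period"] >= "2014-07"]

# For each examiner: the share of his or her applications with at least one
# § 101 rejection, plus the number of applications examined.
per_app = post_alice.groupby(["examiner", "application_id"])["has_101"].max()
per_examiner = per_app.groupby(level="examiner").agg(rate="mean", apps="size")
per_examiner["rate"] *= 100

# Bucket examiners into the ranges shown in the table.
def bucket(rate):
    if rate == 100:
        return "100%"
    if rate >= 90:
        return "90-99%"
    if rate >= 70:
        return "70-89%"
    return "under 70%"

per_examiner["bucket"] = per_examiner["rate"].apply(bucket)
summary = per_examiner.groupby("bucket").agg(
    examiners=("rate", "size"), applications=("apps", "sum")
)
print(summary)
```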

We find similar, though less dramatic, trends and variations in TC 1600, which handles biotechnology, pharma, and chemistry.

Percent of all Actions with § 101 Rejections in TC 1600 Work Groups

[Table: § 101 rejection rates in TC 1600 work groups]

The red line separates pre-Mayo/Myriad data from post-Mayo/Myriad data, and the increase in the post period is significant. Here too, the work group averages mask more significant rejection rates in specific technology areas, with the rejection rate in microbiology first jumping to 34.6% post-Mayo and then steadily climbing to the current 53.2%.

Percent of all Actions with § 101 Rejections in TC 1600 by Technology

[Table: § 101 rejection rates in TC 1600, by technology type]

This table breaks down the work groups into technology types, sorted by average rejection rate over the past four months. Following Alice, we see a significant increase in eligibility rejections in bioinformatics-related applications: inventions that rely on the analysis and identification of biological and genetic information and that are frequently used in diagnostics and drug discovery. This is especially disconcerting because bioinformatics is critical to the development of new diagnostics, therapies, and drugs.

Note as well the enormous spike in rejections for plant-related applications, from 0% between July 2014 and April 2015 to 50% in May 2015. This is likely again a result of the USPTO’s Interim Guidance, which essentially instructed examiners to reject any claim that includes any form of a natural product.

At least pesticides and herbicides are safe from Alice, since we definitely need more of those. The irony is that the more pesticides and herbicides that come to market, the more we need bioinformatics inventions to identify and treat conditions potentially resulting from these products.

Alice at the Board

The Patent Trial and Appeal Board has been even more hostile to software and business method patents under the Covered Business Method review program:

                                   Total Petitions    Petitions Granted    Percent Invalid
PTAB CBM Institution on § 101             72                  64                 89%
PTAB Final Decisions on § 101             27                  27                100%

Covered Business Method review is available for patents that claim “a method, apparatus, or operation used in the practice, administration, or management of a financial product or service.” The Board takes a very broad view of what constitutes a financial product or service: if the patent specification happens to mention that the invention may be used in a financial context such as banking, finance, shopping or the like, then that’s sufficient. The Board has found CBM standing in 91% of petitions, and instituted trial in 89% of petitions asserting § 101 invalidity. Once a CBM trial has been instituted, the odds are heavily in the petitioner’s favor: of the 27 final CBM decisions addressing § 101, the Board has found for the petitioner 100% of the time.

Finally, we look at the Board’s activity in handling ex parte appeals from § 101 rejections for the period of March 1, 2015 to May 30, 2015:

  • 32 ex parte decisions on § 101, with 15 in TC 3600
  • 28 affirmances overall, with 13 in TC 3600
  • 2 reversals on § 101, both in TC 3600
  • 4 new grounds of rejection under § 101

Following suit with how it is handling CBMs, the Board is also heavily backing examiners by affirming § 101 rejections. More disconcerting is the trend of new grounds of rejection under § 101. While only four were issued in this period, there have been several dozen since Alice. In this situation, the applicant has appealed, for example, a § 103 rejection. The Board can reverse the examiner on that rejection, but then sua sponte reject all of the claims under § 101. What are the odds that the examiner will ever allow the case? Close to zero. What are the odds that an appeal back to the Board on the examiner’s next § 101 rejection will succeed? If the Board’s 100% invalidation rate in final CBM decisions on § 101 is any indication, you know the answer.

Conclusions

Looking at the overall context of the Alice decision, it’s my view that the Supreme Court did not intend this landslide effect. While the Justices were certainly aware of the concerns over patent trolls and bad patents, they framed their decision not as a broadside against these perceived evils, but as a simple extension of Bilski and the question of whether computer implementation of an abstract idea imparts eligibility. At oral argument, the members of the Court specifically asked if they needed to rule on the eligibility of software, and they were told by CLS and the Solicitor General that they did not. To the extent that there is broad language in the opinion, it is the cautionary instruction to the courts to avoid disemboweling the patent law from the inside, and the emphasis on preemption of fundamental ideas—not just any ideas—as the core concern of the exclusionary rule. The evidence above shows that these guideposts have been rushed past quite quickly on the way to some goal other than the preservation of intellectual property rights.

If the present trends hold, and I see no reason to suggest that they will not, we will continue to see the zone of patent eligibility curtailed in software (not to mention biotechnology after Mayo and Myriad). Indeed, the more advanced the software technology—the more it takes over the cognitive work once done exclusively by humans, the more seamless it becomes in the fabric of our daily lives—the less patent eligible it is deemed to be by the courts and the USPTO. What technologies will not be funded, what discoveries will not be made, what products will never come to market, we do not know. What we do know is this: there is only one law that governs human affairs, and that is the law of unintended consequences.


Federal Circuit Threatens Innovation: Dissecting the Ariosa v. Sequenom Opinion

By Patent Publius

Earlier this month, the Federal Circuit issued its opinion in Ariosa v. Sequenom, a closely-watched biotechnology case with significant repercussions for patent-eligibility analysis generally. Unfortunately, the Federal Circuit misapplies the Supreme Court’s analytical framework from Mayo v. Prometheus, striking down Sequenom’s important innovation for the prenatal diagnosis of fetal abnormalities. The shame here is that the Mayo opinion itself was unnecessarily broad, and the Federal Circuit has now interpreted it to be even broader.

Section 101 of the Patent Act provides that “[w]hoever invents or discovers any new and useful process, machine, manufacture, or composition of matter . . . may obtain a patent therefor,” but there are judicial exceptions for “laws of nature, natural phenomenon, and abstract ideas.” Those exceptions are relevant here, where the Federal Circuit considers whether the claimed method of using cell-free fetal DNA (“cffDNA”) to make diagnoses is patentable subject matter.

In the Mayo opinion, the Supreme Court established a two-step analysis for determining whether method claims merely “set forth laws of nature” or instead apply those natural laws with “additional features” so as to become patent-eligible processes. The first step looks at whether the claims are directed to a patent-ineligible law of nature, and the second step looks at whether additional elements “transform the nature of the claim” into something that amounts to more than a claim on the law of nature itself.

Applying Mayo to the case at hand, the Federal Circuit’s analysis of the first step is perfunctory:

In this case, the asserted claims of the ‘540 patent are directed to a multistep method that starts with cffDNA taken from a sample of maternal plasma or serum—a naturally occurring non-cellular fetal DNA that circulates freely in the blood stream of a pregnant woman. . . . It is undisputed that the existence of cffDNA in maternal blood is a natural phenomenon. . . . The method ends with paternally inherited cffDNA, which is also a natural phenomenon. The method therefore begins and ends with a natural phenomenon. Thus, the claims are directed to matter that is naturally occurring.

The Federal Circuit’s conclusion that the method “begins and ends with a natural phenomenon” tells us very little about how this principle is to be applied generally. Certainly, the method begins with a biological sample of maternal plasma or serum that contains paternally-inherited cffDNA, and it makes sense to say that it begins with a natural phenomenon. Of course, everything begins with a natural phenomenon, so this is hardly instructive.

But it’s inaccurate to say that the method simply ends with cffDNA. The method takes the minuscule amount of cffDNA found in the sample and exponentially amplifies it to detectable levels. The resulting substance, unlike the beginning sample, gains significant new utility from a diagnostic perspective. What comes out of the process is an artificially enriched substance that, unlike the maternal plasma or serum fed into the process, can be used for many diagnostic purposes. That is, the method ends with a substance that is anything but a natural phenomenon.

Applying the second step of the Mayo framework, the Federal Circuit finds that Sequenom’s claimed methods are not significantly transformative:

Like the patentee in Mayo, Sequenom contends that the claimed methods are patent eligible applications of a natural phenomenon, specifically a method for detecting paternally inherited cffDNA. Using methods like PCR to amplify and detect cffDNA was well-understood, routine, and conventional activity in 1997. The method at issue here amounts to a general instruction to doctors to apply routine, conventional techniques when seeking to detect cffDNA. Because the method steps were well-understood, conventional and routine, the method of detecting paternally inherited cffDNA is not new and useful. The only subject matter new and useful as of the date of the application was the discovery of the presence of cffDNA in maternal plasma or serum.

The last sentence is the most perplexing: The “discovery of the presence of cffDNA in maternal plasma or serum” is what sets Sequenom’s method apart from that which was “well-understood, routine, and conventional activity in 1997.” The problem here stems from the Federal Circuit’s failure to consider the claimed method as a whole, as it purportedly sets out to do: “[W]e next consider the elements of each claim both individually and ‘as an ordered combination’ to determine whether additional elements ‘transform the nature of the claim’ into a patent-eligible application.”

Undoubtedly, some parts of Sequenom’s method were already well-known. No one denies, for example, that some of the techniques involved in amplifying and then detecting cffDNA were, in their general features, already conventional activity in the field (e.g., PCR). What makes the Sequenom method patentable is the sum of its parts, that is, the method as a whole that the Federal Circuit acknowledges to contain the new and useful discovery of cffDNA in the maternal plasma or serum.

This is the principal feature of Sequenom’s claimed invention and its central argument throughout the litigation. Yet, the Federal Circuit relegates it to one of “Sequenom’s remaining arguments” and addresses it in a brief paragraph near the end of the opinion, where it inexplicably claims: “This argument implies that the inventive concept lies in the discovery of cffDNA in plasma or serum. Even if so, this is not the invention claimed by the ’540 patent.” On the contrary, this discovery is anything but conventional, and the method as a whole transforms a natural phenomenon into something both artificial and patentable.

Overbroad (and Dangerous) Principles

The overbreadth of the Federal Circuit’s analysis threatens diagnostic methods across the board. If a method of detecting a natural phenomenon is always “directed to” that natural phenomenon, as the Federal Circuit suggests, then all such methods are prima facie patent-ineligible under the first step of the Mayo framework and must fight the uphill battle under its second step. This is particularly troubling since virtually all diagnostic tests detect natural phenomena. Moreover, the Federal Circuit’s application of the second step of the Mayo framework looks at each part of the method individually, ignoring the claimed method as a whole.

Not only is this principle breathtakingly broad in the damage it could cause to the diagnostics industry, it is neither required by, nor even consistent with, the controlling case law. Only claims to natural phenomena are per se patent-ineligible; however, applications of natural phenomena are generally patentable. Detecting a natural phenomenon is not the same thing as the phenomenon itself. It is instead a specific application of that phenomenon. While the Federal Circuit states that applications of natural phenomena are patent-eligible, it quickly proceeds to categorically suggest a principle under which all diagnostic inventions may have one foot in the Section 101 grave.

Another overly-broad principle from the Federal Circuit opinion comes from this statement: “For process claims that encompass natural phenomenon, the process steps are the additional features that must be new and useful.” This may at first seem obvious and uncontroversial, but in the context of the rest of the opinion, it proves quite problematic. The Federal Circuit cites Parker v. Flook as support: “The process itself, not merely the mathematical algorithm, must be new and useful.” But note the subtle distinction between the two quotes. The Supreme Court discussed the “process itself,” while the Federal Circuit discusses the “process steps.”

This distinction has two important effects. First, it is one of many signals in the opinion that demonstrates the Federal Circuit’s improper dissection of the claimed method into its component parts. Rather than consider whether the “process itself” is “new and useful,” as the Flook opinion had done, the Federal Circuit analyzes each step individually. There’s no consideration of how the steps integrate into the process as a whole, and there’s no mention of whether that entire process claims something other than the natural phenomenon itself.

Second, the Federal Circuit looks at each step in a very general way and ignores the details of the steps that confer patent eligibility. For example, the opinion spends much time discussing how routine the PCR method was at the time of filing. But Sequenom never claimed the PCR method itself. The Federal Circuit fails to address Sequenom’s central argument: The claimed method is a new process of detecting cffDNA by devising a novel sample source from which to extract it, namely, maternal plasma or serum. The application and adaptation of known techniques in this inventive way to a newly-discovered sample source is not conventional.

Finally, the most problematic and new principle that may emerge from this opinion is a subtle, yet very significant, extension of Mayo to invalidate claims directed to routine and conventional applications of natural laws. Mayo teaches that the mere addition of what is purely routine and conventional at the time of filing cannot save a claim directed to a law of nature: “In particular, the steps in the claimed processes (apart from the natural laws themselves) involve well-understood, routine, conventional activity previously engaged in by researchers in the field.”

The Federal Circuit appears to exclude from the patent system a routine application of a law of nature, rather than, as Mayo requires, a law of nature to which merely routine activities have been appended. That is, if one skilled in the art could, after being informed of a newly-discovered law of nature, use routine skill to arrive at the claimed invention, then that claimed invention may be invalidated under the Federal Circuit’s reasoning.

This is contrary to Mayo, and it could conceivably invalidate huge swaths of meritorious inventions. Once the principles underlying a new method are known, application of those principles to devise that method will very often be obvious. The Supreme Court has been very consistent in saying that applications of laws of nature are patent-eligible, including those applications that would have been obvious in view of newly-discovered laws of nature. It is a subtle, but important, point to recognize that Mayo did not say the opposite, as the Federal Circuit now interprets it.

The Preemption Question

One potential bright spot in the Federal Circuit’s opinion is its treatment of preemption. Instead of being a test for patent eligibility, preemption is properly understood as being solely a policy underlying eligibility exclusions. It can at most serve as an after-the-fact check on whether an already-reached conclusion of eligibility is consistent with this policy. The Federal Circuit here mostly validates this position:

The Supreme Court has made clear that the principle of preemption is the basis for the judicial exceptions to patentability. Alice, 134 S. Ct. at 2354 (“We have described the concern that drives this exclusionary principle as one of pre-emption”). For this reason, questions on preemption are inherent in and resolved by the § 101 analysis. . . . Where a patent’s claims are deemed only to disclose patent ineligible subject matter under the Mayo framework, as they are in this case, preemption concerns are fully addressed and made moot.

This may ultimately be a hollow victory, however. The Federal Circuit also says: “While preemption may signal patent ineligible subject matter, the absence of complete preemption does not demonstrate patent eligibility.” The problem here is that it is impossible to ever show complete preemption because it is impossible to know at the time of filing whether something outside the claims could also be conceived. Inventions are, by definition, unforeseeable.

Moreover, allowing anything less than complete preemption to be sufficient to invalidate a claim threatens to invalidate far too much subject matter. By their very nature, patents are preemptive. Allowing courts and patent examiners to freely draw the line between allowable and prohibited levels of preemption invites unpredictable and arbitrary decisions based on personal value judgments. That very problem arose here, where the district court held the claims invalid, at least in part, because they covered what the judge deemed to be “the only commercially viable way of detecting” the embodiment of the law of nature.

The Promising Potential in Judge Linn’s Concurrence

Judge Linn’s concurrence is promising, but it falls short of its full potential. Judge Linn does a better job than the majority in recognizing and understanding the legal significance of the important facts of this case:

[N]o one was amplifying and detecting paternally-inherited cffDNA using the plasma or serum of pregnant mothers. Indeed, the maternal plasma used to be “routinely discarded,” . . . because, as Dr. Evans testified, “nobody thought that fetal cell-free DNA would be present.”

It is encouraging to see that a Federal Circuit judge has finally gone on record to point out the problems caused by ever-broadening applications of Mayo:

I join the court’s opinion invalidating the claims of the ‘540 patent only because I am bound by the sweeping language of the test set out in Mayo Collaborative Services v. Prometheus Laboratories, Inc. . . . In my view, the breadth of the second part of the test was unnecessary to the decision reached in Mayo. This case represents the consequence—perhaps unintended—of that broad language in excluding a meritorious invention from the patent protection it deserves and should have been entitled to retain.

Judge Linn errs, however, in his acquiescence that Mayo requires the majority’s conclusion. Judge Linn’s concurrence generally reads more like a dissent, but he undercuts his own criticism of Mayo and its effects by calling his opinion a “concurrence.” As he laments:

The Supreme Court’s blanket dismissal of conventional post-solution steps leaves no room to distinguish Mayo from this case, even though here no one was amplifying and detecting paternally-inherited cffDNA using the plasma or serum of pregnant mothers.

But the second half of this sentence shows the critical distinction that makes Sequenom’s claims patent-eligible, even in view of Mayo. Unlike the claims analyzed in Mayo, Sequenom’s process is new and not routinely engaged in by researchers in the field. Judge Linn even states the point better elsewhere in his own concurrence:

Unlike in Mayo, the ‘540 patent claims a new method that should be patent eligible. While the instructions in the claims at issue in Mayo had been widely used by doctors—they had been measuring metabolites and recalculating dosages based on toxicity/inefficacy limits for years—here, the amplification and detection of cffDNA had never before been done.

Judge Linn should be praised for critiquing Mayo as bad law that has led to the invalidation of untold meritorious patent claims. Unfortunately, however, he may have unintentionally contributed to the expansive scope of Mayo about which he complains by failing to factually distinguish (and hence cabin) the Supreme Court’s opinion when presented with such a good opportunity to do so.

All told, the Federal Circuit’s opinion in Ariosa v. Sequenom is a predictable, yet unfortunate, application of the Supreme Court’s disastrous reasoning in Mayo. The unintended consequences of the Supreme Court’s opinion have been further realized in the Federal Circuit’s denial of Sequenom’s innovative claimed method for diagnosing fetal abnormalities. Only time will tell how many other innovations will suffer under the Supreme Court’s careless expansion of Section 101’s patent eligibility analysis.


Unintended Consequences of “Patent Reform”: The Customer Suit Exception

In the last two weeks, the House and Senate Judiciary Committees marked up wide-ranging patent legislation ostensibly aimed at combating frivolous litigation by so-called “patent trolls.” But while the stated purpose of the House and Senate bills—H.R. 9 (the “Innovation Act”) and S. 1137 (the “PATENT Act”), respectively—is to combat abusive litigation, a closer look at the actual language of the bills reveals broad provisions that go far beyond deterring frivolous lawsuits. This far-reaching language has raised concerns in the innovation industries that, instead of curbing ambulance-chasing patentees, Congress is preparing to fundamentally weaken the property rights of all inventors, emboldening patent infringers in the process.

The “customer suit exception” or “customer stay” provisions that appear in both bills are particularly troubling. These provisions direct courts to stay patent infringement suits against “retailers” and “end users” in favor of suits involving manufacturers higher up the supply chain. While the basic idea makes sense—we’ve all heard stories of coffee shops being sued for patent infringement because of the Wi-Fi routers they used—the provisions are drafted so broadly and inflexibly that they invite abuse and gamesmanship by infringers at the expense of legitimate patent owners.

Both the Innovation Act and the PATENT Act provide that “the court shall grant a motion to stay at least the portion of the action against a covered customer” that relates “to infringement of a patent involving a covered product or covered process” if certain conditions are met. The first condition in both bills is that the “covered manufacturer” must be a party to the same action or to a separate action “involving the same patent or patents” related to “the same covered product or covered process.” In other words, so long as the manufacturer is challenging the patentholder, the customer is off the hook.

The two main problems here are that (1) the definition of “covered customer” in both bills is exceedingly broad, such that almost any party can claim to be a “customer,” and (2) the provisions leave the courts no discretion in deciding whether to grant a stay, forcing them to halt proceedings even when it’s not warranted.

Both bills define “covered customer” as “a retailer or end user that is accused of infringing a patent or patents in dispute.” “Retailer,” in turn, is defined as “an entity that generates” its “revenues predominantly through the sale to the public of consumer goods and services,” and it explicitly excludes “an entity that manufactures” a “covered product or covered process” or “a relevant part thereof.” Thus, a “retailer” is a “customer,” but a “manufacturer” is not.

This language is far broader than necessary to achieve the stated purpose of protecting downstream retailers and end users. The Senate’s section-by-section breakdown of the PATENT Act claims that the “customer stay is available only to those at the end of the supply chain.” But the actual definitions in both bills are so broad that almost any entity in the supply chain would be eligible for a mandatory stay. This is so because almost all manufacturers are also retailers of other manufacturers; that is, almost all manufacturers could claim to be a “customer.”

Take, for example, a smartphone company that sources its components from a third-party manufacturer. If the smartphone company were sued for patent infringement over a component, it could claim to be a “covered customer” under both bills. Many smartphone companies generate “revenues predominantly through the sale to the public of consumer goods and services,” and they would not be considered “an entity that manufactures” the component. As a “retailer,” the smartphone company would be entitled to a mandatory stay, even though it’s nothing like the mom-and-pop coffee shop the customer stay provisions are designed to help. A district court would be forced to grant the stay, even if doing so hampered a legitimate patentholder’s ability to enforce its property right.

Against this backdrop, it’s important to keep in mind that the decision to stay proceedings has historically been left to the discretion of judges. Sometimes there are indeed good reasons to grant a stay, but each case is unique, and courts frequently weigh many factors in deciding whether a stay is appropriate. Instead of recognizing this dynamic, the Innovation Act and the PATENT Act mandate a one-size-fits-all solution to an issue that is best determined on a case-by-case basis. In effect, the bills tie the hands of district court judges, forcing them to stay suits even when the equities dictate otherwise.

While in some cases a manufacturer may be the more appropriate party to litigate a patent suit, it is not always true that efficiency or justice dictates staying a suit against a customer in favor of litigation involving the manufacturer. Courts generally balance several factors, such as convenience, availability of witnesses, jurisdiction over other parties, and the possibility of consolidation, when deciding whether to grant a stay. Courts consider whether the stay will lead to undue prejudice or tactical disadvantage, and they examine whether it will simplify the issues and streamline the trial. The decision to stay involves an extensive cost-benefit analysis for both the court itself and the litigants.

The Supreme Court has often emphasized the importance of judicial discretion in deciding whether a stay is warranted. As Justice Cardozo wrote for the Court in 1936, the decision to stay “calls for the exercise of judgment, which must weigh competing interests and maintain an even balance.” Justice Cardozo warned that the judiciary “must be on our guard against depriving the processes of justice of their suppleness of adaptation to varying conditions.” In the patent law context, Justice Frankfurter, writing for the Court in 1952, declared: “Necessarily, an ample degree of discretion, appropriate for disciplined and experienced judges, must be left to the lower courts.”

The problem with the House and the Senate bills is that they take away this important “exercise of judgment” and threaten to remove much-needed flexibility and adaptation from the litigation process. The customer stay provisions take the “ample degree of discretion,” which is “appropriate for disciplined and experienced judges,” and place it into the hands of the alleged infringers. Infringers are not likely to be motivated by important notions of efficiency or justice; they’re likely to be motivated by self-interested gamesmanship of the system to their own advantage.

The proponents of the customer stay provisions claim that they’re necessary to help the little guy, but the provisions in both bills just aren’t drafted like that. Instead, they’re drafted to tie the hands of judges in countless cases that have nothing to do with small-time retailers and end users. The courts already have the power to stay proceedings when the equities tip in that direction, but these bills disrupt the judicial discretion on which the patent system has long depended. Customer stays certainly have their place, and that place is in the hands of judges who can take into account the totality of the circumstances. Judges should not be forced to make the important decision of whether to grant a stay based on overbroad and inflexible statutory language that goes far beyond its stated purpose.


Cohen et al. “Patent Trolls” Study Uses Incomplete Data, Performs Flawed Empirical Tests, and Makes Unsupportable Findings

PDF summary available here

I.   Introduction

A recent draft study about patent licensing companies entitled “Patent Trolls: Evidence from Targeted Firms” is making the rounds on Capitol Hill and receiving press coverage. This attention is unfortunate, because the study is deeply flawed and its conclusions cannot and should not be relied upon. If the draft paper is ever published in a peer-reviewed journal, it will certainly need to be greatly revised first, with its most notable results likely changing or disappearing. In sum, the study should receive no credit in policy debates.

The study, by Lauren Cohen, Umit G. Gurun, and Scott Duke Kominers, finds that non-practicing entities (NPEs) are “opportunistic” because they target defendants that (1) are cash-rich (particularly compared to practicing entity patentees), (2) operate in industries that “have nothing to do with the patent” in suit, (3) are staffed by small legal teams, and (4) are busy with numerous non-IP cases. Additionally, the authors conclude that defendants that lose in patent litigation with NPEs on average have marked declines in subsequent R&D expenditures, on the order of $200 million per year. On this basis, the authors suggest “the marginal policy response should be to more carefully limit the power of NPEs.” One of the authors has been circulating this unpublished study to congressional staffers to make the case that NPEs have a large negative effect on US innovation.

II.   Critique of the Study

Professor Ted Sichelman of the University of San Diego School of Law, an expert in empirical studies of patent litigation, critiques the most recent, publicly available version of the Cohen et al. study in detail in his response paper, “Are Patent Trolls ‘Opportunistic’?”[1] He finds that the study’s dataset is incomplete and unrepresentative, its theoretical model is flawed, and its empirical models are unsound. Professor Sichelman concludes that neither the study’s findings nor its policy prescriptions are justified. Major weaknesses in the study are as follows:

  • The study’s public-firm defendant dataset, as used in the current version of the paper, is incomplete and unrepresentative
  • The study relies on proprietary, unverified coding from PatentFreedom that groups together numerous NPE types (including individuals, R&D shops, and IP holding companies of operating companies), but in making its policy recommendations, the study assumes all NPEs are patent aggregators
  • The study’s finding that NPEs sue cash-rich defendants may simply be driven by the fact that NPEs tend to target software, Internet, and finance-related companies for reasons unrelated to cash holdings, but these companies simply happen to have larger cash-holdings than the average publicly traded company
  • When comparing NPE behavior to that of operating companies, the study improperly includes operating company suits in which the patentees primarily seek injunctions, which are not cash-driven suits
    • Our belief is that NPEs and operating companies alike that primarily seek royalties are likely to seek defendants with enough cash to pay likely damage awards and—like a seller of goods ensuring that a buyer has sufficient cash to pay for those goods—there is nothing “opportunistic” in this behavior
  • NPEs asserting patented technology that is different from the primary industry of the accused infringer are typically not going “after profits unrelated to the patents”
    • For instance, the use of patented computer hardware, software, or technical equipment may occur in any industry and provide a competitive advantage relative to others using non-patented technology
  • The study’s datasets and variables to determine the size of law firm and the number of pending cases are incomplete and flawed
  • The authors’ finding that R&D of accused infringers is differentially affected by a “loss” is based on a very small dataset of “wins” (n=35)

In sum, there is no support for the study’s policy recommendation “to more carefully limit the power of NPEs.” In this regard, we reiterate our view that a plaintiff’s targeting of defendants with enough cash to satisfy a damages judgment is simply ordinary litigation behavior. According to Professor Sichelman, there is “massive risk aversion by many small NPEs” and “large uncertainty in [patent] cases” that may cause any patentee primarily seeking money damages to assert its patents against defendants who can pay their bills.

Finally, in making their policy proposals to restrict NPEs, Cohen et al. rely on the discredited study of Bessen and Meurer (2014) to argue that NPEs do not channel a large percentage of funds received back to inventors. As Schwartz and Kesan (2014) have shown, Bessen and Meurer’s study is inapplicable to most NPEs, because only 12 publicly traded aggregators were examined, and even for those 12 aggregators, Schwartz and Kesan persuasively argue that Bessen and Meurer’s findings are wrong. Indeed, there is ample evidence that many patent aggregators return 50% of net recoveries in litigation or licensing (i.e., after paying for attorneys’ fees and related costs) and that many NPEs are individuals, R&D shops, and other entities that effectively keep 100% of the net returns from recoveries.

As such, the Cohen et al. (2014) study should receive no credit in congressional policy debates. Indeed, another leading academic at a recent conference expressed surprise and dismay that this early-stage study was being circulated by its authors throughout Congress.

Notes:

[1] The authors presented new material in response to Sichelman’s critique at a recent conference, but as far as we know, they have not made any of it available to the general public. As such, we focus on Sichelman’s critique of the most recent, publicly available version of the study.


Intellectual Property Unites Creators and Innovators

This is the first in a series of posts summarizing CPIP’s 2014 Fall Conference, “Common Ground: How Intellectual Property Unites Creators and Innovators.” The Conference was held at George Mason University School of Law on October 9-10, 2014. Videos of the conference panels and remarks, as well as panel summaries, will be available soon.

Introduction by Professors Adam Mossoff and Mark Schultz

Common Ground: How Intellectual Property Unites Creators and Innovators

The creative industries and innovation industries have much in common, but too often this is overlooked. Both industries engage in brilliant intellectual work to bring new products and services into the world, both take great risks to commercialize this work, and both depend on intellectual property – copyrights (for the creative industries) and patents (for the innovation industries). Unfortunately, most accounts of these two industries emphasize their differences and frequently portray them in conflict.

This conference will explore the common ground shared by these two dynamic industries, focusing on the similar values secured by their patents and copyrights and thus their common policy goals and commercial developments.

It should be unsurprising that these two industries share much in common. The work of inventors and artists is much the same. We see hints of this in their respective aspirations. Engineers, for example, often talk of seeking “elegant” or “beautiful” solutions to the technological problems they face. Artists also strive to innovate technically in how they create their works, as demonstrated with much panache in the recent documentary, Tim’s Vermeer. Many creators apply their prodigious talents to both art and invention.

One may think of a Steve Jobs today as exemplifying this truth, but history is replete with examples. Leonardo da Vinci also comes to mind, the quintessential Renaissance Man. In the 19th century, Samuel Morse invented the telegraph, but he was also a successful artist and in fact he developed the telegraph while working as a well-known Professor of Art at New York University.

In modern America, Walt Disney has defined much of our culture not just with his artistic creations, but also with his innovative technological creations in movies, theme parks and products. More recently, filmmakers George Lucas and James Cameron have cast large shadows in popular culture, but their contributions to filmmaking technology may prove even more enduring and pervasive.

These and many other examples are unsurprising when one considers that art and technology both result from the same source: productive intellectual labor.

As the work of artists and inventors is at heart the same, so is the moral and economic case for securing property rights to them. Artists and inventors deserve to own the fruits of their productive labors. In protecting these labors, intellectual property rights secure to them their liberty and their careers. These rights thus fuel the vast economic activity that drives the innovation economy – bringing to market the products and services that ensure full and flourishing lives for them and for the rest of us as well.

Too often, though, the creative and innovation industries are portrayed as being at odds. One popular narrative today – in both scholarly and popular accounts – is that technology disrupts the creative industries, forcing copyright owners to adapt. This is a myopic account of their relationship that ultimately creates a false picture. In truth, creativity and innovation – secured by copyrights and patents – constantly spur each other to greater heights.

The true story of creativity and innovation is more properly viewed as a virtuous circle.

Recording and broadcast technology, for instance, gave musicians and other performers their first worldwide audiences, whose demand for ever-more entertainment and information spurred further improvement and expansion of technology. The invention of the electric guitar, spurred by a series of patented improvements, enabled blues and rock ‘n’ roll, which in turn pushed further developments in music and recording technology.

The Internet certainly created much disruption, but it also has been a fountainhead of creativity. To take just one example, streaming of original, creative content enables television viewers to enjoy storytelling as never before, bringing about what some are now calling a Second Golden Age of Television.

Our technological devices, such as smartphones and iPads, would not be so well loved and so ubiquitous without the games, music, and video content they deliver to hundreds of millions of people the world over.

The common ground and shared aspirations of creators and innovators are clear, but rarely appreciated in the din of today’s policy debates.

Thus, our Annual Conference this year considers afresh the common goals, challenges and needs of the creative and innovation industries. Many distinguished speakers with extensive knowledge and experience in both fields will address how intellectual property rights represent the bedrock of this common ground. We hope that you will enjoy what promises to be enlightening discussion.

**Panel summaries coming soon**


Patent Policy Debates Characterized by "Intolerably High Ratio of Theory to Evidence"

In an interview with Law360 last week, FTC Commissioner Joshua Wright spoke about the FTC’s upcoming study on PAEs and the state of today’s patent policy debates. The interview is well worth reading in its entirety, and we’ve highlighted a couple of key quotes below.

“One of the most fascinating things about the policy debates in and around patents, and by extension the intersection of patent law and antitrust law, is that most of the debate is chock full of theory and supposition but completely devoid of empirical evidence. … It is very difficult to move forward sensibly in debates with those characteristics.”

“Wright said that without evidence of ‘pervasive market failure’ in the standard setting space, the FTC and the U.S. Department of Justice should avoid the temptation to serve as ‘management consultant’ to standard setting groups and their members.”

Categories
Copyright Copyright Theory History of Intellectual Property Innovation Intellectual Property Theory Law and Economics Patent Law Patent Litigation Patent Theory Statistics Uncategorized

Intellectual Property, Innovation and Economic Growth: Mercatus Gets it Wrong

By Mark Schultz & Adam Mossoff

A handful of increasingly noisy critics of intellectual property (IP) have emerged within free market organizations. Both the emergence and the vehemence of this group have surprised most observers, since free market advocates generally support property rights. It’s true that there has long been a strain of IP skepticism among some libertarian intellectuals. However, the surprised observer would be correct to think that the latest critique is something new. In our experience, most free market advocates see the benefit and importance of protecting the property rights of all who perform productive labor – whether the results are tangible or intangible.

How do the claims of this emerging critique stand up? We have had occasion to examine the arguments of free market IP skeptics before. (For example, see here, here, here.) So far, we have largely found their claims wanting.

We have yet another occasion to examine their arguments, and once again we are underwhelmed and disappointed. We recently posted an essay at AEI’s Tech Policy Daily prompted by an odd report recently released by the Mercatus Center, a free-market think tank. The Mercatus report attacks recent research that supposedly asserts, in the words of the authors of the Mercatus report, that “the existence of intellectual property in an industry creates the jobs in that industry.” They contend that this research “provide[s] no theoretical or empirical evidence to support” its claims of the importance of intellectual property to the U.S. economy.

Our AEI essay responds to these claims by explaining how these IP skeptics both mischaracterize the studies that they are attacking and fail to acknowledge the actual historical and economic evidence on the connections between IP, innovation, and economic prosperity. We recommend that anyone who may be confused by the assertions of any IP skeptics waving the banner of property rights and the free market read our essay at AEI, as well as our previous essays in which we have called out similarly odd statements from Mercatus about IP rights.

The Mercatus report, though, exemplifies many of the concerns we raise about these IP skeptics, and so it deserves to be considered at greater length.

For instance, something we touched on briefly in our AEI essay is the fact that the authors of this Mercatus report offer no empirical evidence of their own within their lengthy critique of several empirical studies, and at best they invoke thin theoretical support for their contentions.

This is odd if only because they are critiquing several empirical studies that develop careful, balanced and rigorous models for testing one of the biggest economic questions in innovation policy: What is the relationship between intellectual property and jobs and economic growth?

Apparently, the authors of the Mercatus report presume that the burden of proof is entirely on the proponents of IP, and that a bit of hand waving using abstract economic concepts and generalized theory is enough to defeat arguments supported by empirical data and plausible methodology.

This move raises a foundational question that frames all debates about IP rights today: On whom should the burden rest? On those who claim that IP has beneficial economic effects? Or on those who claim otherwise, such as the authors of the Mercatus report?

The burden of proof here is an important issue. Too often, recent debates about IP rights have started from an assumption that the entire burden of proof rests on those investigating or defending IP rights. Quite often, IP skeptics appear to believe that their criticism of IP rights needs little empirical or theoretical validation, beyond talismanic invocations of “monopoly” and anachronistic assertions that the Framers of the US Constitution were utilitarians.

As we detail in our AEI essay, though, the problem with arguments like those made in the Mercatus report is that they contradict history and empirics. For the evidence that supports this claim, including citations to the many studies that are ignored by the IP skeptics at Mercatus and elsewhere, check out the essay.

Despite these historical and economic facts, one may still believe that the US would enjoy even greater prosperity without IP. But IP skeptics who believe in this counterfactual world face a challenge. As a preliminary matter, they ought to acknowledge that they are the ones swimming against the tide of history and prevailing belief. More important, the burden of proof is on them – the IP skeptics – to explain why the U.S. has long prospered under an IP system they find so odious and destructive of property rights and economic progress, while countries that largely eschew IP have languished. This obligation is especially heavy for one who seeks to undermine empirical work such as the USPTO Report and other studies.

In sum, you can’t beat something with nothing. For IP skeptics to contest this evidence, they should offer more than polemical and theoretical broadsides. They ought to stop making faux originalist arguments that misstate basic legal facts about property and IP, and instead offer their own empirical evidence. The Mercatus report, however, is content to confine its empirics to critiques of others’ methodology – including claims their targets did not make.

For example, in addition to the several strawman attacks identified in our AEI essay, the Mercatus report constructs another strawman in its discussion of studies of copyright piracy done by Stephen Siwek for the Institute for Policy Innovation (IPI). Mercatus inaccurately and unfairly implies that Siwek’s studies on the impact of piracy in film and music assumed that every copy pirated was a sale lost – this is known as “the substitution rate problem.” In fact, Siwek’s methodology tackled that exact problem.

IPI and Siwek never seem to get credit for this, but Siwek was careful to avoid the one-to-one substitution rate estimate that Mercatus and others foist on him and then critique as empirically unsound. If one actually reads his report, it is clear that Siwek assumes that bootleg physical copies resulted in a 65.7% substitution rate, while illegal downloads resulted in a 20% substitution rate. Siwek’s methodology anticipates and renders moot the critique that Mercatus makes anyway.
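To make the substitution-rate arithmetic concrete, here is a minimal Python sketch. The 65.7% and 20% rates are the ones Siwek applies, as noted above; the copy volumes are hypothetical, included only to show how the calculation differs from a naive one-to-one assumption.

```python
# Minimal sketch of how a substitution rate converts pirated copies into
# estimated displaced sales. The 65.7% and 20% rates are Siwek's figures as
# described above; the copy volumes below are hypothetical.

def displaced_sales(pirated_copies: int, substitution_rate: float) -> float:
    """Estimate how many sales the pirated copies actually displaced."""
    return pirated_copies * substitution_rate

bootleg_physical_copies = 1_000_000    # hypothetical volume
illegal_downloads = 10_000_000         # hypothetical volume

total_displaced = (displaced_sales(bootleg_physical_copies, 0.657)
                   + displaced_sales(illegal_downloads, 0.20))

print(f"Estimated displaced sales: {total_displaced:,.0f}")
# Compare with the 11,000,000 "lost sales" that a one-to-one substitution
# assumption would produce for the same hypothetical volumes.
```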

After mischaracterizing these studies and their claims, the Mercatus report goes further in attacking them as supporting advocacy on behalf of IP rights. Yes, the empirical results have been used by think tanks, trade associations and others to support advocacy on behalf of IP rights. But does that advocacy make the questions asked and resulting research invalid? IP skeptics would have trumpeted results showing that IP-intensive industries had a minimal economic impact, just as Mercatus policy analysts have done with alleged empirical claims about IP in other contexts. In fact, IP skeptics at free-market institutions repeatedly invoke studies in policy advocacy that allegedly show harm from patent litigation, despite these studies suffering from far worse problems than anything alleged in their critiques of the USPTO and other studies.

Finally, we noted in our AEI essay how it was odd to hear a well-known libertarian think tank like Mercatus advocate for more government-funded programs, such as direct grants or prizes, as viable alternatives to individual property rights secured to inventors and creators. There is even more economic work being done beyond the empirical studies we cited in our AEI essay on the critical role that property rights in innovation serve in a flourishing free market, as well as work on the economic benefits of IP rights over other governmental programs like prizes.

Today, we are in the midst of a full-blown moral panic about the alleged evils of IP. It’s alarming that libertarians – the very people who should be defending all property rights – have jumped on this populist bandwagon. Imagine if free market advocates at the turn of the Twentieth Century had asserted that there was no evidence that property rights had contributed to the Industrial Revolution. Imagine them joining in common cause with the populist Progressives to suppress the enforcement of private rights and the enjoyment of economic liberty. It’s a bizarre image, but we are seeing its modern-day equivalent, as these libertarians join the chorus of voices arguing against property and private ordering in markets for innovation and creativity.

It’s also disconcerting that Mercatus appears to abandon its exceptionally high standards for scholarly work-product when it comes to IP rights. Its economic analyses and policy briefs on such subjects as telecommunications regulation, financial and healthcare markets, and the regulatory state have rightly made Mercatus a respected free-market institution. It’s unfortunate that it has lent this justly earned prestige and legitimacy to stale and derivative arguments against property and private ordering in the innovation and creative industries. It’s time to embrace the sound evidence and back off the rhetoric.

Categories
Legislation Patent Law Patent Litigation Supreme Court Uncategorized

Supreme Court Revises Fee-Shifting Rules in Patent Cases: Weeding out Bad Actors in a Level Playing Field

By Adam Mossoff* and Brian O’Shaughnessy†
Originally published in LES (USA & Canada)’s weekly e-newsletter, Insights.

On April 29, 2014, the Supreme Court handed down two unanimous decisions in Octane Fitness v. ICON Health & Fitness and Highmark v. Allcare Health Management System, which radically overhaul the rules governing court-awarded attorneys’ fees in patent cases. In brief, the Supreme Court has empowered district courts with greater authority to discourage bad actors in patent litigation. Yet, the Octane Fitness and Highmark decisions preserve valuable incentives for innovation in an equitable system for patent owners who properly assert valid property rights.

Octane Fitness and Highmark confirm that it is the proper role for courts to address problems created by bad faith patent litigation. This is a lesson that should be heeded by Congress, as it works toward new legislation further modifying our patent laws, including mandating fee-shifting in patent cases. Octane Fitness and Highmark counsel Congress to reconsider revising our patent laws, and instead allow courts to do what they do best —manage their dockets and craft rules and procedures to ensure the just and efficient resolution of disputes under federal law.

The legal issue in Octane Fitness and Highmark centered on the proper application of 35 U.S.C. § 285, which authorizes courts to award attorney fees “in exceptional cases” to the prevailing party. Since its inception, § 285 has been recognized as an exception to the “American Rule” (i.e., each party pays its own attorney fees regardless of the outcome). The Federal Circuit had construed § 285 as permitting fee-shifting in only two limited circumstances: an award of attorneys’ fees was permissible only when it was proven by clear and convincing evidence that (1) “there has been some material inappropriate conduct,” or (2) the litigation was “brought in subjective bad faith” and was “objectively baseless.”

The Octane Fitness Court found the Federal Circuit’s construction of § 285 imposed “an inflexible framework onto statutory text that is inherently flexible.” More specifically, the Octane Fitness Court struck down the Federal Circuit’s test because it conflated the “exceptional case” standard with other legally independent bases of punishing misconduct by a patent owner, such as the inequitable conduct doctrine that is similar to factor (1) of the Federal Circuit’s § 285 test. In accord with its plain meaning, the Court construed an “exceptional case” to mean simply a case that “stands out from others with respect to the substantive strength of a party’s litigating position (considering both the governing law and the facts of the case) or the unreasonable manner in which the case was litigated.”

Following its construction of “exceptional case” based on both its plain meaning and the legislative history for § 285 (Justice Scalia refused to join the three footnotes in Justice Sotomayor’s opinion that discussed the legislative history), the Octane Fitness Court held that a district court should have the discretionary authority to award attorney fees if the prevailing party has proven by a preponderance of the evidence that the “totality of the circumstances” justifies such an award. The Octane Fitness Court thus lowered both the evidentiary and the substantive legal requirements for a district court in awarding attorney fees under § 285—it is now a case-specific, holistic, equitable analysis more in accord with the approach traditionally taken by courts.

In a companion case, the Court in Highmark bolstered the authority of district courts to manage their dockets and to punish bad actors by revising the standard of review employed by the Federal Circuit in reviewing fee-shifting decisions under § 285. The Federal Circuit had been reviewing fee awards under § 285 de novo. Highmark holds that this approach was contrary to the fact-specific nature of § 285. Thus, Highmark reversed the Federal Circuit, holding that the discretionary award of attorneys’ fees under § 285 should be reviewed by an appellate court only for abuse of discretion.

Standing together, Octane Fitness and Highmark empower district courts to hold accountable those who engage in improper or abusive litigation. This is a valuable reminder that patent infringement suits are merely another form of federal civil litigation, and abusive behavior may be addressed with rules and practices traditionally employed by federal courts.

The carefully crafted holdings in Octane Fitness and Highmark stand in stark contrast to broad legislation pending in Congress. For example, in December 2013, the House passed H.R. 3309, which would impose mandatory fee-shifting to the non-prevailing party. Fees must be paid by the losing party unless it can prove that its position was “substantially justified” or that “special circumstances” would make an award of attorney fees unjust. This would create the same “inflexible framework” and inequitable “generalizations” for which the Supreme Court criticized the Federal Circuit in its prior application of § 285.

H.R. 3309 would penalize patent owners by creating an exception to the American rule. This would affect all patent owners, not just bad actors, and would increase risk, and therefore harm individual inventors, start-ups, small companies, universities, and other entities that create and license patented inventions. The risk of bearing the financial burden of both parties’ attorney fees would be an enormous disincentive for any poorly capitalized patent owner in enforcing its legitimate property rights in court; this would thus discourage, rather than promote, the progress of the useful arts.

Octane Fitness and Highmark will promote more responsible behavior among patent litigants.  Under the principles enunciated in these two cases, patent owners must be more careful in conducting pre-litigation due diligence.  Before bringing suit, patent owners would be well advised to retain objective and competent counsel to conduct a thorough infringement investigation, to document the bases on which a suit might be brought, and to obtain a written opinion on those bases.  Failure to procure such an opinion before bringing suit would increase a patent owner’s exposure to a challenge that the suit was brought without the requisite good faith basis, and with that, a claim for attorney fees.

A legislative approach that mandates fee shifting is a dramatic shift from the traditional American rule. It hinders a judge’s discretion and the court’s inherent power to impose appropriate sanctions. It will limit remedial measures that judges might otherwise employ to promptly and cost-effectively resolve cases and discourage litigation abuse. Further, it will invite gamesmanship to protect the bad faith litigant from suffering the fate of a “non-prevailing party.”

Case by case implementation of the Octane Fitness and Highmark decisions will afford equitable and predictable enforcement of legitimate patent rights, promoting innovation while preserving the power of the courts to punish bad actors. Overly broad legislation, however, will increase uncertainty and risk in patent enforcement, weaken the patent property right, diminish the commercial value of patents, and discourage innovation. This will have devastating consequences on the innovation economy, and will suppress economic development and job creation. Octane Fitness and Highmark show that judges, who witness abusive behavior firsthand, are in the best position to address and remedy bad faith litigation and abusive tactics. This is why we have traditionally left it to the courts to address such tactics. The Supreme Court has now clarified and enhanced that power. Congress should defer to the courts in exercising it.

* Professor of Law at George Mason University School of Law. Professor Mossoff is also co-founder and Co-Director of Academic Programs of the Center for the Protection of Intellectual Property at Mason Law. Thank you to Steve Tjoe for his assistance with this essay.

† Regional Vice President, USA, of the Licensing Executives Society (USA and Canada), Inc.; and Shareholder in the law firm of RatnerPrestia, Washington, DC.

Categories
Commercialization Conferences High Tech Industry Innovation Intellectual Property Theory Law and Economics Legislation Patent Law Patent Licensing Patent Litigation Patent Theory Software Patent Uncategorized

The Unintended Consequences of Patent "Reform"

By Steven Tjoe

Much of today’s patent policy debate focuses on the dynamics of patent litigation.  Sensational anecdotes of abusive demand letters, litigants strategically exploiting bad patents, and tales of so-called “patent trolls” (reinforced by now-debunked empirical claims) have captured the public’s imagination and spurred Congress to rush to revise the patent system.  Unfortunately, the fervor to address perceived patent litigation abuses often overlooks the substantial unintended consequences of recent and proposed legislation.

CPIP and WARF’s recent conference, From Lab to Market: How Intellectual Property Secures the Benefits of R&D, featured a panel designed to fill this void in the conversation.  Instead of myopically focusing on trolls and litigation abuse, the panelists, Eb Bright, Robert Sterne, and Carl Gulbrandsen, brought the discussion back to reality and addressed the greater context of how recent and proposed changes to patent law impact our innovation ecosystem at large.

First, an understanding of how ideas are developed and brought to market is crucial to evaluating the ramifications of patent legislation.  Eb Bright, Executive Vice President and General Counsel of ExploraMed, illustrated this often-overlooked process from the perspective of the medical device industry.  In the world of medical device development, the financial risks of bringing an idea to market are very high.  The cost from conception to market can range from approximately $75 million for low-risk devices to approximately $135 million for high-risk devices.  Additionally, it takes 8-10 years on average to begin seeing a return on investment.

The result is that innovators in the medical device space – mostly small start-up companies – must secure significant financing from venture capitalists and other investors to keep their companies alive during this lengthy process.  Strong patents are fundamental to securing this financing.  They are essential to keeping competitors from free-riding on a company’s work and poaching its investors’ returns.  Investors are loath to finance a start-up without confidence that the company can protect its intellectual property (which often accounts for a significant portion of the company’s value) from free-riders.  In this fragile innovation ecosystem, legislation that weakens patents and makes it harder for small companies to enforce their patent rights could have devastating consequences for start-ups’ ability to secure essential financing.

Carl Gulbrandsen, Managing Director of WARF, discussed proposed patent legislation from the perspective of a large university technology transfer office.  As the University of Wisconsin’s licensing arm, WARF licenses university patents and returns approximately $80 million a year to the university to support further research.  This symbiotic relationship fuels research and also adds significant value to the university’s inventions.  By marketing and licensing inventions to companies (often small start-ups) that take on the substantial effort of turning those inventions into actual products, WARF plays a crucial role in moving innovation from the lab to the marketplace.  Importantly, strong patent rights lie at the center of this virtuous cycle.

Mr. Gulbrandsen observed that proposed legislation would disrupt this process by making it substantially more difficult for universities to enforce their patents, and therefore substantially more difficult for universities to license and commercialize their inventions.  While established organizations like WARF may be able to handle the increased costs and risks, at the margin fewer universities would be able to license their intellectual property.  The result is that fewer inventions would move from lab to market, and universities would have less revenue to fuel future research.

It is against this backdrop that efforts to revise our patent system occur.  Overbroad “patent abuse” legislation that fails to appreciate the economic realities of our innovation ecosystem can lead to significant unintended consequences.  Robert Sterne, Director of Sterne Kessler, illustrated some unintended consequences from the last major patent “reform” legislation, the America Invents Act of 2011 (AIA).  In particular, Mr. Sterne addressed issues arising from the Inter Partes Review (“IPR”) and Covered Business Method Patent Review (“CBM”) procedures implemented under the AIA.

Mr. Sterne spoke about trial practice before the USPTO Patent Trial and Appeal Board (“PTAB”), noting that Rule 42.1(b) establishes that the rules should “be construed to secure the just, speedy, and inexpensive resolution of every proceeding.”  While the resulting procedures are certainly speedy (cases proceed through the USPTO and through appeal at the Federal Circuit within 2 years) and are cheaper than District Court proceedings, the procedures are far from just, and have proved particularly unforgiving for patent owners as a result of vast departures from well-established rules and procedures utilized by the courts.

Mr. Sterne explained how the new IPR procedures include more limited claim construction rules, less stringent burdens of proof to invalidate a patent, and less opportunity to adequately prove non-obviousness.  Of particular concern to patent owners is the inability to show non-obviousness.  In District Court, patent owners generally show non-obviousness by telling the story of the invention.  Inventors recount the state of the technology prior to their invention and the contributions their invention made.  By contrast, PTAB’s narrow time limitations and constraints on responses filed strip patent owners of the ability to do the same in IPR proceedings.

Consequently, the trial outcomes under the new system have yielded startlingly negative results for patent holders.  As of March 7, 2014, the PTAB had issued 19 Final Written Decisions on the merits for IPRs and CBMs.  In all but three of these proceedings, the Board cancelled all claims for which trial was instituted.  In total, 95.2% of all claims for which trial was instituted were cancelled and 82.9% of all claims that were initially challenged by the petitioner were cancelled.
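The two percentages above use different denominators, which is easy to miss on a first read. The toy calculation below uses entirely hypothetical claim counts just to show how the “claims for which trial was instituted” rate and the “claims initially challenged” rate are each computed.

```python
# Toy calculation with hypothetical claim counts, illustrating why the two
# cancellation rates reported above have different denominators.

claims_challenged = 100   # claims the petitioner initially challenged (hypothetical)
claims_instituted = 80    # subset on which the PTAB instituted trial (hypothetical)
claims_cancelled = 75     # claims ultimately cancelled (hypothetical)

rate_vs_instituted = claims_cancelled / claims_instituted
rate_vs_challenged = claims_cancelled / claims_challenged

print(f"Cancelled as a share of instituted claims:  {rate_vs_instituted:.1%}")
print(f"Cancelled as a share of challenged claims: {rate_vs_challenged:.1%}")
```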

Furthermore, IPR proceedings are always available and may stand alone or exist as part of a litigation strategy.  A patent owner does not have to take any action before being challenged.   New business entities, such as subscription services designed to work around the estoppel provisions, are already being formed to capitalize on the lopsided nature of the process.  It’s important to note that the constant threat of IPR and the risks and costs associated with it are not only detrimental to patent owners, they also affect our entire innovation ecosystem.

The central takeaway from the panel was this:  As we consider patent legislation ostensibly designed to curb abusive litigation, it is crucial to consider the potential unintended consequences of weakening patent rights across the board.  We must recognize the economic realities of our innovation ecosystem, and we must narrowly tailor any solutions to address the limited instances of abuse without harming start-ups, universities, and all the other patent owners that fuel our innovation economy.

Categories
Antitrust Commercialization Damages DOJ Economic Study FTC High Tech Industry Innovation International Law Law and Economics Patent Law Patent Licensing Patent Litigation Reasonable Royalty Remedies Software Patent Uncategorized

An Insightful Analysis of “Fair and Reasonable” in the Determination of FRAND Terms

By Steven Tjoe

In his forthcoming George Mason University Law Review article entitled “The Meaning of ‘Fair and Reasonable’ in the Context of Third-Party Determination of FRAND Terms,” Professor Damien Geradin explores the delicate balance of interests protected by the current system of arm’s length negotiations in the standard-setting process, and the detrimental effect disrupting this balance would have on standards-related technologies and our innovation economy.

Fair, reasonable, and non-discriminatory (“FRAND”) commitments are the subject of frequent criticism in both legal and economic literature.  Many policymakers, practitioners, and academics have argued that the inherent ambiguity in establishing “fair and reasonable” terms creates inefficiencies and perverse incentives for standard-essential patent (“SEP”) holders to exercise ex post opportunism.  Based on this belief, some now argue that the standard-setting organization (“SSO”) contracting process is broken and requires additional legal and regulatory mechanisms to afford standard implementers greater protection.

Professor Geradin’s article brings some much-needed balance to this debate.  By highlighting the economic principles and the carefully negotiated terms underlying current SSO contracting processes, Geradin exposes the pitfalls of many of the suggested reforms.  Geradin’s analysis elucidates the SSO contracting process itself through a dissection of the intensive discussions and negotiations that gave rise to the prominent ETSI Intellectual Property Rights (“IPR”) policy, a policy that played a fundamental role in standardization in the wireless communications field.  The ETSI IPR policy shows that its members understood the notions of “fair” and “reasonable” to define a fair balance between the interests of SEP holders and standard implementers – securing the availability of the standards while simultaneously ensuring that SEP holders are “adequately and fairly rewarded for the use of their [intellectual property rights].”

Professor Geradin addresses two potential forms of ex post opportunism – “hold-up” and “royalty stacking” – and observes that though both could occur in theory, there is little evidence to suggest that they occur in real-world patent licensing.  Regarding the hold-up conjecture, Geradin observes that the relative absence of hold-up is consistent with the economics of contracting: parties who repeatedly deal with each other will limit opportunism to protect their reputations.  Similarly, royalty stacking is a rare occurrence in high-technology industries, where cross-licensing is common and greatly diminishes that risk.  Given the absence of empirical evidence demonstrating opportunistic behavior by SEP holders, Geradin cautions against implementing reforms that systematically weaken the bargaining power of SEP holders, as proposed reforms may themselves trigger reciprocal opportunistic behavior – such as “reverse hold-up” – by standard implementers.
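For readers unfamiliar with the royalty-stacking conjecture, the sketch below (with hypothetical rates and a hypothetical device price) shows the arithmetic behind the concern; as Geradin notes, cross-licensing generally keeps this scenario from materializing in practice.

```python
# Stylized illustration of the royalty-stacking conjecture: if each SEP holder
# demanded its rate in isolation, the stacked total could swamp the product's
# economics. All numbers are hypothetical.

device_price = 400.00              # hypothetical price of a standard-compliant device
sep_holder_rates = [0.01] * 30     # 30 SEP holders each asking 1% of the price (hypothetical)

stacked_rate = sum(sep_holder_rates)
stacked_royalty = device_price * stacked_rate

print(f"Stacked royalty burden: ${stacked_royalty:.2f} "
      f"({stacked_rate:.0%} of the device price)")
```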

In the context of FRAND licensing, Geradin observes that for rewards to be adequate and fair, they must not only compensate SEP holders for their risky R&D investments (including investments in prior failed projects), they must also give SEP holders sufficient incentive to keep investing in the development of standardized technologies.  The negative consequences of systematically offering below-FRAND terms to SEP holders are two-fold.  First, as Geradin eloquently observes, “[i]t is a basic law of finance that capital flows where the best opportunities arise,” and developers of technologies in standardized sectors unduly constrained by low returns may seek opportunities outside the standardized sectors.  Second, without adequate returns, major technological contributors may decide to no longer participate in SSOs in order to avoid being bound by FRAND commitments.  As a result, standards would likely fail to incorporate the best technology available.

Accordingly, Geradin is skeptical of many of the policy measures suggested to provide additional protections to potential licensees and consumers of standardized technologies.  One such measure is the “ex ante incremental value method,” under which the rate that would have resulted from ex ante competition between the technology in question and alternative technological solutions serves as a benchmark for whether a royalty is fair and reasonable.  As Geradin observes:

While the pricing of SEPs at incremental value may facilitate the dissemination of the standard in the short-term, the licensing fee resulting from the incremental value of the SEP holder’s technology would certainly not be enough to properly compensate the investment costs and risks [a] company incurred in developing its superior technology, as well as to incentivize it to make investment in new technologies.

With respect to this method, Geradin concludes that the “ex ante incremental rule is thus not so much an instrument to prevent the theoretical risks of hold-up, but a tool to lower royalty rates to the benefit of standard implementers.”  As such, the ex ante incremental value rule could potentially have a devastating impact on innovation incentives and standards.
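As a rough illustration of the logic Geradin is critiquing, the sketch below renders the ex ante incremental value benchmark in a few lines; the values are hypothetical and the code is only a stylized version of the rule, not anything drawn from the article itself.

```python
# Stylized rendering of the "ex ante incremental value" benchmark: the royalty
# ceiling is the incremental value of the patented technology over the
# next-best alternative, measured before standardization. Values are hypothetical.

value_of_chosen_technology = 1.50   # per-unit value to the implementer (hypothetical)
value_of_next_best_option = 1.30    # per-unit value of the ex ante alternative (hypothetical)

royalty_ceiling = value_of_chosen_technology - value_of_next_best_option
print(f"Incremental-value royalty ceiling per unit: ${royalty_ceiling:.2f}")

# Per Geradin's critique, this ceiling ignores the SEP holder's sunk R&D costs
# (including failed projects), so it can fall well short of the return needed
# to sustain future investment in standardized technologies.
```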

Geradin next explores the multi-factor test contained in Georgia-Pacific Corp. v. United States Plywood Corp. (“Georgia-Pacific”).  In Georgia-Pacific, a federal district court established a framework in which fifteen factors, offering a variety of benchmarks, are used to compute reasonable royalty damages by contemplating a “hypothetical negotiation” between a “willing licensor” and a “willing licensee” at the time the infringement began.  Geradin observes:

A key strength of the Georgia-Pacific framework is that it is sufficiently flexible to establish a balance between the dual objective of SSO’s IPR policies … which are both to ensure standard dissemination and adequate remuneration of the SEP holder.  In other words, unlike abstract mathematical methods, which … can be easily tipped in favor of the prospective licensee (or the prospective licensor), the multi-factor test at the core of the Georgia-Pacific framework reduces the risk of bias if it is properly carried out.

As such, the Georgia-Pacific framework can better reflect the reality of contract negotiations, where the parties look to a variety of factors, and not some magic formula, to come to mutually acceptable licensing terms.

In the context of FRAND litigation, however, Geradin cautions against potential pitfalls of applying the Georgia-Pacific framework.  At the outset, Geradin notes that licensing agreements are often “highly relationship-specific and thus agreements will be hard to compare.”  Geradin discusses the practice of comparing the rate offered by SEP holders ex post standardization with the rate offered for the same patents ex ante standardization.  Though many are inclined to treat the ex ante rate as a “safe harbor” against any claim of opportunism, Geradin finds that there is little reason why licensors should be prohibited from charging higher rates ex post than ex ante.  Not only may ex post contracts be more efficient in the way they incorporate a clearer understanding of the technology and the market, but forcing SEP holders to charge similar ex ante and ex post rates also deprives them of the ability to offer preferential terms to early adopters of their technology.

Professor Geradin then explores whether patent pools offer a useful benchmark to determine FRAND license terms.  Due to the difficulties of forming pools and the different business models of the relevant patent holders, many standardized sectors simply do not have sizeable patent pools covering their standards.  Even where sizeable patent pools exist, Geradin observes that the pools often will not serve as the right benchmark for FRAND rate determination.  In many standardized sectors, such as in wireless communications, patent pools tend to be used by SEP holders to avoid transactions costs, rather than to obtain FRAND compensation.  Moreover, many patent pools base their method of remuneration on the number of a firm’s patents compared to the size of the pool rather than the relative strength of the patents themselves.  Where numerical proportionality serves as the metric of FRAND compensation, such as in the recent In re Innovatio IP Ventures LLC case, SEP holders have the incentive to inflate the number of patents they contribute to the pool.  Thus, using patent pools as a benchmark runs the risk of setting rates that are well below FRAND.
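The incentive problem with numerical proportionality is easy to see in a short sketch. The allocation rule below (each member's patent count divided by the pool total) follows the description above; the member names and counts are hypothetical.

```python
# Numerical-proportionality allocation used by many patent pools: each member's
# share of pool royalties equals its patent count divided by the pool total,
# regardless of the strength of the individual patents. Names and counts are
# hypothetical.

def pool_shares(patent_counts: dict[str, int]) -> dict[str, float]:
    """Allocate pool royalties in proportion to each member's patent count."""
    total = sum(patent_counts.values())
    return {member: count / total for member, count in patent_counts.items()}

before = pool_shares({"MemberA": 40, "MemberB": 60})
# If MemberB splits its filings into narrower patents to inflate its count,
# its share grows even though its underlying technology is unchanged.
after = pool_shares({"MemberA": 40, "MemberB": 90})

print(before)  # {'MemberA': 0.4, 'MemberB': 0.6}
print(after)   # {'MemberA': ~0.31, 'MemberB': ~0.69}
```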

The potential welfare-reducing consequences of limiting the flexibility of the SSO negotiation process have been well documented in recent legal and economic literature.  As Professor Geradin observes, solutions to perceived FRAND inadequacies that aim to weaken the bargaining position of SEP holders often overreach, in effect triggering the “wholesale devaluation of patents.”  Instead, FRAND determinations should consider the “dynamic nature of standardization” and should be made by balancing the need to (1) make standards available and (2) fairly compensate SEP holders.  This delicate balance of interests is necessary to protect the future of standardization.