
How IP-Fueled Innovations in Biotechnology Have Led to the Gene Revolution

We’ve released a new issue paper, The Gene Revolution, by Amanda Maxham, a research associate and writer at the Ayn Rand Institute.

Dr. Maxham explores how innovations in biotechnology, enabled by the intellectual property rights that protect them, have led to the “Gene Revolution,” where scientists use genetic engineering to dramatically improve human life. In order to combat widespread misinformation about genetically modified organisms (GMOs), she traces mankind’s long history of improving plants, animals, and microorganisms to better serve our needs.

We’ve included the Executive Summary below. To read the full issue paper, please click here.

The Gene Revolution

By Amanda Maxham

Executive Summary

Mankind has been improving plants and animals for millennia. Simply by selecting and breeding those they liked best, our ancestors radically improved upon wild species. Today’s biological inventors, with a deeper understanding of genetics, breeding, and heredity, and with the protection of intellectual property rights, are using the technology of genetic engineering to start a “Gene Revolution.”

In the field of medicine, custom-built genetically engineered microorganisms are brewing up rivers of otherwise rare human hormones, life-saving medicines, and much-needed vaccines. In agriculture, scientists are combining their understanding of plant genetics with laboratory techniques of modern molecular biology to “unlock” the DNA of crop plants. By inserting genes from other plants or even common microorganisms, they are able to give plants desirable traits, solving problems that farmers have faced for millennia—faster and more precisely than ever before.

But despite its successes and a bright future, biotechnology is under attack by activists who spread misinformation and foster consumer mistrust. They have been directly responsible for onerous regulations and other hurdles to innovation that are threatening to stifle what could and should be the “third industrial revolution.”

In an effort to combat this misinformation, this paper situates genetic engineering within mankind’s long history of food improvement and then highlights how genetic engineering has dramatically improved human life. In it, you’ll find 29 plants, animals, and microorganisms, from insulin-secreting E. coli to engineered cotton, from cheese-making fungus to chestnut trees, that represent the promise and possibilities that the Gene Revolution holds–if we hold precious and continue to protect the freedom to invent and the power of scientific innovation.


#AliceStorm for Halloween: Was it a Trick or a Treat?

The following guest post from Robert R. Sachs, Partner at Fenwick & West LLP, first appeared on the Bilski Blog, and it is reposted here with permission.

By Robert R. Sachs

Alice has been busy the last two months, continuing to haunt the federal courts and the Knox and Randolph buildings at the USPTO. Here are the latest #AliceStorm numbers through the end of October 2015:

There have been 34 district court decisions in the past two months, but the percentage of invalidity decisions is holding constant at 70.5%. The number of patent claims invalidated is now over 11,000, but also holding steady at around 71%.

There have been no new Federal Circuit Section 101 decisions, but we’re going to see a flurry of activity in the next couple of months, as the court has recently heard oral argument in a number of patent eligibility cases, and more are on calendar for November.

Motions on the pleadings have soared, with 23 in the past two months alone, and the success rate is up a tick from 70.1% to 71.4%.

PTAB is a bit mixed: the CBM institution rate is down from 86.2% to 83.7%, but the final decision rate is still 100%, with 6 decisions in the past two months invalidating the patents in suit.

Turning to the motion analysis, the motions on the pleadings are the second scariest thing to a patent holder after the specter of attorney fees under Octane Fitness:

The Delaware district court continues as the graveyard of business methods and software patents, with 31 eligibility decisions, up from 19 just two months ago, and their invalidity rate is up from 86.4% to 90.3%.

Jumping into second place is the Eastern District of Texas, with 23 decisions total (up from 16). In contrast to the rest of the bench, its invalidity rate is 34.8%. The Northern District of California edged up from 75% to 78.9% invalidity, and C.D. Cal is up almost 2%.

And finally, here is the rundown on all district court judges with two or more Section 101 decisions.

With today’s blog, I’m introducing some entirely new types of data, looking at the characteristics of the patents that have been subject to Section 101 motions.

As expected, business method patents are the most heavily litigated and invalidated (click to see full size):

The distribution of patents in terms of earliest priority dates shows a very large fraction of the invalidated patents were first filed in 2000:

Now compare that to the distribution of patent classes with respect to priority year as well:

Here too we see a very large number of the business method patents filed in 2000. I’ve coded all of the software related technologies as blue to group them visually.

Why the cluster around 2000? State Street Bank, which held that there was no rule against business method patents, was decided in mid-1998. As those of us who were practicing then remember, it took about two years before the impact of the decision was widespread. This was also the time of the Dotcom bubble, when it seemed that just about everyone was starting up a business on the Internet. Those two factors resulted in a surge of patent filings.

Of all the patents thus far challenged under Alice, only a handful have post-Bilski priority dates:

  • 8447263, Emergency call analysis system, filed in 2011, and litigated in Boar’s Head Corp. v. DirectApps, Inc., 2015 U.S. Dist. LEXIS 98502 (E.D. Cal., 7/28/2015). The court granted DirectApps’ motion to dismiss, finding the patent invalid.
  • 8938510, On-demand mailbox synchronization and migration system, filed in 2010, and litigated in BitTitan, Inc. v. SkyKick, Inc., 2015 U.S. Dist. LEXIS 114082 (W.D. Wash., 8/27/2015). BitTitan’s motion for preliminary injunction was denied in part because SkyKick successfully argued that BitTitan was not likely to succeed on the merits due to Alice problems.
  • 8,604,943; 9,070,295; 9,082,097; 9,082,098; and 9,087,313, all of which claim priority to March 2012, and were invalidated just last week in MacroPoint LLC v. FourKites Inc., Case No. 1:15-cv-01002 (N.D. Ohio, Nov. 5, 2015). The court invalidated all 94 claims in these patents as being directed to the abstract idea of “tracking freight.” While the last four patents were issued in June and July 2015, none of them overcame an Alice rejection, and the court noted that “Nothing in the Reasons for Allowance dictate a finding that these concepts are inventive on the issue of patent-eligible subject matter.”

Over time we’ll see more post-Bilski patents being litigated, and then eventually a true test: a business method patent granted after Alice that overcame an Alice rejection. By my count, there are about 80 such patents thus far, and about another 90 that have been allowed. It will not be too long then before one of these patents is challenged under Section 101.

In my next column, I’ll review some very disturbing decisions coming out of the Delaware district courts.


Debunking Myths About the Proposed Federal Trade Secrets Act

By Mark Schultz

Today, CPIP is proud to release a paper authored by the nation’s preeminent expert on trade secret law, James Pooley. Mr. Pooley’s paper explains the arguments in favor of the Defend Trade Secrets Act of 2015 (“DTSA”), which is currently being considered by Congress. To download the paper, please click here.

The DTSA would create a federal cause of action for trade secret misappropriation. The legislation has been proposed via identical House (H.R.3326) and Senate (S.1890) bills. While trade secret theft has been a federal crime since 1996 pursuant to the Economic Espionage Act, civil claims have been left to state laws. The new bill would provide nationwide federal jurisdiction, while retaining the parallel state laws.

Trade secrets have become increasingly important at the same time they have become more vulnerable. Research in the US and Europe shows that trade secrets are the kind of IP most widely and universally relied upon by businesses. They are particularly important to small businesses. However, they can be stolen more easily than ever. Vital proprietary information that once would have resided in file cabinets and that would have taken days to copy now can be downloaded at the speed of light.

The DTSA is needed to improve the speed and efficiency of trade secret protection in the US. By some measures, as my own research for the OECD with my co-author Doug Lippoldt showed, the US has the strongest laws protecting trade secrets in the world. However, the multi-jurisdictional approach taken by the US presents a unique challenge to enforcing trade secrets quickly and efficiently. Investigating claims, conducting discovery, and enforcing claims in multiple states takes time. In an ordinary tort or contract case, such delays are usually manageable. In a trade secret case, even small delays can make the difference between rescuing a multi-million dollar secret and seeing its value destroyed utterly.

The proposed DTSA has enjoyed broad support from a coalition of large and small businesses. The bill has been largely uncontroversial, except among some legal academics. We have become accustomed to reflexive academic skepticism of improving IP rights, but some of the arguments against the DTSA have been truly puzzling.

The most puzzling academic argument against the bill is the claim that adding federal jurisdiction to trade secret enforcement will give rise to a new class of trade secret “troll.” It’s hard to see this claim as anything other than a mere rhetorical attempt to piggyback on the (largely specious) patent “troll” issue. According to research conducted for the European Commission, as well as widespread anecdotal evidence, firms routinely forego litigating trade secret claims for fear of revealing their proprietary information. It is thus hardly credible that they would expose their secrets in order to “troll,” especially merely because they now have easier access to federal courts.

Mr. Pooley’s paper explains the benefits of the DTSA while carefully refuting the “troll” myth and other arguments against the bill. The article includes a timely response to an academic letter released today expressing opposition to the DTSA.


Protecting Authors and Artists by Closing the Streaming Loophole

We’ve released a new policy brief, Protecting Authors and Artists by Closing the Streaming Loophole, by Devlin Hartline & Matthew Barblan.

They argue that in order to protect authors and artists from having their works repeatedly stolen on the internet, it is long past time to harmonize the remedies for criminal copyright infringement to reflect the ways that copyrighted works are commonly misappropriated these days.

We’ve included the Introduction below. To read the full policy brief, please click here.

Protecting Authors and Artists by Closing the Streaming Loophole

By Devlin Hartline & Matthew Barblan

Introduction

Copyright protects the property rights of authors and artists through both civil and criminal remedies for infringement. While the civil remedies are commonplace, the sections of the Copyright Act that specify which forms of infringement qualify as criminal offenses are less familiar. Unfortunately for authors and artists, the remedies for criminal infringement have not been updated to reflect the realities of how copyrighted works are frequently misappropriated these days. Streaming has become more popular than ever, yet the law treats bad actors who traffic in illicit streams much more kindly than those who traffic in illicit downloads. This results in a loophole that emboldens bad actors and makes it harder for authors and artists to protect their property rights.

Authors and artists deserve better. It shouldn’t matter whether the works are illegally streamed to users or offered for download. From the perspective of a creator whose property rights are being ripped off, the result is exactly the same—the works are supplied to the public without the creator’s permission. Congress has a long history of modernizing copyright law to account for ever-changing technologies. Now that the internet has advanced to where streaming is a dominant method of illicitly disseminating copyrighted works, the time has come to close the streaming loophole and to harmonize the remedies for criminal copyright infringement.


Overview of Comments on the USPTO's July 2015 Update to the Interim Examination Guidance

The following guest post from Robert R. Sachs, Partner at Fenwick & West LLP, first appeared on the Bilski Blog, and it is reposted here with permission.

By Robert R. Sachs

In late July, the USPTO issued its July 2015 Update to the 2014 Interim Section 101 Patent Eligibility Guidance (IEG). The July 2015 Update addresses a number of the issues and concerns raised in the public comments to the IEG and is supposed to assist examiners in applying the 2014 IEG during the patent examination process. The July 2015 Update also includes a new set of examples of claims involving abstract ideas and sample analysis under the Mayo framework. The USPTO is seeking public comments on the July 2015 Update, and comments are due on October 28, 2015, via email at 2014_interim_guidance@uspto.gov.

Here is an overview of what I think are the key issues and concerns with the July 2015 Update. Feel free to use any of my analysis in your comments to the USPTO.

1. Requirements of Prima Facie Case and the Role of Evidence

A significant number of the public comments on the 2014 IEG noted that examiners have the burden to make the prima facie case that a patent claim is ineligible, and that the Administrative Procedure Act (APA) and Federal Circuit case law require that this determination be made based on “substantial evidence,” and not examiner opinion. In particular, all of the public comments that addressed this issue stated that examiners should have to provide documentary evidence to support a conclusion that a claim is directed to a judicial exception or that claim limitations are well understood, routine, and conventional.

In the July 2015 Update, the USPTO responded by stating that whether a claim is ineligible is a question of law and courts do not rely on evidence to establish that a claim is directed to a judicial exception, and therefore examiners likewise do not need to rely on any evidence that a particular concept is abstract, or a fundamental economic concept, or even a law of nature. The USPTO’s reliance on the judicial model is legally incorrect. First, examiners are bound by the APA and judges are not. Second, that eligibility is a question of law does not mean that there are not factual issues, as well—it merely determines whether the court or a jury is to make the finding. Obviousness is likewise a question of law, but there are clearly factual issues involved. Third, when judges take judicial notice, they are making a finding of fact, and they must do so under the requirements of Federal Rules of Evidence, Rule 201, which states that “The court may judicially notice a fact that is not subject to reasonable dispute because it: … can be accurately and readily determined from sources whose accuracy cannot reasonably be questioned.” This requirement is similar to the requirements of Official Notice set forth in MPEP 2144.03: “Official notice unsupported by documentary evidence should only be taken by the examiner where the facts asserted to be well-known, or to be common knowledge in the art are capable of instant and unquestionable demonstration as being well-known.” Thus, by its own logic, examiners should comply with the requirements of MPEP 2144.03.

As to the role of evidence, the public comments that discussed this issue again all took the position that examiners must cite authoritative documentary evidence, such as textbooks or similar publications, to support a conclusion that a claim recites a judicial exception or that certain practices are well known, conventional, or routine. These comments all made the same argument: the Supreme Court in Bilski and Alice cited references in support of its conclusions that the claims were ineligible.

In response to this uniform opinion, the USPTO maintained its position that citation of references was not necessary because the references in Bilski and Alice were technically not “evidence,” since the Court is an appellate court, and further that the references were not necessarily prior art. This argument misses the point. Regardless of whether the references were evidence under the Federal Rules of Evidence, the Court felt it necessary and proper to cite them. Further, the Court did not cite references as prior art or suggest that they need to be prior art—rather, the Court cited the references as an authoritative basis to show that the claims were directed to longstanding, well-known concepts. That the Court did this not once, but twice, is strong guidance that the USPTO should follow suit.

Similarly, examiners should be instructed to accept and give substantial weight to documentary evidence submitted by applicants rebutting the examiner’s conclusions under either Step 2A or 2B of the Mayo framework. This includes declarations from the inventor or others showing that particular limitations are not considered judicial exceptions by a person of ordinary skill in the relevant technical or scientific community, or that claims limitations would be considered “significantly more” by such person, or that the claim limitations provide improvements to the art.

2. The Role of Preemption in the Mayo Framework

The majority of public comments stated that preemption is the core concern underlying the judicial exceptions to Section 101, and that the examiner should be required to establish that a claim preempts a judicial exception in order to find the claim ineligible. The USPTO again took an opposing view to this consensus interpretation, asserting that questions of preemption are inherently addressed in the two-part Mayo test. The USPTO also stated that “while a preemptive claim may be ineligible, the absence of complete preemption does not guarantee that a claim is eligible.” This has effectively eliminated arguments made by applicants that their claims were patent eligible because they did not preempt other practical applications of the judicial exception. Neither the Supreme Court nor the Federal Circuit has endorsed the concept that preemption does not matter given the Mayo framework. Instead, the courts continue to evaluate patent claims with respect to preemption even after the Mayo framework has been applied.

More significantly, the USPTO’s argument fails to address the more likely situation: that a claim blocks (preempts) only a narrow range of applications or implementations of the identified judicial exception. This is not merely a case of an absence of complete preemption; it is the absence of any significant degree of preemption at all. The Supreme Court recognized that preemption is a matter of degree and held that a claim is ineligible where there is a disproportionate risk that the judicial exception is fully preempted. In Alice, the Court stated:

The former [claims on fundamental building blocks] “would risk disproportionately tying up the use of the underlying” ideas, and are therefore ineligible for patent protection. The latter [claims with limitations that provide practical applications] “pose no comparable risk of pre-emption, and therefore remain eligible for the monopoly granted under our patent laws.” 134 S.Ct. at 2354 (emphasis added).

Since by definition a claim must preempt something, it is only where the scope of the claim covers the full scope of the judicial exception that the claim is rendered ineligible. Judge Lourie, whose explanation of the Mayo framework in CLS v. Alice was directly adopted by the Supreme Court, put it this way:

Rather, the animating concern is that claims should not be coextensive with a natural law, natural phenomenon, or abstract idea; a patent-eligible claim must include one or more substantive limitations that, in the words of the Supreme Court, add “significantly more” to the basic principle, with the result that the claim covers significantly less. See Mayo 132 S. Ct. at 1294. Thus, broad claims do not necessarily raise § 101 preemption concerns, and seemingly narrower claims are not necessarily exempt. What matters is whether a claim threatens to subsume the full scope of a fundamental concept, and when those concerns arise, we must look for meaningful limitations that prevent the claim as a whole from covering the concept’s every practical application.

Thus, both the Supreme Court and the Federal Circuit use preemption as the mechanism to evaluate whether a claim is eligible or not by applying it on both sides of the question: ineligible if preemptive, eligible if not preemptive. In addition, over 100 district court decisions since Alice have expressly considered whether the claims preempt, even after applying the Mayo framework. If the Mayo framework inherently addressed the preemption issue as the USPTO asserts, there would be no reason for the courts to address it. Finally, by removing preemption from the Mayo framework, the USPTO has turned the framework into the sole test for patent eligibility—directly contrary to the Supreme Court’s holding in Bilski that there is no one sole test for eligibility.

Lourie’s statement that a claim is patent eligible when it includes “substantive limitations…with the result that the claim covers significantly less” than the judicial exception provides a simple and expedient basis for using preemption as part of the streamlined analysis–something the USPTO has resisted in the July 2015 Update. Examiners are well trained to evaluate the scope of a claim based on its express limitations. Accordingly, they can typically determine for the majority of claims that, whatever the claim covers, it has limitations that prevent it from covering the full scope of some judicial exception. If the point of the streamlined analysis is to avoid the unnecessary burden of the Mayo framework, then a preemption analysis provides the best way to achieve that goal.

Finally, to suggest that the Mayo framework is precise enough to be a definitive test is to ignore the obvious: both steps of the framework are undefined. See McRO, Inc. v. Sega of America, Inc., No. 2:12-cv-10327, 2014 WL 4749601, at *5 (C.D. Cal. Sept. 22, 2014) (Wu, J.) (“[T]he two-step test may be more like a one step test evocative of Justice Stewart’s most famous phrase [‘I know it when I see it’].”). The Court refused to define the scope of abstract ideas in Alice (Step 2A), and Step 2B entails evaluating the subjective requirement of “significantly more” or “enough.” What is left, then, is analysis by analogy and example—and both common sense and life experience tell us that these approaches very often lead to mistakes. Analogies can be good or bad, and most examples can be argued either way. Preemption serves as a way of evaluating whether the outcome from such analysis is consistent with the underlying rationale for the judicial exceptions in the first place.

3. Abstract Ideas Must be Prevalent and Longstanding in the Relevant Community

The majority of public comments on the IEG argued that to establish that an idea is abstract, an examiner must show that the idea is “fundamental” in the sense of being “long-standing” and “prevalent,” following the statements of the Supreme Court. Various commentators suggested specific rules for examiners, such as evidence that the idea has been known and used in practice for a period of 25 or more years. Even those who supported a restrictive view of patent eligibility suggested that examiners should look to “basic textbooks” to identify abstract ideas.

The USPTO responded in the July 2015 Update by asserting that abstract ideas need not be prevalent and longstanding to be fundamental, arguing that even novel abstract ideas are ineligible: “examiners should keep in mind that judicial exceptions need not be old or long-prevalent, and that even newly discovered judicial exceptions are still exceptions.” The USPTO stated that “[t]he term ‘fundamental’ is used in the sense of being foundational or basic.” This analysis begs the question. An idea is foundational or basic because it is widely accepted and adopted in the relevant community—it is fundamental to the practices of the community. Indeed, any textbook on the “foundations” of a particular scientific field would explain the principles and concepts that are long-standing and widely accepted by scientists in that field. It would not be a significant burden on the examiner to cite such publications to support a finding under Step 2A. Conversely, the inability of an examiner to do so would be strong evidence that a claim is not directed to a foundational or basic practice.

4. USPTO Reliance on Non-Precedential Federal Circuit Decisions

Public comments noted that the 2014 IEG included citations and discussions of non-precedential Federal Circuit cases, such as Planet Bingo, LLC v VKGS LLC, and SmartGene, Inc. v Advanced Biological Labs, and indicated that because the cases are non-precedential, they should not be cited and relied upon by the USPTO as the basis of its guidance to examiners. Further, it was pointed out that the 2014 IEG mischaracterizes the abstract ideas at issue in these cases.

For example, the USPTO characterizes SmartGene as holding that “comparing new and stored information and using rules to identify options” is an abstract idea. The Federal Circuit’s actual holding was much more specific: that “the claim at issue here involves a mental process excluded from section 101: the mental steps of comparing new and stored information and using rules to identify medical options.” The court itself unambiguously limited the scope of its decision: “[o]ur ruling is limited to the circumstances presented here, in which every step is a familiar part of the conscious process that doctors can and do perform in their heads.” Thus, the USPTO’s characterization removed key aspects of the court’s expressly limited holding: that the comparing steps were inherently mental steps (not computer steps) performed by a doctor considering medical rules (not any type of rules) to evaluate medical options (not other types of options). The court’s ruling cannot be generalized to all types of comparisons on all types of information using all types of rules. The improper generalization of the court’s holding has resulted in examiners applying SmartGene to find many claims for computer-implemented inventions ineligible. This is because many, if not most, computer processes can be characterized as comparing stored and new information and applying a decision rule to produce a useful result. For example, most automobiles use computers and embedded software to monitor vehicle sensors and take actions. A typical fuel management computer compares a current measure of fuel (new value) with a predefined minimum amount of fuel (stored information) and determines whether to turn on a low fuel light (using rules to identify options). Under the USPTO’s characterization of SmartGene, a claim to such a process would be deemed an abstract idea, an obviously incorrect outcome.
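To see how broadly the “comparing new and stored information and using rules” characterization sweeps, consider a minimal sketch of the fuel-light example above. The function name and threshold value are invented for illustration; this is not code from any actual vehicle system:

```python
# Hypothetical fuel-warning logic: compare a new sensor reading
# against stored information and apply a rule to pick an action --
# the very pattern the USPTO's generalization of SmartGene
# would label an abstract idea.

LOW_FUEL_THRESHOLD_LITERS = 5.0  # stored information (predefined minimum)

def should_light_low_fuel(current_fuel_liters: float) -> bool:
    """Compare the new reading with the stored threshold and
    use a rule to identify the option (light on or off)."""
    return current_fuel_liters < LOW_FUEL_THRESHOLD_LITERS

print(should_light_low_fuel(3.2))   # True: below threshold, light on
print(should_light_low_fuel(20.0))  # False: enough fuel, light off
```

Virtually every embedded control routine reduces to this compare-and-decide shape, which is why the over-generalized reading of SmartGene proves far too much.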

The USPTO did not address any of the problems identified by the public comments regarding non-precedential cases. Instead, the July 2015 Update simply states that the “2014 IEG instructs examiners to refer to the body of case law precedent in order to identify abstract ideas by way of comparison to concepts already found to be abstract,” and makes multiple other references to precedent. Even so, the July 2015 Update relies repeatedly on non-precedential Federal Circuit decisions, such as DietGoal Innovations LLC v. Bravo Media LLC, Fuzzysharp Technologies Inc. v. Intel Corporation, Federal Home Loan Mortgage Corp. aka Freddie Mac v. Graff/Ross Holdings LLP, Gametek LLC v. Zynga, Inc., PerkinElmer, Inc. v. Intema Limited, and Cyberfone Systems, LLC v. CNN Interactive Group, Inc.

The USPTO should eliminate any discussion of or reliance upon non-precedential decisions. In the alternative, the USPTO should at minimum explain to examiners that such decisions are limited to their specific facts and are not to be generalized into controlling examples or rules.

5. There is No Separate Category for Methods of Organizing Human Activity

Public comments to the 2014 IEG pointed out various issues with the category of “methods of organizing human activities” as a basis of abstract ideas, and in particular requested clarification as to which types of methods would fall within the category. Here too there was broad agreement among the commentators as to the proper interpretation of Bilski and Alice: the Court found that the claims in Alice and Bilski were abstract ideas because they were directed to a fundamental economic practice, not because the claims were methods of organizing human activity. The Court noted that Bilski’s claims were methods of organizing human activity only to rebut Alice’s argument that abstract ideas must always be “fundamental truths.” The Court’s analysis does not logically imply that methods of organizing human activity are inherently abstract ideas.

The USPTO responded by broadly interpreting the scope of the category, stating that many different kinds of methods of organizing human activity can also be abstract ideas, but providing no explanation (other than examples) of when this is the case and when it is not. The USPTO then mapped various Federal Circuit cases into this category, even where the court itself did not expressly rely upon such categorization. For example, the USPTO listed buySAFE, DealerTrack, Bancorp, Planet Bingo, Gametek, and Accenture as examples of cases dealing with methods of organizing human activity. However, none of these cases actually held that the methods in suit were methods of organizing human activity. Instead, every single one of them held that the claims were abstract as either mental steps or fundamental economic practices. Attempting to map Federal Circuit cases into this category is unnecessary, and it confuses both examiners and the public.

The USPTO should remove this category from the Guidance until such time as the Federal Circuit or the Supreme Court provides a clear definition of its bounds.

6. There is No Separate Category for “An Idea of Itself”

Public comments noted that this is a catch-all category that the courts have mentioned in passing without ever defining its contours, and suggested that the USPTO clarify that it is not a distinct category of abstract ideas.

In response, once again the USPTO broadly described the category and linked various Federal Circuit cases to it as examples, even though the court itself never so characterized the inventions. The USPTO lists in this category cases that the court held ineligible under other categories, such as mental steps (CyberSource, SmartGene*, Classen*, PerkinElmer*, Ambry, Myriad CAFC*, Content Extraction); mathematical algorithms (In re Grams, Digitech); and economic activities (Ultramercial) (* indicates a non-precedential decision). In fact, no precedential Federal Circuit or Supreme Court case has defined “an idea of itself” as a distinct category. It is only mentioned in dicta, never in a holding.

The result of the USPTO’s categorization of cases into multiple undefined categories is to make it more difficult, not easier, for examiners to properly determine which types of claims fall within which category. Further, where an examiner asserts that a claim falls into multiple categories (an easy assertion to make, since most inventions deal with multiple different concepts), the applicant is forced to rebut each categorization.

7. “Mathematical Algorithms” Are Limited to Solutions to Problems in Pure Mathematics

This category, more than any other, reflects the USPTO’s failure to substantively and meaningfully analyze the issues and provide clear guidance. Public comments to the 2014 IEG provided extensive analysis of the case law and the problems arising from mathematical algorithms being considered abstract ideas. The USPTO did not respond to the substantive analysis at all. Instead, the July 2015 Update merely lists cases that have held claims invalid as mathematical algorithms, without explanation. This is inadequate for several reasons.

First, the USPTO must clarify that the presence of a mathematical algorithm in the specification or claims is not a per se indication that the claims are directed to an abstract idea. In Alice, the Court expressly stated that “[o]ne of the claims in Bilski reduced hedging to a mathematical formula, but the Court did not assign any special significance to that fact, much less the sort of talismanic significance petitioner claims.” Equally, examiners must not assign any special significance to the presence of a mathematical formula, whether in the disclosure or in the claims. What matters is the underlying concept, not how it is expressed, whether in words or in mathematical symbols.

Second, the presence of a mathematical formula or equation does not make an invention abstract for a very simple reason: mathematics is a language that allows for the very precise and formal description of certain types of ideas. All modern engineering, including civil, mechanical, electrical, chemical, and computer engineering, as well as all of the physical sciences, relies on mathematical analysis for design and formulation. Using a mathematical equation is simply one—albeit highly precise—way of expressing concepts, which may be either patent-eligible or not. Thus, the presence of a mathematical equation does not by itself imply or suggest anything about the underlying concept, and should not be relied upon by examiners as automatic evidence of an ineligible abstract idea. While mathematics may be used to describe abstract ideas like the laws of mathematics, it can equally be used to describe entirely mundane and non-abstract ideas like fuel-efficient aircraft approach procedures (U.S. Patent No. 8,442,707), compressing video for transmission on cell phones (U.S. Patent No. 8,494,051), efficiently allocating farming resources (U.S. Patent No. 6,990,459), or calculating golf handicaps and the difficulty of golf courses (U.S. Patent No. 8,282,455).

Properly interpreted, the “mathematical algorithms” referred to by the Supreme Court are algorithms that are solutions to inherently mathematical problems. This is the specific definition the Supreme Court used in Benson and confirmed in Diehr. In Benson, the Court stated:

A procedure for solving a given type of mathematical problem is known as an “algorithm.” The procedures set forth in the present claims are of that kind; that is to say, they are a generalized formulation for programs to solve mathematical problems of converting one form of numerical representation to another.

Later, in Diehr, the Court stated that in Benson “we defined ‘algorithm’ as a ‘procedure for solving a given type of mathematical problem,’” noting that “our previous decisions regarding the patentability of ‘algorithms’ are necessarily limited to the more narrow definition employed by the Court.” The Court expressly rejected a broader definition that covered any “sequence of formulas and/or algebraic/logical steps to calculate or determine a given task; processing rules.”

The USPTO should clarify that this more limited definition of mathematical algorithms is to be used. This approach beneficially distinguishes inventions in pure mathematics (which, as the Court stated, pose a disproportionate risk of preemption precisely because they can be used in an unlimited number of different fields) from inventions in applied mathematics, the mathematics used in engineering and the physical sciences. Examiners, by their formal scientific and technical training, are well-accustomed to distinguishing between claims to these two types of inventions.

8. Identifying Whether a Claim Limitation Recites a Conventional, Routine, and Well-Understood Function of a Computer

The public comments to the 2014 IEG discussed the problems resulting from considering the normal operations of a computer to be merely “generic” functions that are conventional, well-understood, and routine, and therefore by definition insufficient to support eligibility of a patent claim.

In response, the USPTO again ignored the substantive arguments, instead simply stating that examiners may rely on what the courts have recognized as “well understood, routine, and conventional functions” of computers, including “performing repetitive calculations,” “receiving, processing, and storing data,” and “receiving or transmitting data over a network.” The July 2015 Update goes on to state that “This listing is not meant to imply that all computer functions are well‐understood, routine and conventional.”

This caveat is hardly sufficient, since the list essentially wipes out all computing operations as they are typically claimed. Just as claims for mechanical processes use verbs and gerunds that describe well-known mechanical operations, so too do claims for computer-based inventions necessarily describe the operations of a computer: receive, transmit, store, retrieve, determine, compare, process, and so forth. There is no other way to claim the operations of a computer except to use such terminology.

Accordingly, since the Supreme Court did not hold that all software and computer-implemented inventions are per se ineligible, the proper interpretation of the Court’s discussion of the generic functions of a computer is more narrowly focused. Specifically, it is necessary to consider the entirety of each claim limitation, not merely the gerund or verb that introduces a method step, and to ask whether the limitation as a whole recites nothing more than generic functions. When considering computer processing steps performed on computer data, limitations as to the source of the data, the types of data, the operations performed on the data, how the output is generated, and where the data is stored or transmitted must all be considered. It is these limitations that distinguish merely generic operations (“receiving a data input and determining an output”) from particular applications.

Categories
Copyright History of Intellectual Property Innovation Inventors Trade Secrets Trademarks Uncategorized

Strong IP Protection Provides Inventors and Creators the Economic Freedom to Create

Here’s a brief excerpt of a post by Terrica Carrington that was published on IPWatchdog.

CPIP went against the grain with this conference, and showed us, bit by bit, what our world might look like today without intellectual property rights. Music wouldn’t sound the same. Movies wouldn’t look the same. You wouldn’t be reading this on your smartphone or have access to the cutting-edge biopharma and healthcare products that you rely on. And some of our greatest artists and inventors might be so busy trying to make ends meet that they would never create the amazing artistic works and inventions that we all enjoy. In short, CPIP explored how intellectual property rights work together as a platform that enables us to innovate, share, and collaborate across industries to develop incredible new products and services at an astounding rate.

To read the rest of this post, please visit IPWatchdog.

Categories
Antitrust Commercialization DOJ High Tech Industry Innovation Inventors Patent Law Patent Licensing Uncategorized

Busting Smartphone Patent Licensing Myths

closeup of a circuit boardCPIP has released a new policy brief, Busting Smartphone Patent Licensing Myths, by Keith Mallinson, Founder of WiseHarbor. Mr. Mallinson is an expert with 25 years of experience in the wired and wireless telecommunications, media, and entertainment markets.

Mr. Mallinson discusses several common myths concerning smartphone patent licensing and argues that antitrust interventions and SSO policy changes based on these myths may have the unintended consequence of pushing patent owners away from open and collaborative patent licensing. He concludes that depriving patentees of licensing income based on these myths will remove incentives to invest and take risks in developing new technologies.

We’ve included the Executive Summary below. To read the full policy brief, please click here.

Busting Smartphone Patent Licensing Myths

By Keith Mallinson

Executive Summary

Smartphones are an outstanding success for hundreds of handset manufacturers and mobile operators, with rapid and broad adoption by billions of consumers worldwide. Major innovations for these—including standard-essential technologies developed at great expense and risk primarily by a small number of companies—have been shared openly and extensively through standard-setting organizations and commitments to license essential patents on “fair, reasonable, and non-discriminatory terms.”

Despite this success, manufacturers seeking to severely reduce what they must pay for the technologies that make their products possible have widely promoted several falsehoods about licensing in the cellular industry. Unsubstantiated by facts, these myths are being used to justify interventions in intellectual property (IP) markets by antitrust authorities, as well as changes to patent policies in standard-setting organizations. This paper identifies and dispels some of the most egregious and widespread myths about smartphone patent licensing:

Myth 1: Licensing royalties should be based on the smallest saleable patent practicing unit (SSPPU) implementing the patented technology, and not on the handset. The SSPPU concept is completely inapplicable in the real world of licensing negotiations involving portfolios that may have thousands of patents reading on various components, combinations of components, entire devices, and networks. In the cellular industry, negotiated license agreements almost invariably calculate royalties as a percentage of handset sales prices. The SSPPU approach would not only be impractical given the size and scope of those portfolios, but would also fail to properly reflect the utility and value that high-speed cellular connectivity brings to every feature of a cellular handset.
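The arithmetic behind this dispute is worth making concrete. The following Python sketch uses purely hypothetical prices and a hypothetical 2% rate (none of these figures come from the policy brief or from any actual license) to show how dramatically the choice of royalty base changes the payment per device:

```python
# Hypothetical illustration of why the royalty base matters. The prices
# and the 2% rate below are invented for illustration only; they are not
# figures from the policy brief or from any actual license.

HANDSET_PRICE = 400.00  # assumed handset selling price (USD)
CHIP_PRICE = 15.00      # assumed price of the component cast as the "SSPPU"
ROYALTY_RATE = 0.02     # assumed 2% royalty rate

handset_based = ROYALTY_RATE * HANDSET_PRICE  # base = whole device
sspu_based = ROYALTY_RATE * CHIP_PRICE        # base = single component

print(f"Handset-based royalty: ${handset_based:.2f} per device")
print(f"SSPPU-based royalty:   ${sspu_based:.2f} per device")
```

Under these assumptions the same nominal rate yields $8.00 versus $0.30 per device, which is why the choice of base, not just the rate, sits at the center of the debate.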

Myth 2: Licensing fees are an unfair tax on the wireless industry. License fees relate to the creation—not arbitrary subtraction—of value in the cellular industry. They are payments for use of essential patented technologies, developed at significant cost by others, when an implementer chooses to produce products made possible by those technologies. The revenue generated by those license fees encourages innovation, and is directly related to the use of the patented technologies.

Myth 3: Licensing fees and cross-licensing diminish licensee profits and impede them from investing in their own research and development (R&D). Profits among manufacturers are determined by competition among them, including differences in pricing power and costs. Core-technology royalty fees, which are charged on a non-discriminatory basis and are payable by all implementers, are not the cause of low profitability by some manufacturers while others are very profitable. Cross-licensing is widespread: It provides in-kind consideration, which reduces patent-licensing costs and incentivizes R&D.

Myth 4: Fixed royalty rates ignore the decreasing value of portfolio licenses as patents expire. Portfolio licensing is the norm because it is convenient and cost efficient for licensor and licensee alike. All parties know the composition of the portfolio will change as some patents expire and new patents are added. Indeed, this myth is particularly fanciful given that the number of new patents issued greatly exceeds the number that expires for the major patentees. In fact, each succeeding generation of cellular technology has represented and will continue to represent a far greater investment in the development of IP than the prior generation.

Myth 5: Royalty charges should be capped so they do not exceed figures such as 10% of the handset price or even well under $1 per device. There is no basis for arbitrary royalty caps. It is not unusual for the value of IP to predominate as a proportion of total selling prices, as it does with books, CDs, DVDs, and computer programs. Market forces, not arbitrary benchmarks wished for or demanded by vested interests that reflect neither the costs, the business risks, nor the values involved, should be left to determine how costs and financial rewards are allocated in the smartphone industry.

Categories
Commercialization Innovation Inventors Patent Licensing Uncategorized

Google’s Patent Starter Program: What it Really Means for Startups

The following guest post comes from Brad Sheafe, Chief Intellectual Property Officer at Dominion Harbor Group, LLC.

By Brad Sheafe

Recalling its rags-to-riches story of two guys with nothing but a great idea, a garage, and a hope of making the world a better place, Google recently announced its new Patent Starter Program. As part of its commitment to the culture from which it came, Google claims that it simply wants to help startups navigate the patent landscape by assigning them certain patents while it receives a license back. It describes the situation as follows:

The world of patents can be very confusing, cumbersome and often distracting for startups. All too often these days, the first time a startup has to deal with a patent issue is when a patent troll attacks them. Or when a prospective investor may ask them how they are protecting their ideas (“You don’t have any patents???”). These problems are the impetus behind the Patent Starter Program[.]

There are of course many tendentious assertions here – from the well-established definitional problems with the use of the pejorative term “patent troll,” which is often used to attack startups, to the false claim that patents are “distracting” for startups (as any person who watches Shark Tank knows). But we will not go over this well-trod territory here. For our purposes, this statement is notable because it is couched entirely in terms of a desire to help other tech startups. But when one looks at the specific details of the Patent Starter Program (PSP), it is quite clear that the program is designed to benefit Google as well – perhaps most of all.

On its face, the PSP is advertised as an opportunity for the first 50 eligible participants (“startups or developers having 2014 Revenues between US $500,000 and US $20,000,000”) to select two patent families from an offering of three to five families of Google’s choosing. These families are intended to be broadly relevant to the participant’s business, but Google makes no guarantee that they will be, and there is no “re-do” if the participant doesn’t like what Google offers the first time.

In exchange for access to these patents, participants accept fine print that creates significant contractual restrictions on anyone who uses the PSP – fine print to which many are not paying attention. First and foremost, the patents cannot be used to initiate a lawsuit for infringement. They can be used only “defensively,” that is, if the participant is sued for infringement first. In fact, if a participant does choose to assert the supposedly-owned patent rights outside of Google’s terms, the Patent Purchase Agreement punishes the startup by requiring “additional payments” to be made to Google.

The boilerplate text of the Agreement states that this additional payment will be $1 million or more! Although specific payments may end up varying from this based on the negotiating tactics of the startups who make use of the PSP, the punitive nature of this payment is clear. For an undercapitalized startup that is just starting out in the marketplace and perhaps still living on the life support provided by venture capitalists, a $1+ million payment is a monumental charge to write down. This is especially the case if the startup is simply exercising a valid legal right that is integral to all property ownership – the right to keep others from trespassing on one’s property.

Additionally, participants in the PSP must also join the LOT Network (LOT stands for “License on Transfer”), which presents itself as a cross-licensing network committed to reducing the alleged “PAE problem.” Members of the LOT Network must “grant a portfolio-wide license to the other participants” in the LOT Network, but “the license becomes effective ONLY when the participant transfers one or more patents to an entity other than another LOT Network participant, and ONLY for the patent(s) actually transferred.”

On its face, this might still seem a reasonable concession for the “free” acquisition of some of Google’s patents. But the fine print makes it clear that there are additional burdens agreed to by the startup. First, the LOT Network agreement includes all of the participant’s patents, and not just those it acquires from Google. Second, even if one decides later to withdraw from the LOT Network, the agreement explicitly states that all of the patents owned by the participant at the time of withdrawal will continue to remain subject to the terms of the LOT agreement. The LOT Network thus operates in much the same way Don Corleone viewed membership in the “family” – people are welcome in on certain non-negotiable terms, and good luck ever getting out.

These all add up to incredibly onerous and surprising restrictions on startups, which often need flexibility in the marketplace to adapt their business models. But as the old, late-night television commercials used to say, “But wait, there’s more!” If the terms and conditions of the LOT Network seem highly limiting on the rights associated with patent ownership and overly broad in terms of who gets a license to the participant’s patents, there’s an even greater surprise in the license-back provisions of Google’s Patent Purchase Agreement. Once one wades through the legalese, it becomes clear that while a participant in the PSP and LOT Network nominally owns the patents granted by Google, these patents are effectively licensed to everyone doing anything.

There is substantial legalese here that is clearly “very confusing, cumbersome and . . . distracting for startups,” the very charge leveled by Google against the patent system as the justification for the PSP and LOT Network. We’ll break it all down in a moment, but here’s the contractual language that creates this veritable universal license. The agreement gives Google, its “Affiliates” (defined to include any “future Affiliates, successors and assigns”), and its “Partners” (defined as “all agents, advisors, attorneys, representatives, suppliers, distributors, customers, advertisers, and users of [Google] and/or [Google] Affiliates”) a license to the patents Google grants to the participant if the participant were ever to allege infringement by any of these partners through their use of any of Google’s “Products” (defined as “…all former, current and future products, including but not limited to services, components, hardware, software, websites, processes, machines, manufactures, and any combinations and components thereof, of [Google] or any [Google] Affiliates that are designed, developed, sold, licensed, or made, in whole or substantial part, by or on behalf of that entity”).

So let’s review: A startup can acquire some patents from Google, but only from the handful of patents that Google itself picks out (which may or may not relate to the participant’s business). The startup must agree to incredibly broad license-back provisions and promise not to assert any ownership rights (unless the participant gets sued first) on penalty of a $1+ million payment to Google. And the startup is bound to join the LOT Network, where Google execs sit on the Board of Directors, which further reduces the rights not only in the patents granted by Google, but in the startup’s entire portfolio of patents, including, most importantly, patents not acquired from Google.

To be fair, Google is far from the only large corporation to take advantage of its size and financial strength to mold public perception, markets, and even government policy to its liking. Some might even turn a blind eye, calling it “good business” and accepting such behavior as the price we all must pay for the products and services that established corporations like Google offer. To some extent, there is some truth in this – most of us use Google’s services every day and many of us working in the innovation industries continue to be impressed with its innovative approach to those services and its products.

When it comes to the underpinnings of the innovation economy – the startups that drive economic growth and the patent system that provides startups with legal and financial security against established market incumbents (again, as any episode of Shark Tank makes clear) – the restrictive contractual conditions in the PSP and LOT Network give one pause. After all, Google began as a startup relying on fully-licensable IP, despite the fact that Google apparently wants us all to forget about its founding page-rank patent (Patent No. 6,285,999, filed on January 9, 1998). One will search in vain on Google’s corporate history website, for instance, for evidence of Larry Page’s patent. Yet it’s well-established that Google touted this “patent-pending” search technology when it announced in 1999 that it had received critical venture-capital funding.

The next Google is out there, counting on the same patent rights to be in place for it to rely upon just as Google did in the late 1990s. Instead of making every effort to collapse the very structure on which its success was built, shouldn’t Google be the first to defend it? Competition will always be the greatest motivator for those who have what it takes to compete – and with its balance sheet and world-renowned collection of bright, inventive minds, Google should not be afraid of competition. Nor, worse, should it give the appearance of promoting competition and then use that appearance to dupe potentially competitive startups into emasculating the intellectual property those startups need to actually compete.

So, if Google and its far-flung business partners in the high-tech sector want to support startups on terms that are reasonable for both the startup and Google given their relative positions, there is certainly nothing wrong with this. But, Google shouldn’t hide behind the bugaboos of “patent trolls” and the supposed “complexity” of a patent system designed to benefit small innovators in order to drive a largely one-sided partnership while hiding behind confounding legalese that certainly does not match its feel-good rhetoric to startups, to Congress, or to the public.

If an established company wants to support innovation by providing worthy startups with the stepping stones they need for success, then go for it! Everyone should be 100% behind that concept – but that is not what Google’s PSP or the LOT Network represent. These aren’t stepping stones to successful innovation, but rather they are deliberately fashioned and enticingly placed paving stones that lead to the shackling of startups with terms and covenants that give the appearance of ownership but strip away the very rights that make that ownership meaningful – and all the while Google benefits both from the relationship and the public perception of munificence. When one is using someone else’s idea, one should compensate them for it, and the nature of the license and the compensation should certainly match what one is saying publicly about this agreement.

All we can ask, Google, is that you treat others as you were treated in the past as a startup – and that now, approximately fifteen years later as a market incumbent, you just, well, Don’t Be Evil.

Categories
Innovation Legislation Patent Law Uncategorized

Will Increasing the Term of Data Exclusivity for Biologic Drugs in the TPP Reduce Access to Medicines?

The following guest post comes from Philip Stevens, Director of the Geneva Network, a research and advocacy organization working on international health, trade, and intellectual property issues. The original research note can be found here.

By Philip Stevens

scientist looking through a microscopeIn the Trans-Pacific Partnership (TPP) negotiations, the U.S. and Japan have proposed that TPP partners increase their period of regulatory data protection (RDP) for biologic medicines to align with practice in other countries. These proposals have been strongly opposed by a number of academics, who claim that such a move would significantly increase public spending on medicines, thereby potentially limiting access.[1], [2]

Past experiences in Canada and Japan, which lengthened their respective terms of RDP some years ago, however, indicate that these fears of budget increases are unlikely to materialise.

Canada and Japan increased RDP[i] substantially but did not experience increases in expenditures for medicines

Like several TPP countries, the governments of Canada and Japan have national health insurance systems, and cover most health care costs, including medicines. Unlike other TPP countries, Canada and Japan have in the past decade adopted substantially longer terms of RDP. Their experiences, captured in the data provided below, show that expenditures on medicines did not change appreciably from previous trends.

In 2006, Canada changed its regulations in a way that effectively increased its RDP term from 0 years to 8 years.[ii] As shown in Figure 1 (based on 2014 OECD data[iii]), pharmaceutical spending as a percentage of total health spending has actually decreased since then.

Figure 1: Pharmaceutical expenditure as a percentage of Canada’s healthcare expenditure (2005-2011)

Canada - OECD Health Data 2014. Pharmaceutical spending as % of total health spending. 2005: 17.2; 2006: 17.4 (RDP increased); 2007: 17.2; 2008: 17.0; 2009: 17.0; 2010: 16.6; 2011: 17.1.
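The trend in Figure 1 can be checked directly. This short Python sketch re-keys the percentages quoted above and compares the 2005 (pre-increase) share with the average share over 2007-2011:

```python
# Canada: pharmaceutical spending as a percentage of total health spending,
# re-keyed from the OECD Health Data 2014 figures quoted in Figure 1.
pharma_share = {
    2005: 17.2, 2006: 17.4,  # RDP term increased in 2006
    2007: 17.2, 2008: 17.0, 2009: 17.0, 2010: 16.6, 2011: 17.1,
}

pre_increase = pharma_share[2005]
post_avg = sum(pharma_share[y] for y in range(2007, 2012)) / 5

print(f"2005 share:        {pre_increase:.1f}%")
print(f"2007-2011 average: {post_avg:.2f}%")
```

The post-2006 average (about 17.0%) sits slightly below the 2005 level, consistent with the claim that the longer RDP term produced no visible jump in the pharmaceutical share of health spending.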

As indicated in Figure 2 below, over the same period (2005-2011) pharmaceutical expenditure as a percentage of GDP (blue bars) remained relatively stable after RDP was increased in Canada in 2006, whereas overall health spending as a percentage of GDP in Canada has gradually increased (red bars).

Figure 2: Health and pharmaceutical expenditure as a percentage of Canada’s GDP (2005-2011)

Canada - OECD Health Data 2013. Red = health spending as % of GDP; Blue = pharmaceutical spending as % of GDP. 2005: Red 9.8, Blue 1.69; 2006: Red 10.1, Blue 1.73; 2007: Red 10.0, Blue 1.73; 2008: Red 10.3, Blue 1.74; 2009: Red 11.4, Blue 1.93; 2010: Red 11.4, Blue 1.89; 2011: Red 11.2, Blue 1.86.

Similarly, Japan increased data protection in 2007 from 6 to 8 years (effectively 9 years).[iv] As indicated by Figure 3, fluctuations in pharmaceutical expenditures since that time have been in line with growth in health care spending as a percentage of GDP. In fact, in 2010 pharmaceutical spending decreased in a year in which health care spending increased.

Figure 3: Pharmaceutical expenditure as a percentage of Japan’s health care expenditure (2005-2010)

Japan - OECD Health Data 2014. Pharmaceutical spending as % of total health spending. 2005: 19.7; 2006: 19.5; 2007: 19.9 (RDP increased); 2008: 19.7; 2009: 20.7; 2010: 20.3; 2011: 20.8.

Figure 4 shows that the gradual increases in pharmaceutical expenditure as a percentage of GDP in Japan between 2005 and 2010 (blue bars) was in line with the overall increase in health spending as a percentage of GDP in Japan over the same period (red bars).

Figure 4: Health and pharmaceutical expenditure as a percentage of Japan’s GDP (2005-2010)

Japan - OECD Health Data 2013. Red = health spending as % of GDP; Blue = pharmaceutical spending as % of GDP. 2005: Red 8.2, Blue 1.62; 2006: Red 8.2, Blue 1.60; 2007: Red 8.2 (RDP increased), Blue 1.63; 2008: Red 8.6, Blue 1.70; 2009: Red 9.5, Blue 1.97; 2010: Red 11.4, Blue 1.89; 2011: Red 9.6, Blue 1.94.

Conclusion

The past experiences of Canada and Japan described above indicate that increases in RDP terms do not result in meaningful increases in health care expenditures or expenditures on medicines relative to overall health care spending. There could be many explanations for this result, ranging from changes in procurement policies, to increases in the number of medicines whose patent terms have expired. The evidence presented above, however, suggests that those concerned about access to medicines and the financial sustainability of public healthcare systems should focus their attention on policies other than Regulatory Data Protection for medicines.


[1] Moir et al, (2014) “Proposals for extending data protection for biologics in the TPPA: Potential consequences for Australia”, Submission to the Department of Foreign Affairs and Trade, available at http://dfat.gov.au/trade/agreements/tpp/submissions/Documents/tpp_sub_gleeson_lopert_moir.pdf

[2] Gleeson, D, Lopert, R, and Reid, P, (2013), “How the Trans Pacific Partnership Agreement could undermine PHARMAC and threaten access to affordable medicines and health equity in New Zealand”, Health Policy, 116:2-3

[i] Japan has a “post marketing surveillance system” which we consider a surrogate for RDP and use the term RDP in this paper to include Japan’s approach.

[ii] Canada’s 5-year data protection term was made ineffective by a 1998 Federal Court interpretation of regulations. Bayer Inc. v. Canada (Attorney General), 84 C.P.R. (3d) 129, aff’d 87 C.P.R. (3d) 293, leave to appeal to SCC refused, [1999] S.C.C.A. No. 386. The Federal Court held that RDP protection in Canada was not triggered if a generic applicant could demonstrate bioequivalence without requiring the Health Minister to consult the data submitted by the innovative company. Because that was a common occurrence, RDP rarely applied under the pre-2006 regulations.

[iii] 2013/14 OECD data on Canada and Japan is found at: http://www.oecd-ilibrary.org/social-issues-migration-health/health-key-tables-from-oecd_20758480;jsessionid=k26q30wbgljb.x-oecd-live-02.

[iv] Japan’s system prevents filing applications for follow-on approval for eight years after the innovator’s approval. An additional year after that is required for the regulatory approval process to conclude.

Categories
Commercialization Copyright Copyright Licensing History of Intellectual Property Innovation Internet Legislation Uncategorized

Making Copyright Work for Creative Upstarts

The following post is by CPIP Research Associate Matt McIntee, a rising 2L at George Mason University School of Law. McIntee reviews a paper from CPIP’s 2014 Fall Conference, Common Ground: How Intellectual Property Unites Creators and Innovators.

By Matt McIntee

cameraIn Making Copyright Work for Creative Upstarts, recently published in the George Mason Law Review, Professor Sean Pager demonstrates how the current copyright system can be improved to better support creative upstarts. Pager defines “creative upstarts” to include “independent creators and producers who (a) are commercially-motivated; (b) operate largely outside the rubric of the mainstream commercial content industries; and (c) therefore lack the kind of copyright-related knowledge, resources, and capabilities that mainstream players take for granted.” Though these upstarts depend on their copyrights to make a living, they often find it difficult to effectively navigate the copyright system.

Pager explains how the copyright system generally benefits sophisticated users. For example, the Copyright Act contains hyper-technical language that can be difficult for inexperienced users to parse. Pager navigates this strikingly complex legal regime and finds ample opportunities to afford better copyright protection to creative upstarts without diluting the copyrights held by others. He explains how the copyright system was designed without the interests of creative upstarts in mind, and he offers several proposals geared toward protecting those interests.

One of Pager’s proposals is to lower copyright registration costs, which can deter creative upstarts from registering their works. He notes that registration, obtaining accurate copyright information, and clearing copyrights are among the chief costs of securing copyright protection. A $35 registration fee may seem insignificant given the benefits that come with it, but these fees add up quickly for creative upstarts who generate large volumes of works. Graphic artists, for example, typically create many original works to build their portfolios, and the cumulative registration costs could be prohibitive.

Pager also notes that the Copyright Office’s searchable database increases costs for creative upstarts by consuming valuable time. The database is supposed to be complete and well catalogued so that anyone can easily find accurate copyright information, but unfortunately this is not always the case. As a result, many creative upstarts must spend precious time sifting through incomplete records and clearing copyrights instead of creating.

Tracing the history of the current regime, Pager explains how the copyright system assumes that artists seeking copyright protection have ample resources, such as lawyers, production facilities, manufacturers, and money. When the system was designed, policymakers structured it to support a “capital intensive process” that required significant investment and risk. But as Pager notes, the industry has shifted, and creative upstarts now form the bulk of content creators. A copyright system designed for artists recording on 8-track tapes is no longer appropriate in the digital age.

Pager offers a number of incremental steps to reform copyright law with the goal of making it more favorable to creative upstarts while still protecting the other players in the field. Though he acknowledges that there is no “magic bullet” solution, Pager argues that “improvements must come through a combination of substantive, procedural, and institutional reforms that yield incremental improvements across the entire copyright system.” And with such a comprehensive approach, he notes that certain tradeoffs will have to be made.

Substantively, Pager discusses how reducing systemic complexity is “deceptively simple.” While replacing “fuzzy standards with bright-line rules” would to some degree enhance certainty, Pager notes that “bright lines quickly become blurred” in a “world of fast-changing technologies and business practices.” He proposes instead that a “more realistic fallback goal would be to couple open-ended standards with clear safe harbor provisions or explicit examples.” Under this system, “standards would have room to evolve” while “their core meaning would be anchored as a starting point.”

Regarding procedural reforms, Pager suggests a “small claims dispute resolution” mechanism to drastically reduce costs for creative upstarts by providing them with a quick way to pursue infringement claims. Right now, copyright claims are exclusively within the jurisdiction of the federal district courts, an impractical and expensive route for independent artists. The Copyright Office has put forth a proposal for such a mechanism, but Pager argues that there is a “fatal flaw” since the process “would only be available on a voluntary basis.” By allowing “better-resourced adversaries” to opt out, the Office’s proposal leaves creative upstarts vulnerable.

Pager proposes that the Section 512 notice-and-takedown procedures could be improved to better support creative upstarts. Currently, creators are burdened by both the number of takedown notices required and the lack of access to the “trusted sender” facilities available to major participants. As Pager notes, the House Judiciary Committee addressed these issues as recently as March of 2014, but questions remain concerning who will bear the costs and how the transition will be implemented.

Turning to the registration system, Pager suggests three reforms that would benefit creative upstarts. First, having a single registry for authors to register their works, rather than a multitude of public and private registries, would reduce administrative burdens. Second, a tiered-fee system that charges larger content creators more would subsidize registration for smaller upstarts and fund better-maintained records. Third, removing the timely registration requirement for enhanced damages, coupled with small claims dispute resolution reform, would provide cost-effective enforcement mechanisms.

Finally, Pager explains how technology can play a pivotal role in helping creative upstarts. One example is updating the Copyright Office website to provide more basic information about the copyright system. This information is currently scattered all over the Internet, and it could be organized to make it more user-friendly and less “lawyerly.” Another example is implementing software similar to TurboTax that actively assists authors when registering their copyrights. There would first have to be substantive changes in the law to allow for such software, but Pager believes that this technology would be incredibly helpful to those navigating the registration system.

Making Copyright Work for Creative Upstarts is a fascinating look into the world of creative upstarts. With their interests and the interests of the larger copyright ecosystem in mind, Pager skillfully traverses our complicated copyright regime and identifies ample opportunities to improve copyright protections for creative upstarts. The twenty-first century is a digital age, and creators and innovators can produce creative works right on their laptops. Pager’s hope is that the Copyright Act will be updated to address the realities of this modern world for creative upstarts.