
Separating Fact from Fiction in the Notice and Takedown Debate

By Kevin Madigan & Devlin Hartline

With the Copyright Office undertaking a new study to evaluate the impact and effectiveness of the Section 512 safe harbor provisions, there’s been much discussion about how well the DMCA’s notice and takedown system is working for copyright owners, service providers, and users. While hearing from a variety of viewpoints can help foster a healthy discussion, it’s important to separate rigorous research efforts from overblown reports that offer incomplete data in support of dubious policy recommendations.

Falling into the latter category is Notice and Takedown in Everyday Practice, a recently released study claiming to take an in-depth look at how well the notice and takedown system operates after nearly twenty years in practice. The study has garnered numerous headlines that repeat its conclusion that nearly 30% of all takedown requests are “questionable” and that echo its suggestions for statutory reforms that invariably disfavor copyright owners. But what the headlines don’t mention is that the study presents only a narrow and misleading assessment of the notice and takedown process that overstates its findings and fails to adequately support its broad policy recommendations.

Presumably released to coincide with the deadline for submitting comments to the Copyright Office on the state of Section 512, the authors claim to have produced “the broadest empirical analysis of the DMCA notice and takedown” system to date. They make bold pronouncements about how “the notice and takedown system . . . meets the goals it was intended to address” and “continues to provide an efficient method of enforcement in many circumstances.” But the goals identified by the authors are heavily skewed towards service providers and users at the expense of copyright owners, and the authors include no empirical analysis of whether the notice and takedown system is actually effective at combating widespread piracy.

The study reads more like propaganda than robust empiricism. It should be taken for what it is: A policy piece masquerading as an independent study. The authors’ narrow focus on one sliver of the notice and takedown process, with no analysis of the systemic results, leads to conclusions and recommendations that completely ignore the central issue of whether Section 512 fosters an online environment that adequately protects the rights of copyright owners. The authors conveniently ignore this part of the DMCA calculus and instead put forth a series of proposals that would systematically make it harder for copyright owners to protect their property rights.

To its credit, the study acknowledges many of its own limitations. For example, the authors recognize that the “dominance of Google notices in our dataset limits our ability to draw broader conclusions about the notice ecosystem.” Indeed, over 99.992% of the individual requests in the dataset for the takedown study were directed at Google, with 99.8% of that dataset directed at Google Search in particular. Of course, search engines do not include user-generated content—the links Google provides are links that Google itself collects and publishes. There are no third parties to alert about the takedowns since Google is taking down its own content. Likewise, removing links from Google Search does not actually remove the linked-to content from the internet.

The authors correctly admit that “the characteristics of these notices cannot be extrapolated to the entire world of notice sending.” A more thorough quantitative study would include data on sites that host user-generated content, like YouTube and Facebook. As it stands, the study gives us some interesting data on one search engine, but even that data is limited to a sample size of 1,826 requests out of 108 million over a six-month period in mid-2013. And it’s not even clear how these samples were randomized since the authors admittedly created “tranches” to ensure the notices collected were “of great substantive interest,” but they provide no details about how they created these tranches.

Despite explicitly acknowledging that the study’s data is not generalizable, the authors nonetheless rely on it to make numerous policy suggestions that would affect the entire notice and takedown system and that would stack the deck further in favor of infringement and against copyright owners. They even identify some of their suggestions as explicitly reflecting “Public Knowledge’s suggestion,” which is a far cry from a reasoned academic approach. The authors do note that “any changes should take into account the interests of . . . small- and medium-sized copyright holders,” but this is mere lip service. Their proposals would hurt copyright owners of all shapes and sizes.

The authors justify their policy proposals by pointing to the “mistaken and abusive takedown demands” that they allegedly uncover in the study. These so-called “questionable” notices are the supposed proof that the entire notice and takedown system needs fixing. A closer look at these “questionable” notices shows that they’re not nearly so questionable. The authors claim that 4.2% of the notices surveyed (about 77 notices) are “fundamentally flawed because they targeted content that clearly did not match the identified infringed work.” This figure includes obvious mismatches, where the titles aren’t even the same. But it also includes ambiguous notices, such as where the underlying work does not match the title or where the underlying page changes over time.

The bulk of the so-called “questionable” notices comes from those notices that raise “questions about compliance with the statutory requirements” (15.4%, about 281 notices) or raise “potential fair use defenses” (7.3%, about 133 notices). As to the statutory requirements issue, the authors argue that these notices make it difficult for Google to locate the material to take down. This claim is severely undercut by the fact that, as they acknowledge in a footnote, Google complies with 97.5% of takedown notices overall. Moreover, it wades into the murky waters of whether copyright owners can send service providers a “representative list” of infringing works. Turning to the complaint about potential fair uses, the authors argue that copyright owners are not adequately considering “mashups, remixes, or covers.” But none of these uses are inherently fair, and there’s no reason to think that the notices were sent in bad faith just because someone might be able to make a fair use argument.

The authors claim that their “recommendations for statutory reforms are relatively modest,” but that supposed modesty is absent from their broad list of suggestions. Of course, everything they suggest increases the burdens and liabilities of copyright owners while lowering the burdens and liabilities of users, service providers, and infringers. Having overplayed the data on “questionable” notices, the authors reveal their true biases. And it’s important to keep in mind that they make these broad suggestions that would affect everyone in the notice and takedown system after explicitly acknowledging that their data “cannot be extrapolated to the entire world of notice sending.” Indeed, the study contains no empirical data on sites that host user-generated content, so there’s nothing whatsoever to support any changes for such sites.

The study concludes that the increased use of automated systems to identify infringing works online has resulted in the need for better mechanisms to verify the accuracy of takedown requests, including human review. But the data is limited to small surveys with secret questions and a tiny fraction of notices sent to one search engine. The authors offer no analysis of the potential costs of implementing their recommendations, nor do they consider how it might affect the ability of copyright owners to police piracy. Furthermore, data presented later in the study suggests that increased human review might have little effect on the accuracy of takedown notices. Not only do the authors fail to address the larger problem of whether the DMCA adequately addresses online piracy, their suggestions aren’t even likely to address the narrower problem of inaccurate notices that they want to fix.

Worse still, the study almost completely discards the ability of users to contest mistaken or abusive notices by filing counternotices. This is the solution that’s already built into the DMCA, yet the authors inexplicably dismiss it as ineffective and unused. Apart from providing limited answers from a few unidentified survey respondents, the authors offer no data on the frequency or effectiveness of counternotices. The study repeatedly criticizes the counternotice system as failing to offer “due process protection” to users, but that belief is grounded in the notion that a user who fails to send a counternotice has somehow been denied the chance. Moreover, it implies a constitutional right that is not at issue when two parties interact in the absence of government action. The same holds true for the authors’ repeated—and mistaken—invocation of “freedom of expression.”

More fundamentally, the study ignores the fact that the counternotice system is stacked against copyright owners. A user can simply file a counternotice and have the content in question reposted, and most service providers are willing to repost the content following a counternotice because they’re no longer on the hook should the content turn out to be infringing. The copyright owner, by contrast, then faces the choice of allowing the infringement to continue or filing an expensive lawsuit in federal court. The study makes it sound like users are rendered helpless because counternotices are too onerous, but the reality is that the system leaves copyright owners practically powerless to combat bad faith counternotices.

Pretty much everyone agrees that the notice and takedown system needs a tune-up. The amount of infringing content available online today is immense. This rampant piracy has resulted in an incredible number of takedown notices being sent to service providers by copyright owners each day. Undoubtedly, the notice and takedown system should be updated to address these realities. And to the extent that some are abusing the system, they should be held accountable. But in considering changes to the entire system, we should not be persuaded by biased studies based on limited (and secret) datasets that provide little to no support for their ultimate conclusions and recommendations. While it may make for evocative headlines, it doesn’t make for good policy.


Acknowledging the Limitations of the FTC’s PAE Study

The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b) to gather information from a handful of firms, the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”

However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.

Limitations of the FTC Study

Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”

One significant constraint with the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”

Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”

Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”

Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affect a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.

FTC Commissioners Put Cart Before Horse

The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.

In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”

This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”

Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.

It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.

Commentators Jump the Gun

Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.

For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”

Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”

The joint comments of Public Knowledge, Electronic Frontier Foundation, & Engine Advocacy emphasized the fact that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but that there’s even “reason to believe the scope is even larger than what has already been reported.”

Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.

Conclusion

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It instead is a fact-finding mission, the results of which could guide future missions. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected. And it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.


A New Librarian of Congress and a New Copyright Office

By Sandra Aistars

With the Senate considering the confirmation of Dr. Carla Hayden as the next Librarian of Congress, I have joined thirteen other intellectual property law professors in an Open Letter suggesting that her confirmation should serve as an important reminder that the U.S. Copyright Office, a department within the Library of Congress, needs to be updated so that it can best serve the needs of the American people in the digital age.

As President Obama stated in his announcement of the nomination, “Dr. Hayden has devoted her career to modernizing libraries so that everyone can participate in today’s digital culture.” Indeed, I share Dr. Hayden’s enthusiasm for modernizing our libraries so that they can better promote and preserve our nation’s collective learning and culture for all to enjoy. This is especially important when it comes to the Library of Congress, the oldest and largest cultural institution in the United States.

However, it’s also important to provide the Copyright Office with the resources and authority it needs to perform its role as the nation’s leading expert on copyright law and a major facilitator of the marketplace for creative and scholarly works. Members of Congress and numerous stakeholders have been calling for modernization of the Copyright Office for years. They recognize that the Office faces significant structural challenges that limit its effectiveness in serving the American public. As the Office notes in its recent strategic plan, “there is no question that it must now modernize to meet current and future needs.”

Even though the Copyright Office and the Library of Congress have significantly different business needs, the Office is stuck using the Library’s information technology (IT) systems, which are not only antiquated, but were designed to serve a different purpose. A recent report by the Government Accountability Office finds that “serious weaknesses” in the Library’s IT systems are thwarting the Office’s ability to fulfill its mission in registering, recording, and depositing copyrighted works. The Office needs funding to improve these essential IT systems and to tailor them to its specific needs.

The legal structure of the Copyright Office should also be updated to best position it for success. There is currently no agency with independent, comprehensive rulemaking authority when it comes to copyright law. The Register of Copyrights can only establish regulations for administering the duties and functions of the Copyright Office, subject to approval by the Librarian of Congress. This inefficient structure burdens the Librarian with responsibilities that more appropriately should fall on the Register—the fully-vetted, leading expert in the Copyright Act.

Moreover, the Register of Copyrights is not appointed as a principal officer of the United States by the President. This means that should the Office take on additional duties like administering a small copyright claims court, the Register could not appoint and oversee the administrative law judges that would run this process. Likewise, the Register currently has no power over the three judges of the Copyright Royalty Board that determine the rates and terms of statutory copyright licenses. Congress has consistently assigned the Office more duties and relied upon its vast proficiency in copyright law, but the authority of the Register has not been updated accordingly.

Although I and the other signatories of the Open Letter do not all agree on every issue in copyright law, including where the Copyright Office should be positioned within our government, we do all agree that the Register of Copyrights should be a principal officer of the government. Only when the Register is appointed by the President and confirmed by the Senate will she have the authority and accountability appropriate to administer our nation’s copyright system.

The confirmation of Dr. Hayden as the new Librarian of Congress promises us the much-needed modernization of the Library of Congress, and it also provides us with the opportunity to reflect upon the needs of the Copyright Office. Both the Librarian and the Register play important roles in promoting and preserving our nation’s culture, but those roles are fundamentally different in nature. These two talented leaders should work closely with each other as partners, each possessing the authority and responsibility suitable for their differing areas of expertise.

The time to modernize the Copyright Office is now. Congress can ensure the continuing success of our copyright system by providing the Office with the resources and authority it needs to better serve us all in the digital age.

To read the full Open Letter, please click here.


Middle Class Artists Want a DMCA System That Works

The following guest post comes from Rebecca Cusey, a second year law student at George Mason University School of Law.

By Rebecca Cusey

Mason Law’s Arts & Entertainment Advocacy Clinic filed comments today with the U.S. Copyright Office detailing the frustrations and futilities experienced by everyday artists as they struggle with the DMCA system to protect their copyrights online.

Terrica Carrington and I wrote the comments on behalf of middle class artists, that group of creative professionals who invest in and live off their art yet lack the kind of revenue stream or corporate backing enjoyed by more well-known artists. These photographers, filmmakers, musicians, and other artists are squeezed between infringement that directly affects their ability to pay for things like a mortgage or orthodontics bill and the exorbitant cost of using the notice and takedown system to fight infringement.

Terrica and I spoke with four artists: filmmaker Ellen Seidler, news photographer Yunghi Kim, musician Blake Morgan, and audiovisual creator David Newhoff. These artists make works of value and have followings, and thus infringement. They make a profession of their art.

A middle class artist must do it all on her own – find infringement by hours of searching the web, compile lists of infringing posts on each site, navigate each site’s confusing DMCA notification system, and send takedown notification after takedown notification. And that’s all just sending the notifications. Monitoring to see if the infringing content has been removed or if it has simply been uploaded in another spot is a whole other job in itself.

The artists with whom we talked said it was not unusual in the least for a song, photograph, or film to be posted illegally in a thousand places, even tens of thousands of places. Finding infringement and sending notices took hundreds, even thousands, of hours—hours they could have spent taking photographs, making movies, or writing songs.

After all the time spent fighting infringement online, they felt the task was futile because the content simply reappeared, sometimes in a different place on the same site, other times because of counternotices filed with the ISP hosting the content claiming to have the right to post it.

These artists felt the notice and takedown system mandated by Section 512 of the Copyright Act was both all-consuming and futile, all-consuming because it ate hours upon hours and futile because it yielded little to no results. Ultimately, all of them decided to stop spending time trying to enforce their copyrights under the procedures of Section 512. It simply was not worth it.

Our comments were filed in response to a request by the U.S. Copyright Office for comments on the effectiveness of Section 512 in fighting infringement online. The Copyright Office wanted to know in particular if the provisions of Section 512 balanced the needs of ISPs to host content with the needs of copyright owners to control their work.

Middle class artists feel the balance is off and the scale tipped in favor of ISPs. These artists do not object to bearing some responsibility for protecting their copyrights online. They simply want a system that works.

To read our Section 512 comments, please click here.


Copyright Scholars: Courts Have Disrupted the DMCA’s Careful Balance of Interests

The U.S. Copyright Office is conducting a study of the safe harbors under Section 512 of the DMCA, and comments are due today. Working with Victor Morales and Danielle Ely from Mason Law’s Arts & Entertainment Advocacy Clinic, we drafted and submitted comments on behalf of several copyright law scholars. In our Section 512 comments, we look at one narrow issue that we believe is the primary reason the DMCA is not working as it should: the courts’ failure to properly apply the red flag knowledge standard. We argue that judicial interpretations of red flag knowledge have disrupted the careful balance of responsibilities Congress intended between copyright owners and service providers. Instead of requiring service providers to take action in the face of red flags, courts have allowed them to turn a blind eye and bury their heads in the sand.

Whether Section 512’s safe harbors are working as intended is a hotly contested issue. On the one hand, hundreds of artists and songwriters are calling for changes “to the antiquated DMCA which forces creators to police the entire internet for instances of theft, placing an undue burden on these artists and unfairly favoring technology companies and rogue pirate sites.” On the other hand, groups like the Internet Association, which includes tech giants such as Google and Facebook, claim that the safe harbors are “working effectively” since they “strike a balance between facilitating free speech and creativity while protecting the interests of copyright holders.” The Internet Association even claims that “the increasing number of notice and takedown requests” shows that the DMCA is working.

Of course, it’s utter nonsense to suggest that the more takedown notices are sent and processed, the better we know the DMCA is working. The point of the safe harbors, according to the Senate Report on the DMCA, is “to make digital networks safe places to disseminate and exploit copyrighted materials.” The proper metric of success is not the number of takedown notices sent; it’s whether the internet is a safe place for copyright owners to disseminate and exploit their works. The continuing availability of huge amounts of pirated works should tip us off that the safe harbors are not working as intended. If anything, the increasing need for takedown notices suggests that things are getting worse for copyright owners, not better. If the internet were becoming a safer place, the number of takedown notices should be decreasing. It’s not surprising that service providers enjoy the status quo, given that the burden of tracking down and identifying infringement doesn’t fall on them, but this is not the balance that Congress intended to strike.

Our comments to the Copyright Office run through the relevant legislative history to show what Congress really had in mind—and it wasn’t copyright owners doing all of the work in locating and identifying infringement online. Instead, as noted in the Senate Report, Congress sought to “preserve[] strong incentives for service providers and copyright owners to cooperate to detect and deal with copyright infringements that take place in the digital networked environment.” The red flag knowledge standard was a key leverage point to encourage service providers to participate in the effort to detect and eliminate infringement. Unfortunately, courts thus far have interpreted the standard so narrowly that, beyond acting on takedown notices, service providers have little incentive to work together with copyright owners to prevent piracy. Even in cases with the most crimson of flags, courts have failed to strip service providers of their safe harbor protection. Perversely, the current case law incentivizes service providers to actively avoid doing anything when they see red flags, lest they gain actual knowledge of infringement and jeopardize their safe harbors. This is exactly the opposite of what Congress intended.

The Second and Ninth Circuits have interpreted the red flag knowledge standard to require knowledge of specific infringing material before service providers can lose their safe harbors. While tech giants might think this is great, it’s terrible for authors and artists who need service providers to carry their share of the load in combating online piracy. Creators are left in a miserable position where they bear the entire burden of policing infringement across an immense range of services, effectively making it impossible to prevent the deluge of piracy of their works. The Second and Ninth Circuits believe red flag knowledge should require specificity because otherwise service providers wouldn’t know exactly what material to remove when faced with a red flag. We argue that Congress intended service providers with red flag knowledge of infringing activity in general to then bear the burden of locating and removing the specific infringing material. This is the balance of responsibilities that Congress had in mind when it crafted the red flag knowledge standard and differentiated it from the actual knowledge standard.

But all hope is not lost. The Second and Ninth Circuits are but two appellate courts, and there are many others that have yet to rule on the red flag knowledge issue. Moreover, the Supreme Court has never interpreted the safe harbors of the DMCA. We hope that our comments will help expose the underlying problem that hurts so many creators today who are stuck playing the DMCA’s whack-a-mole game when their very livelihoods are at stake. Congress intended the DMCA to be the cornerstone of a shared-responsibility approach to fighting online piracy. Unfortunately, it has become a shield that allows service providers to enable piracy on a massive scale without making any efforts to prevent it beyond acting on takedown notices. The fact that search engines can still index The Pirate Bay—an emblematic piracy site that even has the word “pirate” in its name—without concern of losing their safe harbor protection is a testament to how the courts have turned Congress’ intent on its head. We hope that the Copyright Office’s study will shed light on this important issue.

To read our Section 512 comments, please click here.