Google Image Search and the Misappropriation of Copyrighted Images

Cross-posted from the Mister Copyright blog.

Last week, American visual communications and stock photography agency Getty Images filed a formal complaint in support of the European Union’s investigation into Google’s anti-competitive business practices. The Getty complaint accuses Google of using its image search function to appropriate or “scrape” third-party copyrighted works, thereby drawing users away from the original source of the creative works and preserving its search engine dominance.

Specifically, Getty’s complaint focuses on changes made to Google’s image search functionality in 2013 that led to the appealing image galleries we’re familiar with today. Before the change, users were presented with low-resolution thumbnail versions of images and would be rerouted to the original source website to view a larger, more defined version and to find out how they might legally license or get permission to use the work. But with the current Google Image presentation, users are instantly delivered a large, desirable image and have no need to access the legitimate source. As Getty says in its complaint, “[b]ecause image consumption is immediate, once an image is displayed in high-resolution, large format, there is little impetus to view the image on the original source site.”

According to a study by Define Media Group, in the first year after the changes to Google Image search, image search referrals to original source websites were reduced by up to 80%. The report also provides before and after screenshots of a Google Image search and points out that before 2013, when a thumbnail was clicked, the source site appeared in the background. Not only does the source site not appear in the new version, but an extra click is required to get to the site, adding to the overall disconnect with the original content. Despite Google’s claims to the contrary, the authors of the study conclude that the new image search service is designed to keep users on the Google website.

It’s difficult not to consider Google’s image UI [user interface] change a shameless content grab – one which blatantly hijacks material that has been legitimately licensed by publishers so that Google Image users remain on their site, and are de-incentivized from visiting others.

While Getty’s complaint against Google is based on anticompetitive concerns, it involves the underlying contention that Google Image search enables misappropriation of copyrighted images on a massive scale. Anyone who has run a Google Image search knows that with the click of a mouse, a user is presented with hundreds of images related to their query, and with another simple right click, that user can then copy and paste these images as they please. But Google Image search often returns an abundance of copyright-protected images, enabling anyone to copy, display and disseminate images without considering the underlying copyright and existing licenses. And while using the service may be free, make no mistake that Google is monetizing it through advertisements and the mining of users’ personal data.

When users are able to access and copy these full-screen, high resolution images from Google Image search, not only do third-party image providers lose traffic to their website, but the photographers and creators behind the images lose potential income, attribution and exposure that would come with users accessing the original source. As General Counsel Yoko Miyashita explains, “Getty Images represents over 200,000 photojournalists, content creators and artists around the world who rely on us to protect their ability to be compensated for their work.” When Google Image search obviates the need for a user to access the original creative content, these artists and creators are being denied a fair marketplace for their images, and their ability and motivation to create future works is jeopardized.

Shortly after Google changed to the new image search, individual photo publishers and image creators took to a Google Forum to voice their concerns over the effects the service was having on their images and personal web pages. A recurring complaint was that the service made it more difficult to find information about images and that users now had to go through more steps to reach the original source website. One commenter, identifying herself as a “small time photo publisher,” described Google’s new practice of hotlinking to high-resolution images as a “skim engine” rather than a “search engine.” She lamented that not only was Google giving people access to her content without visiting her site, but her bandwidth usage (i.e. expense) went up due to the hotlinking of her high-resolution images.

Google Image supporters argue that creators and image providers should simply use hotlink protection to block Google from displaying their content, but Google’s search engine dominance is so absolute that this would further curtail traffic to the original source of the content. Others suggest image providers stamp their images with watermarks to protect against infringement, but Getty VP Jonathan Lockwood explains that doing so would result in punishment from Google.

They penalise people who try to protect their content. There is then a ‘mismatch penalty’ for the site: you have to show the same one to Google Images that you own. If you don’t, you disappear.

The internet has made sharing creative works and gaining exposure as an artist easier than anyone could have imagined before the digital age, but it has also brought challenges in the form of protecting and controlling creative content. These challenges are particularly burdensome for image creators and providers, whose creative works are subject to unauthorized use the moment they are put online. Over the last few years, Google Image search has contributed to this problem by transforming from a service that provided direction to creative works to a complete substitute for original, licensed content.

With fewer opportunities for image providers and creators to realize a return–whether it be in the form of payment, attribution, or exposure–from their works, creativity and investment in creators will be stifled. Artists and rightsholders deserve fair compensation and credit for their works, and technology needs to work with image providers rather than against them to ensure that great content continues to be created.

Separating Fact from Fiction in the Notice and Takedown Debate

By Kevin Madigan & Devlin Hartline

With the Copyright Office undertaking a new study to evaluate the impact and effectiveness of the Section 512 safe harbor provisions, there’s been much discussion about how well the DMCA’s notice and takedown system is working for copyright owners, service providers, and users. While hearing from a variety of viewpoints can help foster a healthy discussion, it’s important to separate rigorous research efforts from overblown reports that offer incomplete data in support of dubious policy recommendations.

Falling into the latter category is Notice and Takedown in Everyday Practice, a recently-released study claiming to take an in-depth look at how well the notice and takedown system operates after nearly twenty years in practice. The study has garnered numerous headlines that repeat its conclusion that nearly 30% of all takedown requests are “questionable” and that echo its suggestions for statutory reforms that invariably disfavor copyright owners. But what the headlines don’t mention is that the study presents only a narrow and misleading assessment of the notice and takedown process that overstates its findings and fails to adequately support its broad policy recommendations.

Presumably released to coincide with the deadline for submitting comments to the Copyright Office on the state of Section 512, the authors claim to have produced “the broadest empirical analysis of the DMCA notice and takedown” system to date. They make bold pronouncements about how “the notice and takedown system . . . meets the goals it was intended to address” and “continues to provide an efficient method of enforcement in many circumstances.” But the goals identified by the authors are heavily skewed towards service providers and users at the expense of copyright owners, and the authors include no empirical analysis of whether the notice and takedown system is actually effective at combating widespread piracy.

The study reads more like propaganda than robust empiricism. It should be taken for what it is: A policy piece masquerading as an independent study. The authors’ narrow focus on one sliver of the notice and takedown process, with no analysis of the systemic results, leads to conclusions and recommendations that completely ignore the central issue of whether Section 512 fosters an online environment that adequately protects the rights of copyright owners. The authors conveniently ignore this part of the DMCA calculus and instead put forth a series of proposals that would systematically make it harder for copyright owners to protect their property rights.

To its credit, the study acknowledges many of its own limitations. For example, the authors recognize that the “dominance of Google notices in our dataset limits our ability to draw broader conclusions about the notice ecosystem.” Indeed, over 99.992% of the individual requests in the dataset for the takedown study were directed at Google, with 99.8% of that dataset directed at Google Search in particular. Of course, search engines do not include user-generated content—the links Google provides are links that Google itself collects and publishes. There are no third parties to alert about the takedowns since Google is taking down its own content. Likewise, removing links from Google Search does not actually remove the linked-to content from the internet.

The authors correctly admit that “the characteristics of these notices cannot be extrapolated to the entire world of notice sending.” A more thorough quantitative study would include data on sites that host user-generated content, like YouTube and Facebook. As it stands, the study gives us some interesting data on one search engine, but even that data is limited to a sample size of 1,826 requests out of 108 million over a six-month period in mid-2013. And it’s not even clear how these samples were randomized since the authors admittedly created “tranches” to ensure the notices collected were “of great substantive interest,” but they provide no details about how they created these tranches.

Despite explicitly acknowledging that the study’s data is not generalizable, the authors nonetheless rely on it to make numerous policy suggestions that would affect the entire notice and takedown system and that would stack the deck further in favor of infringement and against copyright owners. They even identify some of their suggestions as explicitly reflecting “Public Knowledge’s suggestion,” which is a far cry from a reasoned academic approach. The authors do note that “any changes should take into account the interests of . . . small- and medium-sized copyright holders,” but this is mere lip service. Their proposals would hurt copyright owners of all shapes and sizes.

The authors justify their policy proposals by pointing to the “mistaken and abusive takedown demands” that they allegedly uncover in the study. These so-called “questionable” notices are the supposed proof that the entire notice and takedown system needs fixing. A closer look at these “questionable” notices shows that they’re not nearly so questionable. The authors claim that 4.2% of the notices surveyed (about 77 notices) are “fundamentally flawed because they targeted content that clearly did not match the identified infringed work.” This figure includes obvious mismatches, where the titles aren’t even the same. But it also includes ambiguous notices, such as where the underlying work does not match the title or where the underlying page changes over time.

The bulk of the so-called “questionable” notices comes from those notices that raise “questions about compliance with the statutory requirements” (15.4%, about 281 notices) or raise “potential fair use defenses” (7.3%, about 133 notices). As to the statutory requirements issue, the authors argue that these notices make it difficult for Google to locate the material to take down. This claim is severely undercut by the fact that, as they acknowledge in a footnote, Google complies with 97.5% of takedown notices overall. Moreover, it wades into the murky waters of whether copyright owners can send service providers a “representative list” of infringing works. Turning to the complaint about potential fair uses, the authors argue that copyright owners are not adequately considering “mashups, remixes, or covers.” But none of these uses are inherently fair, and there’s no reason to think that the notices were sent in bad faith just because someone might be able to make a fair use argument.
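The “about N notices” figures above follow directly from the study’s 1,826-notice sample. As a quick arithmetic check (the percentages are the study’s reported rates; the rounding to whole notices is ours):

```python
# Reproduce the approximate per-category notice counts from the study's
# reported sample size and percentages (rounded to the nearest notice).
SAMPLE_SIZE = 1_826  # notices examined in the takedown study

reported_rates = {
    "clearly mismatched target": 0.042,        # "fundamentally flawed" notices
    "statutory-compliance questions": 0.154,
    "potential fair use defenses": 0.073,
}

for label, rate in reported_rates.items():
    print(f"{label}: ~{round(rate * SAMPLE_SIZE)} notices")
# clearly mismatched target: ~77; compliance questions: ~281; fair use: ~133
```

Note that even taken together, these “questionable” categories account for well under a third of a sample that is itself a tiny fraction of the 108 million requests in the dataset.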

The authors claim that their “recommendations for statutory reforms are relatively modest,” but that supposed modesty is absent from their broad list of suggestions. Of course, everything they suggest increases the burdens and liabilities of copyright owners while lowering the burdens and liabilities of users, service providers, and infringers. Having overplayed the data on “questionable” notices, the authors reveal their true biases. And it’s important to keep in mind that they make these broad suggestions that would affect everyone in the notice and takedown system after explicitly acknowledging that their data “cannot be extrapolated to the entire world of notice sending.” Indeed, the study contains no empirical data on sites that host user-generated content, so there’s nothing whatsoever to support any changes for such sites.

The study concludes that the increased use of automated systems to identify infringing works online has resulted in the need for better mechanisms to verify the accuracy of takedown requests, including human review. But the data is limited to small surveys with secret questions and a tiny fraction of notices sent to one search engine. The authors offer no analysis of the potential costs of implementing their recommendations, nor do they consider how it might affect the ability of copyright owners to police piracy. Furthermore, data presented later in the study suggests that increased human review might have little effect on the accuracy of takedown notices. Not only do the authors fail to address the larger problem of whether the DMCA adequately addresses online piracy, their suggestions aren’t even likely to address the narrower problem of inaccurate notices that they want to fix.

Worse still, the study almost completely discards the ability of users to contest mistaken or abusive notices by filing counternotices. This is the solution that’s already built into the DMCA, yet the authors inexplicably dismiss it as ineffective and unused. Apart from providing limited answers from a few unidentified survey respondents, the authors offer no data on the frequency or effectiveness of counternotices. The study repeatedly criticizes the counternotice system as failing to offer “due process protection” to users, but that criticism rests on the notion that a user who fails to send a counternotice has somehow been denied the opportunity to do so. Moreover, it implies a constitutional right that is not at issue when two parties interact in the absence of government action. The same holds true for the authors’ repeated—and mistaken—invocation of “freedom of expression.”

More fundamentally, the study ignores the fact that the counternotice system is stacked against copyright owners. A user can simply file a counternotice and have the content in question reposted, and most service providers are willing to repost the content following a counternotice because they’re no longer on the hook should the content turn out to be infringing. The copyright owner, by contrast, then faces the choice of allowing the infringement to continue or filing an expensive lawsuit in federal court. The study makes it sound like users are rendered helpless because counternotices are too onerous, but the reality is that the system leaves copyright owners practically powerless to combat bad faith counternotices.

Pretty much everyone agrees that the notice and takedown system needs a tune-up. The amount of infringing content available online today is immense. This rampant piracy has resulted in an incredible number of takedown notices being sent to service providers by copyright owners each day. Undoubtedly, the notice and takedown system should be updated to address these realities. And to the extent that some are abusing the system, they should be held accountable. But in considering changes to the entire system, we should not be persuaded by biased studies based on limited (and secret) datasets that provide little to no support for their ultimate conclusions and recommendations. While it may make for evocative headlines, it doesn’t make for good policy.

Copyright Scholars: Courts Have Disrupted the DMCA’s Careful Balance of Interests

The U.S. Copyright Office is conducting a study of the safe harbors under Section 512 of the DMCA, and comments are due today. Working with Victor Morales and Danielle Ely from Mason Law’s Arts & Entertainment Advocacy Clinic, we drafted and submitted comments on behalf of several copyright law scholars. In our Section 512 comments, we look at one narrow issue that we believe is the primary reason the DMCA is not working as it should: the courts’ failure to properly apply the red flag knowledge standard. We argue that judicial interpretations of red flag knowledge have disrupted the careful balance of responsibilities Congress intended between copyright owners and service providers. Instead of requiring service providers to take action in the face of red flags, courts have allowed them to turn a blind eye and bury their heads in the sand.

Whether Section 512’s safe harbors are working as intended is a hotly contested issue. On the one hand, hundreds of artists and songwriters are calling for changes “to the antiquated DMCA which forces creators to police the entire internet for instances of theft, placing an undue burden on these artists and unfairly favoring technology companies and rogue pirate sites.” On the other hand, groups like the Internet Association, which includes tech giants such as Google and Facebook, claim that the safe harbors are “working effectively” since they “strike a balance between facilitating free speech and creativity while protecting the interests of copyright holders.” The Internet Association even claims that “the increasing number of notice and takedown requests” shows that the DMCA is working.

Of course, it’s utter nonsense to suggest that the more takedown notices sent and processed, the more we know the DMCA is working. The point of the safe harbors, according to the Senate Report on the DMCA, is “to make digital networks safe places to disseminate and exploit copyrighted materials.” The proper metric of success is not the number of takedown notices sent; it’s whether the internet is a safe place for copyright owners to disseminate and exploit their works. The continuing availability of huge amounts of pirated works should tip us off that the safe harbors are not working as intended. If anything, the increasing need for takedown notices suggests that things are getting worse for copyright owners, not better. If the internet were becoming a safer place, the number of takedown notices would be decreasing. It’s not surprising that service providers enjoy the status quo, given that the burden of tracking down and identifying infringement doesn’t fall on them, but this is not the balance that Congress intended to strike.

Our comments to the Copyright Office run through the relevant legislative history to show what Congress really had in mind—and it wasn’t copyright owners doing all of the work in locating and identifying infringement online. Instead, as noted in the Senate Report, Congress sought to “preserve[] strong incentives for service providers and copyright owners to cooperate to detect and deal with copyright infringements that take place in the digital networked environment.” The red flag knowledge standard was a key leverage point to encourage service providers to participate in the effort to detect and eliminate infringement. Unfortunately, courts thus far have interpreted the standard so narrowly that, beyond acting on takedown notices, service providers have little incentive to work together with copyright owners to prevent piracy. Even in cases with the most crimson of flags, courts have failed to strip service providers of their safe harbor protection. Perversely, the current case law incentivizes service providers to actively avoid doing anything when they see red flags, lest they gain actual knowledge of infringement and jeopardize their safe harbors. This is exactly the opposite of what Congress intended.

The Second and Ninth Circuits have interpreted the red flag knowledge standard to require knowledge of specific infringing material before service providers can lose their safe harbors. While tech giants might think this is great, it’s terrible for authors and artists who need service providers to carry their share of the load in combating online piracy. Creators are left in a miserable position where they bear the entire burden of policing infringement across an immense range of services, effectively making it impossible to prevent the deluge of piracy of their works. The Second and Ninth Circuits believe red flag knowledge should require specificity because otherwise service providers wouldn’t know exactly what material to remove when faced with a red flag. We argue that Congress intended service providers with red flag knowledge of infringing activity in general to then bear the burden of locating and removing the specific infringing material. This is the balance of responsibilities that Congress had in mind when it crafted the red flag knowledge standard and differentiated it from the actual knowledge standard.

But all hope is not lost. The Second and Ninth Circuits are but two appellate courts, and there are many others that have yet to rule on the red flag knowledge issue. Moreover, the Supreme Court has never interpreted the safe harbors of the DMCA. We hope that our comments will help expose the underlying problem that hurts so many creators today who are stuck playing the DMCA’s whack-a-mole game when their very livelihoods are at stake. Congress intended the DMCA to be the cornerstone of a shared-responsibility approach to fighting online piracy. Unfortunately, it has become a shield that allows service providers to enable piracy on a massive scale without making any efforts to prevent it beyond acting on takedown notices. The fact that search engines can still index The Pirate Bay—an emblematic piracy site that even has the word “pirate” in its name—without concern of losing their safe harbor protection is a testament to how the courts have turned Congress’ intent on its head. We hope that the Copyright Office’s study will shed light on this important issue.

To read our Section 512 comments, please click here.

Changes to Patent Venue Rules Risk Collateral Damage to Innovators

Advocates for changing the patent venue rules, which dictate where patent owners can sue alleged infringers, have been arguing that their remedy will cure the supposed disease of abusive “trolls” filing suit after suit in the Eastern District of Texas. This is certainly true, but it’s only true in the sense that cyanide cures the common cold. What these advocates don’t mention is that their proposed changes will weaken patent rights across the board by severely limiting where all patent owners—even honest patentees that no one thinks are “trolls”—can sue for infringement. Instead of acknowledging the broad collateral damage their changes would cause to all patent owners, venue revision advocates invoke the talismanic “troll” narrative and hope that nobody will look closely at the details. The problem with their take on venue revision is that it’s neither fair nor balanced, and it continues the disheartening trend of equating “reform” with taking more sticks out of every patent owner’s bundle of rights.

Those pushing for venue revision are working on two fronts, one judicial and the other legislative. On the judicial side, advocates have injected themselves into the TC Heartland case currently before the Federal Circuit. Though the case has no direct connection to the Eastern District of Texas, advocates see it as a chance to shut plaintiffs out of that venue. Their argument in that case is so broad that it would drastically restrict where all patentees can sue for infringement—even making it impossible to sue infringing foreign defendants. Yet they don’t mention this collateral damage as they sell the “troll” narrative. On the legislative side, advocates have gotten behind the VENUE Act (S.2733), introduced in the Senate last Thursday. This bill leaves open a few more venues than TC Heartland, though it still significantly limits where all patent owners can sue. Advocates here also repeat the “troll” mantra instead of offering a single reason why it’s fair to change the rules for everyone else.

With both TC Heartland and the VENUE Act, venue revision advocates want to change the meaning of one word: “resides.” The specific patent venue statute, found in Section 1400(b) of Title 28, provides that patent infringement suits may be brought either (1) “in the judicial district where the defendant resides” or (2) “where the defendant has committed acts of infringement and has a regular and established place of business.” On its face, this seems fairly limited, but the key is the definition of the word “resides.” The general venue statute, found in Section 1391(c)(2) of Title 28, defines residency broadly: Any juridical entity, such as a corporation, “shall be deemed to reside, if a defendant, in any judicial district in which such defendant is subject to the court’s personal jurisdiction with respect to the civil action in question.” Taken together, these venue statutes mean that patent owners can sue juridical entities for infringement anywhere the court has personal jurisdiction over the defendant.

The plaintiff in TC Heartland is Kraft Foods, a large manufacturer incorporated in Delaware and headquartered in Illinois that runs facilities and sells products in Delaware. The defendant is TC Heartland, a large manufacturer incorporated and headquartered in Indiana. TC Heartland manufactured the allegedly-infringing products in Indiana and then knowingly shipped a large number of them directly into Delaware. Kraft Foods sued TC Heartland in Delaware on the theory that these shipments established personal jurisdiction—and thus venue—in that district. TC Heartland argued that venue was improper in Delaware, but the district court rejected that argument (see here and here). TC Heartland has now petitioned the Federal Circuit for a writ of mandamus, arguing that the broad definition of “reside” in Section 1391(c)(2) does not apply to the word “resides” in Section 1400(b). On this reading, venue would not lie in Delaware simply because TC Heartland did business there.

TC Heartland mentions in passing that its narrow read of Section 1400(b) is favorable as a policy matter because it would prevent venue shopping “abuses,” such as those allegedly occurring in the Eastern District of Texas. Noticeably, TC Heartland doesn’t suggest any policy reasons why Kraft Foods should not be permitted to bring an infringement suit in Delaware, and neither do any of the amici supporting TC Heartland. The amicus brief by the Electronic Frontier Foundation (EFF) et al. argues that Congress could not have intended “to permit venue in just about any court of the patent owner’s choosing.” But why is this hard to believe? The rule generally for all juridical entities is that they can be sued in any district where they chose to do business over matters relating to that business. This rule has long been regarded as perfectly fair and reasonable since these entities get both the benefits and the burdens of the law wherever they do business.

The EFF brief goes on for pages bemoaning the perceived ills of forum shopping in the Eastern District of Texas without once explaining the relevancy to Kraft Foods. It asks the Federal Circuit to “restore balance in patent litigation,” but its vision of “balance” fails to account for the myriad honest patent owners like Kraft Foods that nobody considers to be “trolls.” The same holds true for the amicus brief filed by Google et al. that discusses the “harm forum shopping causes” without elucidating how it has anything to do with Kraft Foods. Worse still, the position being urged by these amici would leave no place for patent owners to sue foreign defendants. If the residency definitions in Section 1391(c) don’t apply to Section 1400(b), as they argue, then a foreign defendant that doesn’t reside or have a regular place of business in the United States can never be sued for patent infringement—an absurd result. But rather than acknowledge this collateral damage, the amici simply sweep it under the rug.

The simple fact is that there’s nothing untoward about Kraft Foods filing suit in Delaware. That’s where TC Heartland purposefully directed its conduct when it knowingly shipped the allegedly-infringing products there. It’s quite telling that venue revision advocates are using TC Heartland as a platform for changing the rules generally when they can’t even explain why the rules should be changed in that very case. And this is the problem: If there’s no good reason for keeping Kraft Foods out of Delaware, then they shouldn’t be advocating for changes that would do just that. Keeping patent owners from suing in the Eastern District of Texas is no reason to keep Kraft Foods out of Delaware, and it’s certainly no reason to make it impossible for all patent owners to sue foreign-based defendants that infringe in the United States. Advocates of venue revision tacitly admit as much when they say nothing about this collateral damage. This isn’t fair and balanced; it’s another huge turn of the anti-patent ratchet disguised as “reform.”

The same is true with the VENUE Act, which copies almost verbatim the venue provisions of the Innovation Act. This bill would also severely restrict where all patent owners can sue by making it so that a defendant doesn’t “reside” wherever a district court has personal jurisdiction arising from its allegedly-infringing conduct. To its credit, the VENUE Act does include new provisions allowing suit where an inventor conducted R&D that led to the application for the patent at issue. It also allows suit wherever either party “has a regular and established physical facility” and has engaged in R&D of the invention at issue, “manufactured a tangible product” that embodies that invention, or “implemented a manufacturing process for a tangible good” in which the claimed process is embodied. Furthermore, the bill makes the same venue rules applicable to patent owners suing for infringement and accused infringers filing for a declaratory judgment, and it solves the problem of foreign-based defendants by stating that the residency definition in Section 1391(c)(3) applies in that situation.

While the proposed changes in the VENUE Act aren't as severe as those sought by venue revision advocates in TC Heartland, they nevertheless take numerous venues off the table for patentees and accused infringers alike. But rather than acknowledge these wide-sweeping changes and offer reasons for implementing them, advocates of the VENUE Act simply harp on the narrative of “trolls” in Texas. For example, Julie Samuels at Engine argues that the “current situation in the Eastern District of Texas makes it exceedingly difficult for defendants” to enforce their rights and that we need to “level the playing field.” Likewise, Elliot Harmon at the EFF Blog suggests that the VENUE Act will “finally address the egregious forum shopping that dominates patent litigation” and “bring a modicum of fairness to a broken patent system.” Yet neither Samuels nor Harmon explains why we should change the rules for all patent owners and accused infringers—especially the ones that aren’t forum shopping in Texas.

The VENUE Act would simply take a system that is perceived to favor plaintiffs and replace it with one that definitely favors defendants. For instance, an alleged infringer with continuous and systematic contacts in the Eastern District of Virginia can currently be sued there, but the VENUE Act would take away this option since it’s based on mere general jurisdiction. Likewise, the current venue rules allow suits anywhere the court has specific jurisdiction over the defendant—potentially in every venue for a nationwide enterprise—yet the VENUE Act would make dozens of these venues improper. Furthermore, patentees can now bring suits against multiple defendants in a single forum, saving time and money for all involved, but the VENUE Act would make this possibility much less likely to occur.

The “troll” narrative employed by venue revision advocates may sound appealing on the surface, but it quickly becomes clear that they either haven’t considered or don’t care about how their proposed changes would affect everyone else. If we’re going to talk about abusive litigation practices in need of revision, we should talk about where they’re occurring across the entire patent system. This discussion should include the practices of both patent owners and alleged infringers, and we should directly confront the systemic collateral damage that any proposed changes would cause. As it stands, there’s little hope that the current myopic focus on “trolls” will lead to any true reform that’s fair and balanced for everyone.


Attacking the Notice-and-Staydown Straw Man

Ever since the U.S. Copyright Office announced its study of the DMCA last December, the notice-and-staydown issue has become a particularly hot topic. Critics of notice-and-staydown have turned up the volume, repeating the same vague assertions about freedom, censorship, innovation, and creativity that routinely pop up whenever someone proposes practical solutions to curb online infringement. Worse still, many critics don’t even take the time to look at what proponents of notice-and-staydown are suggesting, choosing instead to knock down an extremist straw man that doesn’t reflect anyone’s view of how the internet should function. A closer look at what proponents of notice-and-staydown are actually proposing reveals that the two sides aren’t nearly as far apart as critics would have us believe. This is particularly true when it comes to the issue of how well notice-and-staydown would accommodate fair use.

For example, Joshua Lamel’s recent piece at The Huffington Post claims that “innovation and creativity are still under attack” by the “entertainment industry’s intense and well-financed lobbying campaign” pushing for notice-and-staydown. Lamel argues that the “content filtering proposed by advocates of a ‘notice and staydown’ system . . . would severely limit new and emerging forms of creativity.” And his parade of horribles is rather dramatic: “Parents can forget posting videos of their kids dancing to music and candidates would not be able to post campaign speeches because of the music that plays in the background. Remix culture and fan fiction would likely disappear from our creative discourse.” Scary stuff, if true. But Lamel fails to cite a single source showing that artists, creators, and other proponents of notice-and-staydown are asking for anything close to this.

Similarly, Elliot Harmon of the Electronic Frontier Foundation (EFF) argues that “a few powerful lobbyists” are pushing for notice-and-staydown such that “once a takedown notice goes uncontested, the platform should have to filter and block any future uploads of the same allegedly infringing content.” Harmon also assumes the worst: “Under the filter-everything approach, legitimate uses of works wouldn’t get the reasonable consideration they deserve,” and “computers would still not be able to consider a work’s fair use status.” Like Lamel, Harmon claims that “certain powerful content owners seek to brush aside the importance of fair use,” but he doesn’t actually mention what these supposed evildoers have to say about notice-and-staydown.

Harmon’s suggestion that the reliance on uncontested takedown notices gives inadequate consideration to fair use is particularly strange as it directly contradicts the position taken by the EFF. Back in October of 2007, copyright owners (including CBS and Fox) and service providers (including Myspace and Veoh) promulgated a list of Principles for User Generated Content Services. These Principles recommend that service providers should use fingerprinting technology to enact notice-and-staydown, with the general caveat that fair use should be accommodated. Two weeks later, the EFF published its own list of Fair Use Principles for User Generated Video Content suggesting in detail how notice-and-staydown should respect fair use.

The EFF’s Fair Use Principles include the following:

“Filtering” technology should not be used to automatically remove, prevent the uploading of, or block access to content unless the filtering mechanism is able to verify that the content has previously been removed pursuant to an undisputed DMCA takedown notice or that there are “three strikes” against it:

1. the video track matches the video track of a copyrighted work submitted by a content owner;
2. the audio track matches the audio track of that same copyrighted work; and
3. nearly the entirety (e.g., 90% or more) of the challenged content is comprised of a single copyrighted work (i.e., a “ratio test”).

If filtering technologies are not reliably able to establish these “three strikes,” further human review by the content owner should be required before content is taken down or blocked.
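To make the rule concrete, the EFF's test can be sketched as a simple decision procedure. This is my own illustration, not any actual filtering system: the names and data structure are invented, and only the three criteria, the 90% ratio test, and the uncontested-notice exception come from the principles quoted above.

```python
# Hypothetical sketch of the EFF's "three strikes" filtering rule.
# Only the criteria themselves come from the Fair Use Principles;
# the names and packaging here are illustrative.

from dataclasses import dataclass

@dataclass
class UploadMatch:
    """Match data a fingerprinting system might report for one upload."""
    video_matches: bool            # strike 1: video track matches a claimed work
    audio_matches: bool            # strike 2: audio track matches that same work
    match_ratio: float             # fraction of the upload made up of that work
    prior_undisputed_notice: bool  # previously removed via an uncontested notice

def may_auto_block(m: UploadMatch, ratio_threshold: float = 0.90) -> bool:
    """Return True only when automated blocking would be permissible under
    this sketch of the EFF principles; otherwise human review is required."""
    if m.prior_undisputed_notice:
        return True
    return (m.video_matches and m.audio_matches
            and m.match_ratio >= ratio_threshold)
```

Under this sketch, a 95% audio-and-video match could be blocked automatically, while a 50% match (say, a clip embedded in commentary) would fall through to human review, which is the point of the ratio test.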

Though not explicitly endorsing notice-and-staydown, the EFF thinks it’s entirely consistent with fair use so long as (1) the content at issue has already been subject to one uncontested takedown notice, or (2) the content at issue is at least a 90% match to a copyrighted work. And the funny thing is that supporters of notice-and-staydown today are actually advocating for what the EFF recognized to be reasonable over eight years ago.

While Harmon never explicitly identifies the “powerful lobbyists” he accuses of wanting to trample on fair use, he does link to the Copyright Office’s recently-announced study of the DMCA and suggest that they can be found there. Reading through that announcement, I can only find three citations (in footnote 36) to people advocating for notice-and-staydown: (1) Professor Sean O’Connor of the University of Washington School of Law (and Senior Scholar at CPIP), (2) Paul Doda, Global Litigation Counsel at Elsevier, and (3) Maria Schneider, composer/conductor/producer. These three cites all point to testimonies given at the Section 512 of Title 17 hearing before the House Judiciary Committee in March of 2014, and they show that Harmon is attacking a straw man. In fact, all three of these advocates for notice-and-staydown seek a system that is entirely consistent with the EFF’s own Fair Use Principles.

Sean O’Connor seeks notice-and-staydown only for “reposted works,” that is, “ones that have already been taken down on notice” and that are “simply the original work reposted repeatedly by unauthorized persons.” His proposal only applies to works that “do not even purport to be transformative or non-infringing,” and he specifically excludes “mash-ups, remixes, covers, etc.” This not only comports with the EFF’s recommendations, it goes beyond them. Where the EFF would require either a previously uncontested notice or at least a 90% match, O’Connor thinks there should be both an uncontested notice and a 100% match.

The same is true for Paul Doda of Elsevier, who testifies that fingerprinting technology is “an appropriate and effective method to ensure that only copies that are complete or a substantially complete copy of a copyrighted work are prevented or removed by sites.” Doda explicitly notes that filtering is not suitable for “works that might require more detailed infringement analysis or ‘Fair Use’ analysis,” and he praises YouTube’s Content ID system “that can readily distinguish between complete copies of works and partial copies or clips.” Doda’s vision of notice-and-staydown is also more protective of fair use than the EFF’s Fair Use Principles. While the EFF suggests that a previously uncontested notice is sufficient, Doda instead only suggests that there be a substantial match.

Unlike O’Connor and Doda, Maria Schneider is not a lawyer. She’s instead a working musician, and her testimony reflects her own frustrations with the whack-a-mole problem under the DMCA’s current notice-and-takedown regime. As a solution, Schneider proposes that creators “should be able to prevent unauthorized uploading before infringement occurs,” and she points to YouTube’s Content ID as evidence that “it’s technically possible for companies to block unauthorized works.” While she doesn’t explicitly propose that there be a substantial match before content is filtered, Schneider gives the example of her “most recent album” being available “on numerous file sharing websites.” In other words, she’s concerned about verbatim copies of her works that aren’t possibly fair use, and nothing Schneider recommends contradicts the EFF’s own suggestions for accommodating fair use.

Lamel and Harmon paint a picture of powerful industry lobbyist boogeymen seeking an onerous system of notice-and-staydown that fails to adequately account for fair use, but neither produces any evidence to support their claims. Responses to the Copyright Office’s DMCA study are due on March 21st, and it will be interesting to see whether any of these supposed boogeymen really show up. There’s little doubt, though, that critics will continue attacking the notice-and-staydown straw man. And it’s really a shame, because advocates of notice-and-staydown are quite conscious of the importance of protecting fair use. This is easy to see, but first you have to look at what they’re really saying.


Endless Whack-A-Mole: Why Notice-and-Staydown Just Makes Sense

Producer Richard Gladstein knows all about piracy. As he recently wrote for The Hollywood Reporter, his latest film, The Hateful Eight, was “viewed illegally in excess of 1.3 million times since its initial theatrical release on Christmas Day.” Gladstein is not shy about pointing fingers and naming names. He pins the blame, in no small part, on Google and (its subsidiary) YouTube—the “first and third most trafficked websites on the internet.” While acknowledging that fair use is important, Gladstein argues that it has become “an extremely useful tool for those looking to distract from or ignore the real copyright infringement issue: piracy.” His point is that it’s simply not fair use when someone uploads an entire copyrighted work to the internet, and claims that service providers can’t tell when something is infringing are disingenuous.

Gladstein questions why Google and YouTube pretend they are “unable to create and apply technical solutions to identify where illegal activity and copyright infringement are occurring and stop directing audiences toward them.” In his estimation, “Google and YouTube have the ability to create a vaccine that could eradicate the disease of content theft.” While Gladstein doesn’t mention the DMCA or its notice-and-takedown provisions specifically, I think what he has in mind is notice-and-staydown. That is, once a service provider is notified that the copyright owner has not authorized a given work to be uploaded to a given site, that service provider should not be able to maintain its safe harbor if it continues hosting or linking to the given work.

No small amount of ink has been spilled pointing out that the DMCA’s notice-and-takedown provisions have led to an endless game of whack-a-mole for copyright owners. Google’s own transparency report boasts that its search engine has received requests to take down over 63 million URLs in the past month alone. And it helpfully tells us that it’s received over 21 million such requests over the past four years for just one site: rapidgator.net. Google’s transparency doesn’t extend to how many times it’s been asked to remove the same work, nor does it tell us anything about takedown requests for YouTube. But there’s no reason to think those numbers aren’t just as frustrating for copyright owners.

The question one should ask is why these numbers aren’t frustrating for Google and YouTube, as they have to deal with the deluge of notices. Apparently, they don’t mind at all. According to the testimony of Google’s Senior Copyright Policy Counsel, Katherine Oyama, the “DMCA’s shared responsibility approach works.” Oyama notes that Google has spent tens of millions of dollars creating the infrastructure necessary to efficiently respond to the increasing number of takedown notices it receives, but many (if not most) copyright owners don’t have those kinds of resources. For them, it’s daily battles of manually locating infringements across the entire internet and sending takedown notices. For Google, it’s mostly-automated responses to take down content that otherwise brings ad-based revenue.

These struggles hit individual authors and artists the hardest. As the U.S. Copyright Office noted in its recently-announced study of the DMCA, “[m]any smaller copyright owners . . . lack access to third-party services and sophisticated tools to monitor for infringing uses, which can be costly, and must instead rely on manual search and notification processes—an effort that has been likened to ‘trying to empty the ocean with a teaspoon.’” What makes the process so frustrating—and futile—is the fact that the same works get uploaded to the same platforms time and time again. And any time spent sending the same takedown notice to the same service provider is time that is not spent honing one’s craft and creating new works.

Gladstein is correct: Service providers like Google and YouTube could be doing more. And, somewhat ironically, doing more for copyright owners would actually mean that both sides end up doing less. The obvious solution to the whack-a-mole problem is notice-and-staydown—it just makes sense. There’s simply no reason why a copyright owner should have to keep telling a service provider the same thing over and over again.

Those who object to notice-and-staydown often point out that the DMCA process is susceptible to abuse. Indeed, there are some who send notices in bad faith, perhaps to silence unwanted criticism or commentary. But there’s no reason to think that such abuse is the rule and not the exception. Google’s own numbers show that it complied with 97% of notices in 2011 and 99% of notices in 2013. That still leaves room for a potentially significant amount of abuse by notice-senders, but it also reflects an enormous amount of intentional abuse by infringers whose conduct generated the legitimate notices in the first place. And the vast majority of those infringers won’t get so much as a slap on the wrist.

Turning back to Gladstein’s theme, discussions about fair use or takedown abuse are beside the point. The simple fact is that garden-variety copyright infringement involves neither issue. As CPIP Senior Scholar Sean O’Connor testified to Congress, “for many artists and owners the majority of postings are simply straight-on non-transformative copies seeking to evade copyright.” It’s this simple piracy, where entire works are uploaded to the internet for all to take, that concerns copyright owners most. Gladstein cares about the 1.3 million illicit distributions and performances of The Hateful Eight that are obviously infringing, not the commentary of critics that would obviously be fair use. And takedown notices sent because of these illicit uploads are anything but abusive—the abusers are the infringers.

The technology to make notice-and-staydown work already exists. For example, Audible Magic and YouTube both have the technology to create digital fingerprints of copyrighted works. When users later upload these same works to the internet, the digital fingerprints can be matched so that the copyright owner can then control whether to allow, monetize, track, or block the upload altogether. This technology is a great start, but it’s only as good as its availability to copyright owners. The continued proliferation of infringing works on YouTube suggests that this technology isn’t being deployed properly. And Google has no comparable technology available for its search engine, leaving copyright owners with little choice but to continue playing endless whack-a-mole.
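The fingerprint-and-policy workflow described above can be sketched in miniature. To be clear, this is a toy illustration of the general idea, not how Content ID or Audible Magic actually work: real systems use perceptual audio/video fingerprints that survive re-encoding, while this sketch just hashes overlapping windows of raw bytes. All function names and thresholds are invented for illustration.

```python
# Toy sketch of fingerprint-based matching with owner-selected policies
# (allow, monetize, track, or block). Purely illustrative: real systems
# use robust perceptual fingerprints, not raw byte hashes.

import hashlib

def fingerprint(samples: bytes, window: int = 8, step: int = 4) -> set[str]:
    """Hash overlapping windows of the raw data as a crude fingerprint."""
    end = max(len(samples) - window + 1, 1)
    return {hashlib.sha1(samples[i:i + window]).hexdigest()
            for i in range(0, end, step)}

def apply_policy(upload: bytes, registry: dict[str, set[str]],
                 policies: dict[str, str], threshold: float = 0.8) -> str:
    """Compare an upload against registered fingerprints and return the
    owner-selected action for the first strong match, or 'publish'."""
    up = fingerprint(upload)
    for work_id, prints in registry.items():
        overlap = len(up & prints) / max(len(prints), 1)
        if overlap >= threshold:
            return policies.get(work_id, "block")
    return "publish"

# Hypothetical usage: an owner registers a work and chooses "monetize".
registry = {"song": fingerprint(b"some copyrighted audio bytes here")}
policies = {"song": "monetize"}
```

An exact re-upload of the registered work matches completely and triggers the owner’s chosen policy, while unrelated content falls through to normal publication; the owner, not the platform, decides among allow, monetize, track, or block.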

Fortunately, the tides have been turning, especially as the technology and content industries continue to merge. And strides are being made in the courts as well. For example, a Court of Appeal in Germany recently held that YouTube has the duty to both take down infringing content and to make sure that it stays down. A quick search of YouTube today shows that The Hateful Eight, which is still in theaters, is legitimately available for pre-order and is illicitly available to be streamed right now. One wonders why YouTube chooses to compete with itself, especially when it has the tools to prevent such unfair competition. Regardless, there is real hope that Gladstein’s call for a “vaccine that could eradicate the disease of content theft” will be just what the doctor ordered—and that “vaccine” is notice-and-staydown.

[Update: This post unintentionally generated confusion as to whether I think notice-and-staydown means that fingerprinting technologies should be used with search engines. I do not think that would work well. I explain how search engines could do more to help copyright owners with the whack-a-mole problem in this follow-up post.]


Google’s Patent Starter Program: What it Really Means for Startups

The following guest post comes from Brad Sheafe, Chief Intellectual Property Officer at Dominion Harbor Group, LLC.

By Brad Sheafe

Recalling its rags-to-riches story of two guys with nothing but a great idea, a garage, and a hope of making the world a better place, Google recently announced its new Patent Starter Program. As part of its commitment to the culture from which it came, Google claims that it simply wants to help startups navigate the patent landscape by assigning them certain patents while it receives a license back. It describes the situation as follows:

The world of patents can be very confusing, cumbersome and often distracting for startups. All too often these days, the first time a startup has to deal with a patent issue is when a patent troll attacks them. Or when a prospective investor may ask them how they are protecting their ideas (“You don’t have any patents???”). These problems are the impetus behind the Patent Starter Program[.]

There are of course many tendentious assertions here – from the well-established definitional problems with the pejorative term “patent troll,” which is often used to attack startups, to the false claim that patents are “distracting” for startups (as anyone who watches Shark Tank knows). But we will not go over this well-tread territory here. For our purposes, this statement is notable because it is couched entirely in terms of a desire to help other tech startups. But when one looks at the specific details of the Patent Starter Program (PSP), it’s quite clear that it is designed to benefit Google as well – perhaps even most of all.

On its face, the PSP is advertised as an opportunity for the first 50 eligible participants (“startups or developers having 2014 Revenues between US $500,000 and US $20,000,000”) to select two patent families from an offering of three to five families of Google’s choosing. These families are intended to be broadly relevant to the participant’s business, but Google makes no guarantee that they will be, and there is no “re-do” if the participant doesn’t like what Google offers the first time.

In exchange for access to these patents, participants accept fine print that imposes significant contractual restrictions – fine print that many are not paying attention to. First and foremost, the patents cannot be used to initiate a lawsuit for infringement. They can be used only “defensively,” that is, if the participant is sued for infringement first. In fact, if a participant does choose to assert the supposedly-owned patent rights outside of Google’s terms, the Patent Purchase Agreement punishes the startup by requiring “additional payments” to be made to Google.

The boilerplate text of the Agreement states that this additional payment will be $1 million or more! Although specific payments may end up varying from this based on the negotiating tactics of the startups who make use of the PSP, the punitive nature of this payment is clear. For an undercapitalized startup that is just starting out in the marketplace and perhaps still living on the life support provided by venture capitalists, a $1+ million payment is a monumental charge to write down. This is especially the case if the startup is simply exercising a valid legal right that is integral to all property ownership – the right to keep others from trespassing on one’s property.

Additionally, participants in the PSP must also join the LOT Network (LOT stands for “License on Transfer”), which presents itself as a cross-licensing network committed to reducing the alleged “PAE problem.” Members of the LOT Network must “grant a portfolio-wide license to the other participants” in the LOT Network, but “the license becomes effective ONLY when the participant transfers one or more patents to an entity other than another LOT Network participant, and ONLY for the patent(s) actually transferred.”

On its face, this might still seem a reasonable concession for the “free” acquisition of some of Google’s patents. But the fine print makes it clear that there are additional burdens agreed to by the startup. First, the LOT Network agreement includes all of the participant’s patents, and not just those it acquires from Google. Second, even if one decides later to withdraw from the LOT Network, the agreement explicitly states that all of the patents owned by the participant at the time of withdrawal will continue to remain subject to the terms of the LOT agreement. The LOT Network thus operates in much the same way Don Corleone viewed membership in the “family” – people are welcome in on certain non-negotiable terms, and good luck ever getting out.

These terms add up to incredibly onerous and surprising restrictions on startups, which often need flexibility in the marketplace to adapt their business models. But as the old late-night television commercials used to say, “But wait, there’s more!” If the terms and conditions of the LOT Network seem highly limiting on the rights associated with patent ownership and overly broad in terms of who gets a license to the applicant’s patents, there’s an even greater surprise in the license-back provisions of Google’s Patent Purchase Agreement. Once one wades through the legalese, it becomes clear that while a participant in the PSP and LOT Network nominally owns the patents granted by Google, these patents are effectively licensed to everyone doing anything.

There is substantial legalese here that is clearly “very confusing, cumbersome and . . . distracting for startups,” the very charge leveled by Google against the patent system as the justification for the PSP and LOT Network. We’ll break it all down in a moment, but here’s the contractual language that creates this veritable universal license. The agreement gives Google, its “Affiliates” (defined to include any “future Affiliates, successors and assigns”), and its “Partners” (defined as “all agents, advisors, attorneys, representatives, suppliers, distributors, customers, advertisers, and users of [Google] and/or [Google] Affiliates”) a license to the patents Google grants to the participant if the participant were ever to allege infringement by any of these partners through their use of any of Google’s “Products” (defined as “…all former, current and future products, including but not limited to services, components, hardware, software, websites, processes, machines, manufactures, and any combinations and components thereof, of [Google] or any [Google] Affiliates that are designed, developed, sold, licensed, or made, in whole or substantial part, by or on behalf of that entity”).

So let’s review: A startup can acquire some patents from Google, but only from the handful of patents that Google itself picks out (which may or may not relate to the participant’s business). The startup must agree to incredibly broad license-back provisions and promise not to assert any ownership rights (unless the participant gets sued first) on penalty of a $1+ million payment to Google. And the startup is bound to join the LOT Network, where Google execs sit on the Board of Directors, which further reduces the rights not only in the patents granted by Google, but in the startup’s entire portfolio of patents – including, most importantly, patents not acquired from Google.

To be fair, Google is far from the only large corporation to take advantage of its size and financial strength to mold public perception, markets, and even government policy to its liking. Some might even turn a blind eye, calling it “good business” and accepting such behavior as the price we all must pay for the products and services that established corporations like Google offer. To some extent, there is some truth in this – most of us use Google’s services every day and many of us working in the innovation industries continue to be impressed with its innovative approach to those services and its products.

When it comes to the underpinnings of the innovation economy – the startups that drive economic growth and the patent system that provides startups with legal and financial security against established market incumbents (again, as any episode of Shark Tank makes clear) – the restrictive contractual conditions in the PSP and LOT Network give one pause. After all, Google began as a startup relying on fully-licensable IP, despite the fact that Google apparently wants us all to forget about its founding page-rank patent (Patent No. 6,285,999, filed on January 9, 1998). One will search Google’s corporate-history website in vain, for instance, for evidence of Larry Page’s patent. Yet it’s well-established that Google touted this “patent-pending” search technology in its announcement in 1999 that it had received critical venture-capital funding.

The next Google is out there, counting on the same patent rights to be in place for it to rely upon just as Google did in the late 1990s. Instead of making every effort to collapse the very structure on which its success was built, shouldn’t Google be the first to defend it? Competition will always be the greatest motivator for those who have what it takes to compete – and with its balance sheet and world-renowned collection of bright, inventive minds, Google should not be afraid of competition. Or worse, give the appearance of promoting competition and then use that appearance to dupe potentially competitive startups into emasculating the intellectual property those startups need to actually compete.

So, if Google and its far-flung business partners in the high-tech sector want to support startups on terms that are reasonable for both sides given their relative positions, there is certainly nothing wrong with this. But Google shouldn’t hide behind the bugaboos of “patent trolls” and the supposed “complexity” of a patent system designed to benefit small innovators in order to drive a largely one-sided partnership cloaked in confounding legalese that certainly does not match its feel-good rhetoric to startups, to Congress, or to the public.

If an established company wants to support innovation by providing worthy startups with the stepping stones they need for success, then go for it! Everyone should be 100% behind that concept – but that is not what Google’s PSP or the LOT Network represent. These aren’t stepping stones to successful innovation, but rather they are deliberately fashioned and enticingly placed paving stones that lead to the shackling of startups with terms and covenants that give the appearance of ownership but strip away the very rights that make that ownership meaningful – and all the while Google benefits both from the relationship and the public perception of munificence. When one is using someone else’s idea, one should compensate them for it, and the nature of the license and the compensation should certainly match what one is saying publicly about this agreement.

All we can ask, Google, is that you treat others as you yourself were treated when you were a startup some fifteen years ago. Now that you’re the market incumbent: just, well, Don’t Be Evil.