Shallow Harbors: EU Poised To Rewrite Rules For User-Generated Content

Almost from the day the Digital Millennium Copyright Act came into effect, copyright owners have sought to limit the so-called safe harbor protections against infringement liability the law grants to online service providers that host user-uploaded content.

But a series of lawsuits aimed at setting strict limits on the safe harbors, starting at least as early as Perfect 10’s 2002 litigation against CCBill and stretching through the Veoh cases and Viacom’s long-running battle with YouTube, largely failed in that regard and arguably made things worse for rights owners. The result was a series of court rulings reinforcing the strict and precise requirements of the notice-and-takedown system the law spells out for getting infringing content removed from online platforms.

Legislative efforts to limit or weaken the safe harbors fared no better, culminating in the spectacular crash-and-burn in 2012 of the Stop Online Piracy Act (SOPA) in the House and the PROTECT IP Act (PIPA) in the Senate, a debacle that has scared Congress off similar attempts ever since.

The legal and legislative battles over the scope of online safe harbors outside the U.S. haven’t followed precisely the same paths as in the U.S., owing to differences in the statutes and case law, but they have led to more or less the same result: the safe harbors have remained safe for the online platforms.

The tide may finally be starting to turn, however. This week, the European Parliament’s Legal Affairs Committee voted narrowly to approve the European Union’s controversial Copyright Reform Directive. The vote establishes the Parliament’s official position on the proposed directive ahead of final negotiations with the European Council and the individual member states, although opponents of the measure may yet force a vote of the full Parliament before those negotiations can begin.

The most hotly contested provision of the directive is Article 13, which for the first time would impose an affirmative duty on service providers to actively police the content posted by their users for infringing material:

Information society service providers that store and provide to the public access to large amounts of works or other subject-matter uploaded by their users shall, in cooperation with rightholders, take measures to ensure the functioning of agreements concluded with rightholders for the use of their works or other subject-matter or to prevent the availability on their services of works or other subject-matter identified by rightholders through the cooperation with the service providers. Those measures, such as the use of effective content recognition technologies, shall be appropriate and proportionate. The service providers shall provide rightholders with adequate information on the functioning and the deployment of the measures, as well as, when relevant, adequate reporting on the recognition and use of the works and other subject-matter.

That clause, the requirement that providers act “to prevent the availability” of works identified by rightholders, has been the Holy Grail of rights owners from the beginning. As interpreted by the courts, the DMCA imposes no obligation on service providers who follow the requirements of the notice-and-takedown system to actively search for infringing content on their platforms, or to filter such content before it can be uploaded.

Instead, the burden of scouring YouTube, Facebook, Instagram, SoundCloud and hundreds of other user-generated content platforms for infringing material has fallen entirely on rights owners, who must then follow strict procedures to get it taken down. Rights owners have long complained the task is essentially Sisyphean: each takedown simply leads to a reposting, forcing them to go through the same process all over again.
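Those “strict procedures” trace back to the statute itself. As a rough illustration only, here is a minimal Python sketch of the elements 17 U.S.C. § 512(c)(3) requires in a valid takedown notice; the class and field names are hypothetical, not any platform’s actual API:

```python
from dataclasses import dataclass

@dataclass
class TakedownNotice:
    """Illustrative model (hypothetical names) of the elements
    17 U.S.C. § 512(c)(3) requires in a valid DMCA takedown notice."""
    signature: str              # physical or electronic signature
    work_identified: str        # identification of the copyrighted work
    material_location: str      # identification/location of the infringing material
    contact_info: str           # complainant's address, phone, or email
    good_faith_statement: bool  # belief that the use is not authorized
    accuracy_statement: bool    # statement, under penalty of perjury, that the
                                # notice is accurate and the sender is authorized

    def substantially_complies(self) -> bool:
        # Providers need only act on notices that substantially comply
        # with the statute; a defective notice can be rejected outright.
        return all([
            self.signature,
            self.work_identified,
            self.material_location,
            self.contact_info,
            self.good_faith_statement,
            self.accuracy_statement,
        ])
```

Assembling those elements, per work and per upload, across hundreds of platforms is what makes the process so burdensome at scale.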

A few platforms, notably YouTube and Facebook, have offered rights owners tools such as YouTube’s Content ID to partly automate the process. But the relief those tools provide still comes after the fact, once the content is already on the platform.

What rights owners have long sought — and courts have long denied them — is to impose a legal obligation on service providers to police their own platforms and prevent the uploading of infringing content.

Should the EU’s Copyright Reform Directive become law (still not certain), it would mark the first time a law in a major market has imposed such a requirement.

The EU directive, of course, would apply only within the EU, not in the U.S. But as I noted in the RightsTech blog this week, its impact could be far-reaching.

For one thing, it could encourage rights owners to risk going back to the U.S. Congress to demand the same level of protection here as they would get in Europe.

Further, Article 13’s requirement that service providers take measures “such as the use of effective content recognition technologies” could spur the development of many more technologies like YouTube’s Content ID system. Once those tools exist and are in place, pressure will no doubt grow to deploy them in the U.S. as well. Or, as with the EU’s General Data Protection Regulation (GDPR), U.S.-based platforms may simply deploy such technologies globally, because that would be more efficient than maintaining two different upload mechanisms.
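What would such “content recognition technologies” look like in practice? Content ID’s internals are proprietary, but the general shape of an upload-time filter is straightforward. Here is a minimal sketch, assuming a hypothetical rightsholder-supplied fingerprint database (REFERENCE_DB) and a stand-in fingerprint function; production systems use robust perceptual fingerprints rather than the exact-match hash shown here:

```python
import hashlib

# Hypothetical reference database: fingerprints registered by rights owners,
# mapped to the policy the rightsholder has chosen (e.g. "block", "monetize").
REFERENCE_DB: dict[str, str] = {}

def fingerprint(chunk: bytes) -> str:
    """Stand-in fingerprint. Real systems use perceptual fingerprints that
    survive re-encoding, cropping, and pitch-shifting; a cryptographic hash
    like this one catches only byte-exact copies."""
    return hashlib.sha256(chunk).hexdigest()

def screen_upload(data: bytes, chunk_size: int = 1 << 20) -> str:
    """Check each 1 MiB chunk of an upload against registered fingerprints
    before the content is published, i.e. the "prevent the availability"
    duty Article 13 contemplates, rather than after-the-fact takedown."""
    for start in range(0, len(data), chunk_size):
        fp = fingerprint(data[start:start + chunk_size])
        policy = REFERENCE_DB.get(fp)
        if policy is not None:
            return policy  # apply the rightsholder's chosen policy
    return "publish"       # no match: the upload goes live
```

The key design point is where the check happens: at upload, before publication, rather than in response to a takedown notice. That is precisely the shift from notice-and-takedown to notice-and-staydown that Article 13 contemplates.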

The net result could be a de facto shoaling of the DMCA’s safe harbors.