Attorneys Kristen McCallion and Darra Loganzo Co-Author World Trademark Review Article "Could AI Require Platforms to Do More to Prevent Infringement?"

Fish & Richardson attorneys Kristen McCallion and Darra Loganzo co-authored an article for World Trademark Review discussing how continual advancements in technology could require platforms to take new measures to police their sites for infringing content.

Read the full article on World Trademark Review.


Like a game of whack-a-mole, as users copy, paste, upload, and post online, infringing content can pop back up almost as quickly as it is taken down. This creates a continuous monitor-and-takedown process for brand owners and internet service providers (ISPs) such as website hosting companies, computer system operators, social networks, intermediary platforms, and e-commerce sites. Such platforms and providers need to navigate continual advancements in technology and the law to minimise their potential liability for hosting infringing content. And as the ease and speed of posting have increased, so have preventive measures. For example, technology with image search capabilities can assist with identifying and removing infringing content that cannot be identified simply through a basic text search.

Consider the online search landscape. In 2011, Google released its reverse-image search capability. Microsoft Bing followed not long after, releasing its reverse-image search tool in 2014. Today, mobile phones auto-categorise photos using facial recognition and image-filtering software. Trademark lawyers also use platforms that implement a similar artificial intelligence (AI) search function to search and clear logos and design marks. These capabilities have their limitations (eg, false positives), but in just the past decade, this technology has drastically improved as it has become commonplace.
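To make the image-matching idea concrete, the following is a minimal sketch (in Python, using the Pillow imaging library) of "average hashing", one simple way near-duplicate images can be fingerprinted and compared. The file names and review threshold are hypothetical, and commercial reverse-image search relies on far more sophisticated techniques, such as learned embeddings.

```python
# Minimal average-hash (aHash) sketch of how image-similarity matching can
# flag near-duplicate images that a plain text search would never surface.
# Illustrative only; production systems use far more robust techniques.
from PIL import Image  # pip install Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint: shrink to 8x8 greyscale,
    then set one bit per pixel based on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; small distances suggest near-duplicates."""
    return bin(a ^ b).count("1")


# Hypothetical usage: compare a newly uploaded image against a known brand
# asset. A small threshold (e.g., <= 10 of 64 bits) flags a likely match
# for human review.
if hamming_distance(average_hash("upload.png"), average_hash("brand_logo.png")) <= 10:
    print("Possible match -- route to review queue")
```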

AI search tools are also entering the mainstream. There are AI art tools that can identify, use, and combine millions of images from across the internet to generate new artwork, and there are sophisticated conversational AI text generators like ChatGPT that “research” the internet and provide an intelligent, human-like response to inquiries.

As these types of tools become more common, intermediary platforms may be expected to use them to monitor for, and perhaps even prevent, infringement.

Courts have not yet answered the question of how far intermediary platforms must go to prevent infringement by third parties—or even if such a requirement exists. Courts may find intermediary platforms secondarily liable for trademark and copyright infringement if they (i) continue to supply services to someone they knew or had reason to know was engaging in trademark infringement and (ii) have “direct control and monitoring” over infringing content posted by third-party users on their websites (Louis Vuitton Malletier v. Akanoc Solutions, 658 F.3d 936, 942 (9th Circuit 2011)).

The Second Circuit, in Tiffany (NJ) v. eBay (600 F.3d 93, 107–08 (2d Circuit 2010)), held that the requisite knowledge must be of “specific instances of actual infringement,” not merely “general knowledge,” while the Ninth Circuit has commented that even “constructive knowledge” may be enough to hold a web-hosting company liable.

It seems clear, at the moment, that intermediary platforms do not have an affirmative duty to proactively search for potentially infringing content. However, they are still expected to take on some form of supervisory role once they become aware of specific instances and particularised facts of infringement. For instance, the safe harbor provisions of the Digital Millennium Copyright Act (“DMCA”) require ISPs to disable or remove infringing content upon receipt of a compliant takedown notice, and most social media platforms and websites have takedown protocols in place.
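As a rough illustration of the kind of takedown protocol described above, the sketch below models a simplified notice intake. The field names and the disable_listing() hook are hypothetical, and a real implementation would also track counter-notices, repeat infringers, and the statute's full formalities.

```python
# Simplified sketch of a notice-and-takedown intake, loosely modelled on the
# DMCA workflow described above. Field names are hypothetical.
from dataclasses import dataclass


@dataclass
class TakedownNotice:
    complainant: str           # who is asserting the claim
    work_identified: str       # the work allegedly infringed
    infringing_url: str        # where the allegedly infringing material lives
    good_faith_statement: bool # complainant's good-faith assertion
    signature: str


def disable_listing(url: str) -> None:
    """Stand-in for the platform's actual removal mechanism."""
    print(f"Disabled: {url}")


def is_facially_complete(notice: TakedownNotice) -> bool:
    """Very rough completeness check before the notice is acted on."""
    return all([notice.complainant, notice.work_identified,
                notice.infringing_url, notice.good_faith_statement,
                notice.signature])


def handle_notice(notice: TakedownNotice) -> str:
    if not is_facially_complete(notice):
        return "reject: notice incomplete"
    disable_listing(notice.infringing_url)
    return "content disabled pending any counter-notice"
```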

In 2017, the Ninth Circuit held that a server owner did not have to undertake the “onerous and unreasonably complicated” manual process of searching hundreds of words proposed by a copyright owner to locate infringing content (Perfect 10 v. Giganews (847 F.3d 657, 671 (9th Circuit 2017))). Because “there were no simple measures available” that the defendant “failed to take to remove” the material, the court refused to find contributory infringement liability. But today, keyword searching is easier, faster, and perhaps not so “onerous and unreasonably complicated.”
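To illustrate why such a sweep is less burdensome today, the sketch below matches listings against a list of rights-holder-supplied terms. The terms and listings are invented; the point is only that the scan itself is computationally cheap, not that it settles the legal question.

```python
# Sketch of the kind of keyword sweep a rights holder might propose: match
# every listing against hundreds of supplied terms in one pass. The terms
# and listings here are invented.
import re

flagged_terms = ["brandname", "brand name", "brandnme"]  # hypothetical list
pattern = re.compile("|".join(re.escape(t) for t in flagged_terms),
                     re.IGNORECASE)

listings = [
    {"id": 1, "title": "Generic poster"},
    {"id": 2, "title": "BrandName sticker pack"},
]

hits = [listing for listing in listings if pattern.search(listing["title"])]
print(hits)  # -> the BrandName listing, queued for human review
```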

The Ninth Circuit is currently deciding the culpability of e-commerce platform Redbubble (oral arguments were heard on 12 January 2023 in Atari Interactive v. Redbubble (No. 21-17062 (9th Circuit 2023)) and YYGM SA v. Redbubble (No. 21-56236 (9th Circuit 2023))) and whether it infringed by allegedly knowingly selling products bearing trademarks owned by Atari Interactive Inc and YYGM SA (d/b/a Brandy Melville) and failing to take requisite steps to prevent infringement. The district court in each case suggested that Redbubble might need to take at least some proactive measures as an intermediary platform.

In Atari Interactive v. Redbubble, the United States District Court for the Northern District of California found that Redbubble’s “attempts to screen” its website for infringing content were not “unreasonable,” “[g]iven that use of trademarked content is difficult to detect without input from the trademark owners.”

It also addressed Atari’s proposal to implement a keyword search for certain terms. The court did not decide whether Redbubble should be (or must be) “disabling search terms” to prevent users from searching and finding infringing content, but it did note that “Redbubble is not required to disable functionality capable of substantial non-infringing use merely because some parties may use it to infringe.”
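For concreteness, the snippet below sketches what “disabling search terms” might look like in practice, with queries containing a blocked term returning no results. The blocklist and behaviour are hypothetical and are not a description of Redbubble’s actual system.

```python
# Minimal illustration of "disabling search terms": queries containing a
# blocked term return no results instead of surfacing matching listings.
# The blocklist and catalogue are hypothetical.
BLOCKED_QUERY_TERMS = {"atari", "brandy melville"}


def search(query: str, catalogue: list[str]) -> list[str]:
    q = query.lower()
    if any(term in q for term in BLOCKED_QUERY_TERMS):
        return []  # term disabled: suppress results entirely
    return [item for item in catalogue if q in item.lower()]


print(search("atari t-shirt", ["Atari tee", "plain tee"]))  # -> []
print(search("plain", ["Atari tee", "plain tee"]))          # -> ['plain tee']
```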

On summary judgment, the court found genuine disputes of material fact on the issue of contributory infringement, but held that Atari did not show that Redbubble was willfully blind. The jury later found that Redbubble was not liable on Atari’s direct or vicarious infringement claims.

In YYGM SA v. Redbubble, the United States District Court for the Central District of California assessed Brandy Melville’s argument that Redbubble “has not disabled a shopper’s ability to search for the keywords ‘Brandy Melville’ as Brandy Melville has requested.” The court noted that “[w]hile maintaining such a functionality may perform the legitimate service of helping customers find products that are similar to Brandy Melville’s without actually infringing them, it also gives users seeking to peddle infringing products a ready means of doing so.”

The court found Brandy Melville’s argument “persuasive” but left the question of contributory infringement to the jury and denied summary judgment. The jury found Redbubble was liable for contributory infringement of the Brandy Melville trademarks.

During back-to-back oral arguments on the appeals of these two cases, the Ninth Circuit noted that the “tension” between the district courts’ perspectives on Redbubble’s liability is “troubling.” The Ninth Circuit also acknowledged the grey area in the law about the level of “policing” and “measures” that intermediary platforms are required to take.

These cases suggest that, while courts may not know exactly what steps intermediary platforms must take, those steps should not be overly taxing or time-consuming to implement. New AI search technology may simplify this process and even extend it to image searches.

No US court has yet held that an intermediary platform must have a system in place to reverse-image search visual content to thwart and take down infringing content. But in the next five to 10 years, if advanced AI tools become more widespread and less expensive for intermediary platforms to use, a court could hold that these tools must be used not only to monitor and take down infringing content but also to prevent it.