
Louboutin has prevailed in the latest round of a trademark battle in India, with the court issuing an injunction that bars an unaffiliated footwear company from offering up copycat shoes bearing red soles. At the same time, the court shed light on how we can expect artificial intelligence (“AI”)-generated evidence to be treated going forward. In an August 22 decision, Justice Prathiba M. Singh of the High Court of Delhi preliminarily ordered M/S The Shoe Boutique to refrain from selling shoes that mirror Louboutin’s “spike patterns” and its red sole, noting that the court does not “recognize a monopoly in favor of [Louboutin] for all spiked shoes or colored soles,” and as such, products must be “colorable or a slavish imitation[s]” of the Louboutin designs and its famed red sole trademark for an injunction to be warranted.

Looking beyond the immediate trademark elements of the lawsuit, the court offered some insight on the use of the generative AI platform ChatGPT, which Louboutin relied on in connection with its acquired distinctiveness arguments. Counsel for the French footwear brand argued that in addition to the secondary meaning-centric evidence that it put forth, including the brand’s extensive advertising and long and continuous use of – and third-party media attention to – its red soles, “the reputation that [Louboutin] has garnered can also be evaluated on the basis of a ChatGPT query which was put forth on behalf of [Louboutin].”

Specifically, Louboutin’s counsel submitted a response from ChatGPT (to the question of whether Louboutin is known for spiked men’s shoes), in which the Large Language Model (“LLM”)-powered chatbot stated, “Louboutin is known for their iconic red-soled shoes, including spiked styles for men and women.” And the footwear brand’s team sought to use this as further proof of the brand’s acquired distinctiveness, only to be shut down by the court.

While the court determined that Louboutin presented sufficient evidence to warrant a preliminary injunctive order, it also held that ChatGPT “cannot be the basis of adjudication of legal or factual issues in a court of law,” as the answers provided by such LLMs “depend upon a host of factors including the nature and structure of query put by the user, the training data etc., [and] there are possibilities of incorrect responses, fictional case laws, imaginative data etc. generated by AI chatbots.”

Given that the accuracy and reliability of AI-generated data is still in a “grey area,” the court stated that “AI cannot substitute either the human intelligence or the humane element in the adjudicatory process, [and] at best the tool could be utilized for a preliminary understanding or for preliminary research and nothing more.”

The decision from Justice Singh follows orders from judges in the U.S., a number of which have required that attorneys appearing in court attest that no portion of their filings was drafted by generative AI – or if it was, that the information in those filings was checked “by a human being.” Judge Brantley Starr of the U.S. District Court for the Northern District of Texas, for example, addressed the potential for generative AI platforms to engage in hallucinations and thus, provide inaccurate information, in the process becoming the first federal judge to explicitly ban the use of generative AI – “such [as] ChatGPT, Harvey.AI, or Google Bard” – for filings unless the content of those filings has been checked by a human.

According to Judge Starr’s May 2023 mandate, “These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up – even quotes and citations.”

Magistrate Judge Gabriel Fuentes of the U.S. District Court for the Northern District of Illinois was another early mover on this front, issuing a revised standing order on June 5 requiring that “any party using any generative AI tool in the preparation of drafting documents for filing with the Court must disclose in the filing that AI was used,” with the disclosure identifying the specific AI tool and the manner in which it was used. The judge’s order also mandates that parties disclose not only whether they used generative AI to draft filings but also, more fundamentally, whether they used it to conduct the corresponding legal research.

The case is Christian Louboutin SAS v. M/S The Shoe Boutique, CS(COMM) 583/2023 and I.A. 15884/2023-15889/2023.