Anthropic Defends Fair Use in Legal Battle with Music Publishers
Last week, AI company Anthropic formally asserted its fair use defense in the ongoing legal dispute with three music publishers. The company argued that using lyrics to train an AI model, particularly one designed not to output the text of those songs, is classic fair use and does not infringe the plaintiffs' copyrights.
The declaration comes as Anthropic, backed by Microsoft and Google, urged the Tennessee courts to reject a preliminary injunction targeting its chatbot Claude, sought by Universal Music Publishing, Concord, and ABKCO.
Legal Battle Overview
In October 2023, the music publishers sued Anthropic for copyright infringement. They subsequently requested an injunction compelling the AI company to ensure that their lyrics are not used to train any future AI models, and to prevent the current version of Claude from producing any lyrics the publishers own.
Anthropic addressed these two requests separately in its response to the court; the fair use defense pertains specifically to the first. The music industry contends that training a generative AI model on existing content requires permission from copyright owners, and that Anthropic is therefore liable for infringement because it trained Claude on lyrics without a license from the publishers.
Anthropic, like many tech companies, counters that AI training constitutes ‘fair use’ under American law, meaning no permission is necessary. The company’s latest legal filing states, “Relying on the fair use doctrine, courts have consistently found that making ‘intermediate’ copies of copyrighted materials to develop new technologies does not violate copyright law.”
Claude’s Functionality and Safeguards
Regarding the second part of the injunction, Anthropic asserts that court intervention is unnecessary because Claude is not intended to provide users with any lyrics owned by the music companies. According to the company, the purpose of training on data that includes songs is to learn general ideas and concepts about language, not to reproduce the lyrics themselves.
The publishers claim that if a user prompts Claude for the lyrics to songs they have published, the chatbot will respond with all or significant portions of those lyrics. In response, Anthropic says it has voluntarily built additional safeguards to prevent the display of the publishers’ works, making it unlikely that future users could prompt Claude to reproduce any material portion of the works-in-suit. However, tests cited by the publishers indicate that Claude has still reproduced key elements of their lyrics.
Anthropic's earlier response to the music companies' lawsuit focused mainly on jurisdiction. The publishers filed in Tennessee, while Anthropic argues that the litigation belongs in California, where the company is based and where many other lawsuits testing the copyright obligations of AI companies have been filed.
Anthropic’s legal battle with the music publishers continues to raise important questions about the use of copyrighted material in AI training, and about whether the ‘fair use’ doctrine applies in this context.