AI companies ‘could exploit artists’
A former tech executive warns that AI companies' misuse of copyrighted work could exploit artists.
Artificial intelligence (AI) has ushered in transformative changes across diverse industries, but recent controversies have brought to light concerns surrounding its potential misuse, particularly in the context of copyrighted material. A former executive from a leading tech startup has raised alarms about the potential exploitation of musicians through the unauthorized use of copyrighted music by AI companies.
At the heart of the issue lies the training of AI models on extensive databases of existing songs, which the models then use to generate new music from text prompts. Ed Newton-Rex, who resigned from his role on Stability AI’s audio team, voiced disagreement with the company’s assertion that training generative AI models on copyrighted works falls under “fair use” of the material.
This contentious approach has already led to legal disputes, as evidenced by The New York Times filing a lawsuit against OpenAI, the creator of ChatGPT. The newspaper alleges that OpenAI, backed by Microsoft, unlawfully used its articles to train ChatGPT, which now competes with traditional news sources and jeopardizes the newspaper’s ability to provide reliable information.
The parallel between the media and music industries prompts broader questions about the ethical implications of AI’s use of copyrighted material. Emad Mostaque, co-founder and CEO of Stability AI, argues that fair use supports creative development. Critics counter that this interpretation could lead to the exploitation of artists and creators, whose work becomes the foundation for AI models without their explicit consent.