In generative AI and music, will nice guys finish last?
Generative AI music company Suno has raised $125 million, one of the biggest music-tech funding rounds in history. Founders say they want to ring in “a future where anyone can make music”, but questions remain over whether Suno and its rival, Udio, have trained their models on copyrighted music without licensing deals.
Talk to anyone knowledgeable about music licensing and tech startups and the expression “ask for forgiveness, not permission” is almost certain to arise. The idea is that many music-tech startups end up using unlicensed music for their products — both because they cannot afford licences until they grow and because big labels and publishers may not give them the time of day until they grow. So, founders make the bet that by the time their startup grows big enough to get the music industry’s attention, it will be in rightsholders’ best interest to keep their music on the product and the startup will have more leverage to negotiate licences.
This is yet another case of misaligned incentives. If one music-tech startup seeks permission, it risks losing the race to another startup that asks for forgiveness. Not only that, but many music-tech platforms get at least partial protection from the Digital Millennium Copyright Act (DMCA). So in the end, early-stage companies often do not have enough business incentives to do the right thing. And the more that the music industry and technology become entwined, the bigger a problem this will be.
Which brings us to generative AI. This time around on the music licensing merry-go-round, the stakes are higher for everyone involved. Many of the biggest companies behind generative AI music models are going to court with the stance that they should not have to pay music rightsholders to train on their catalogues at all. There are also new arguments at play. Many gen AI companies say that their models could not exist without training on vast catalogues of information, which is impossible when you have to ask permission for everything, and that we need them to exist for, well, the sake of technological and cultural progress.
It seems unlikely that the courts will side entirely with the tech companies — if they did, the business around intellectual property might all but collapse. Meanwhile, music rightsholders do not want to shut generative AI down — they simply want their cut of the pie. But even if labels and publishers get what they want, these models have already been trained, and it may not be possible to remove any one artist or songwriter’s work. So, given that opting in will never be unanimous, even the best-case scenario for the music industry is mired in issues.
Of course, to say that ethical training is impossible is not only lazy, but also unfair to the companies that are actually doing it (including ones certified by Fairly Trained). And the tech world’s “move fast and break things” ethos is coming under increasing fire. As The Information pointed out in a newsletter about OpenAI’s public skirmish with Scarlett Johansson, “acting naughty becomes increasingly perilous when you’re running an $80 billion-plus company”. But aligning incentives so that startups actually take the ethical road has so far proved a challenge. What can be done?
Labels and publishers could be more involved in the music-tech startup space from the beginning. Think of Abbey Road Red or Universal Music Group’s just-launched UMusicLift. YouTube has partnered with Universal Music Group for its AI initiatives (even as Google argues that it does not need to pay for training). Yet so far, the tech world still seems to feel that it is held back by the music industry’s involvement.
Labels and publishers become the tech companies. While the Western labels are busy suing the model makers, HYBE is building its own model and actively training it on its own catalogue.
Music licensing law changes. This is the most difficult solution of all, but also the most necessary. The DMCA is arguably the most important law governing music on social media, despite having been enacted before social media existed. There is still little consensus on the basic rights frameworks for derivative works. The struggle here is striking the delicate balance in copyright law between protecting copyright owners and enabling future creativity.
These challenges are not going to be solved in this blog post. But alongside the specific case of generative AI music models, we need to be devoting attention to the overarching, long-standing issue at hand: how music-tech startups are disincentivised from working with the music industry. Otherwise, even companies with the best intentions can struggle, and nice guys deserve to finish first.