
Labor not property: Why artists are fighting the wrong battle on generative AI

"The route from AI company profits to artists’ pockets need not be mediated through property rights, licensing companies and algorithmically controlled markets."

James Parker and Jake Goldenfein
25 September 2025

This cover image depicts the display screen of the Suno-generated song 'created' using the text of this piece as the prompt. You can listen to the song by clicking the hyperlink where you see "Bad songs" in the essay.

Last year, automatic music generation finally blew up. Suno launched right at the end of 2023, followed by Udio a few months later, both to considerable hype. Very quickly Suno had over 10 million users, with some outputs amassing millions of streams. Udio was apparently making 10 music files per second, or 6 million a week. These startups, both founded by AI industry execs, built their generative AI models on huge repositories of recorded music without licenses or permission from the companies that controlled them. Unsurprisingly, Universal, Sony and Warner - the three major labels that together control around 70% of the global music market - weren’t happy. And in June 2024, they sued, accusing Suno and Udio of ‘willful copyright infringement on an almost unimaginable scale.’

The labels were clear in their argument: ‘There is nothing that exempts AI technology from copyright law’ or excuses them from ‘playing by the rules’. Their ‘wholesale theft of copyrighted recordings threatens the entire music ecosystem and the numerous people it employs.’ And it wasn’t only the majors arguing that AI companies were violating copyright. In January 2025, Suno were sued again, this time in Europe by GEMA, a German collecting society and licensing body. That an industry intermediary like GEMA repeated the labels’ arguments isn’t surprising. But the idea that AI companies are ‘stealing’ copyrighted material when they train their models is increasingly widely held. In fact, it has become a central rallying cry of artists and creators in their resistance to generative AI.

Within months of Suno and Udio’s launch, for instance, two hundred high-profile musicians, including Billie Eilish, Stevie Wonder, Chuck D, and R.E.M., had signed an open letter decrying the ‘predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem’. In February 2025, following news that the UK government was planning legislation to allow AI companies to develop their products using copyrighted work without a license, a group of 1,000 UK musicians, from The Clash to Max Richter, and from Jamiroquai to the Royal Liverpool Philharmonic Orchestra, released Is This What We Want? as part of a campaign by the UK creative industries to encourage the government to ‘enforce copyright laws’ and ‘allow creatives to assert their rights in the age of AI’. The album comprises 12 recordings of empty studios, the titles of which form the sentence ‘The British Government must not legalise music theft to benefit AI companies’.

The same argument is being run in other creative sectors too. In a recent piece for The Guardian, advocating a similar position for book authors, Anna Funder - author of the excellent Stasiland - and law professor Julia Powles put it like this: ‘Today’s large-scale AI systems are founded on what appears to be an extraordinarily brazen criminal enterprise: the wholesale, unauthorised appropriation of every available book, work of art and piece of performance that can be rendered digital’. What should governments do in the face of this threat? Enforce and extend intellectual property rights via novel licensing regimes. Why? Because ‘books, as everyone knows, are property […] No one will write them if they can be immediately stolen’, Funder and Powles claim. ‘Not just our culture but our democracy will be irrevocably diminished… We won’t know ourselves anymore. The rule of law will be rendered dust’.

Politically, the impulse to resist generative AI and the companies behind it is entirely justified. But at the center of this vein of AI resistance, so clearly articulated by Funder and Powles, is the demand that creative work be recognized and protected as property. This, we suggest, is a problem. Artists and creatives have legitimate and urgent distributional claims. Generative AI has created a crisis rippling through the political economy of media, and AI companies’ appropriation of culture needs to be remedied. But drawing the battle lines around property and copyright represents a degraded political horizon.

Below we describe how making copyright the terrain on which this battle is fought serves industry intermediaries like labels and licensing companies but is unlikely to help artists and creatives. In the name of taking down the AI capitalists, artists are being successfully enrolled by another corporate sector - the rights industry - to preserve an already broken status quo, in which full-time artists earn poverty wages dribbled out by monopolistic platforms. This cannot be the boundary of our political imagination.

Does anyone remember copyleft? For musicians involved in that political fight, the problem wasn’t which powerful sector of the digital economy was able to make money from culture, but the corporate ownership and control of culture itself. Once upon a time, intellectual property was the problem. Information wanted to be free. Files were shared. Metallica and the major labels were the bad guys. Aaron Swartz was a hero. Musicians like Wiley and lawyers like Lawrence Liang invited us to Steal This Film. Piracy was politicised.

Not anymore. Or at least not in conversations about generative AI. But for those of us who still remember the Napster/Limewire/Pirate Bay/Rapidshare/Mediafire/Aaaarg/Libgen era, it’s hard not to find arguments venerating copyright and the commodity status of culture a bit depressing. Are we all just Metallica now?

It’s worth remembering that musicians were at the forefront of the file sharing wars of the early 2000s, and that they’ve been outspoken and organized against the streamers too. The political energy that has emerged in reaction to AI is remarkable, but it desperately needs an alternative vocabulary to resist the political economy of media being rearranged according to the imagination of AI companies, the major labels or some combination of the two.

 [book-strip index="1"]

This is not nostalgia for an earlier internet era of politics. It’s a conceptual claim: that the current copyright push is not only about enforcement but also about extending property rights and commoditization into elements of culture that have never previously been ownable. It’s a material claim: that the focus on copyright sidelines how creative output is a form of labor that could ground artists’ claims for better rights and conditions as workers. And it’s a civic claim: that we need to transform generative AI platforms and the data infrastructures underpinning them into public utilities - like water, libraries or the BBC: resources managed in the public interest.

This essay runs these arguments in relation to music, but we hope many of our claims ring true throughout creative industries.

The lawsuits

Normally, when a musician or rights holder makes an allegation of copyright infringement, the claim is that their composition or recording has been published, performed or reproduced either in whole or in part, without license or permission. This is not the case in the lawsuits against Suno and Udio. The problem, according to the majors, isn’t that users of Suno and Udio were able to access existing works by, say, Chuck Berry or Jason Derulo, or even that they were able to generate outputs that sounded like or replicated them. Yes, it’s possible to use these platforms to generate infringing outputs, in the same way that Ed Sheeran can write a song that sounds too much like Destiny’s Child. But that isn’t the claim being litigated.

Instead, the claim is that Suno and Udio made unauthorized copies of recordings owned by the majors for the purposes of training their neural networks. And it’s true, they did. ‘Suno’s training data includes essentially all music files of reasonable quality that are accessible on the open Internet,’ the company explains, ‘abiding by paywalls, password protections, and the like, combined with similarly available text descriptions.’ 

Much has been made of this supposed ‘concession’ by the AI companies. But it’s not a concession at all, because as Suno and Udio point out, the use of copyrighted material ‘as part of a back-end technological process, invisible to the public, in the service of creating an ultimately non-infringing new product’ has long been deemed ‘fair use’ in America. It’s called an ‘intermediate’ copy. And its legality has been affirmed time and again.

Once this initial act of copying is complete – when all the music on the Internet has been gathered into a dataset – the process of training a neural network and then generating new material with it has little to do with ‘originals’ and ‘copies’ at all. ‘The model underpinning Udio’s service’, the company explains, ‘is not a library of pre-existing content, outputting a collage of “samples” stitched together from existing recordings. The model does not store copies of any sound recordings. Instead, it is a vast store of information about what various musical styles consist of, used to generate altogether new auditory renditions of creations in those styles.’ These AI firms claim they aren’t building a remix machine. They’re building a statistical model of ‘how music works’.

This isn’t just industry rhetoric designed to circumvent existing legal and economic protections. It’s how this technology actually functions. Suno and Udio don’t copy, reproduce, sample or even imitate existing individual songs. First, they render those songs as ‘data’, then they analyze the relationships between these data in aggregate to produce statistical information about musical form, style, genre etc. writ large – ‘what types of sounds tend to appear in which kinds of music; what the shape of a pop song tends to look like; how the drum beat typically varies from country to rock to hip-hop; what the guitar tone tends to sound like in those different genres; and so on’, as Udio put it – so that this information can then be used to generate new songs. Bad songs maybe, but new ones nonetheless. These AI models don’t treat musicians as musicians at all. They treat them more like producers of data about the building blocks of music.
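To make the shape of that claim a little more concrete, here is a deliberately crude sketch of what ‘keeping aggregate statistics rather than copies’ means. It is a toy illustration only, not Suno’s or Udio’s actual pipeline: the genres, features and numbers are invented, and real systems learn their own audio representations at vastly larger scale.

```python
# Toy illustration only - not Suno's or Udio's actual pipeline.
# The point: after 'training', what remains is aggregate information about
# styles, not a copy of any individual recording.

from statistics import mean

# A hypothetical corpus: each recording reduced to a few invented features.
corpus = [
    {"genre": "rock",    "tempo_bpm": 140, "distorted_guitar": True},
    {"genre": "rock",    "tempo_bpm": 128, "distorted_guitar": True},
    {"genre": "country", "tempo_bpm": 96,  "distorted_guitar": False},
    {"genre": "hip-hop", "tempo_bpm": 88,  "distorted_guitar": False},
    {"genre": "hip-hop", "tempo_bpm": 92,  "distorted_guitar": False},
]

# 'Training', in this cartoon version, means summarising the corpus per genre.
model = {}
for genre in {track["genre"] for track in corpus}:
    tracks = [t for t in corpus if t["genre"] == genre]
    model[genre] = {
        "typical_tempo": mean(t["tempo_bpm"] for t in tracks),
        "guitar_rate": sum(t["distorted_guitar"] for t in tracks) / len(tracks),
    }

# The original recordings can now be discarded: the 'model' only knows what
# rock or hip-hop tends to sound like in aggregate - precisely the kind of
# collectively produced knowledge at stake in the copyright argument here.
print(model)
```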

Winning this fight in copyright means those building blocks – produced collectively across generations by vast communities of musicians and their listeners – can also be owned. This is what it means to say that you, a rock star, deserve to be compensated when a neural network is trained on your back catalogue: that, by virtue of these recordings and their contributions to the genre, you have accrued some kind of property right in rock music itself, and that this property is what’s now being ‘stolen’.

Promoting copyright as the anchor around which to organize licensing markets for AI music generation thus means more than simply applying or enforcing existing copyright norms. It means radically extending them, bringing those basic building blocks of music – things like harmony, rhythm, genre and style – into private hands. The propertization of music through copyright was initially configured to preclude ownership of those musical building blocks, but not in order to resist appropriation. Rather, style and genre were rendered unownable precisely to enable musical commoditization in ways that allowed certain privileged groups to extract more freely from culture.

Copyright

The copyright system that developed in Europe and America across the 18th and 19th centuries emerged partly in response to two new technologies: the industrial printing press and political liberalism. What this system valorized was a specifically European, highly individualistic understanding of artistic expression (the myth of the lone creative genius), which granted authors the exclusive right to control the reproduction of their printed works – literally the copy right – for a set period.

Originally, this right extended to complete works only, but over time it grew to include extracts, adaptations, and other derivative works. When is a work ‘derivative’? What parts of a work does an author own, exactly? Courts and legislatures have answered these questions very differently at different times and in different jurisdictions, but the general principle is that copyright protects specific instances of creative expression – this novel, song, software, painting, play or some substantial part of it – not the ideas being expressed, and not the mode of expression. You could copyright a book, for instance, about a man who is cast away on an island. But not the idea of being cast away on an island, or the novel as a literary form, or the adventure genre, or any of the themes, tropes, historical details or narrative techniques used along the way.

Later, once the system had been extended to include sheet music and eventually audio recordings, you could also copyright a ‘fanfare for the common man’: its specific melody, harmonic and rhythmic features, as well as particular recordings of its performance. But an artist can’t copyright the fanfare as a musical form and social practice, nor the key of B♭ major, nor the principles of modal harmony, nor the sound of trumpets, horns, and timpani, nor their associations with the military and a certain idea of America. For the purposes of the law of copyright, these are all understood as part of the cultural commons: a collective resource waiting to be turned into private property by the transformative work of authorial labor.

This system may have worked well enough for figures like Defoe and Copland: white, male representatives of the dominant culture. The same cannot be said, however, for the diverse communities on whom copyright was imposed, only to be told that their entire creative lineage was part of a ‘commons’ and therefore free for the taking. It was precisely this feature of copyright that enabled white musicians to appropriate and financially exploit historically black genres like gospel and the blues in the name of rock’n’roll. Take everything you want from Sister Rosetta Tharpe, Muddy Waters and Little Richard, the logic of copyright said to Elvis, Janis Joplin and the Stones – take all their formal and stylistic features, their instrumentation, ways of playing, attitude and vibe – so long as you don’t take the lyrics or melody (though often, in practice, you could take them too). Take everything you want from Indian classical music (the Beatles), South African isicathamiya (Paul Simon), Jamaican reggae (Ace of Base, UB40), Congolese soukous (Vampire Weekend), or Brazilian baile funk (Diplo), so long as it’s not the ‘content’ of individual creative works, which rights holders – if they are sufficiently rich and powerful – will relentlessly protect.

Well, now copyright’s chickens are coming home to roost. What Suno and Udio are doing is exploiting the features of copyright as a system of value production that initially privileged a few musicians and labels across the twentieth century, but in reverse. This time, they’re using neural networks to commodify what copyright’s initial commodification of lyrics and melody left behind. That doesn’t make it right. It is clearly an exploitative appropriation of culture. But it should provoke some wariness as to what it means to appeal to copyright as a solution to the generative AI problem – it means arguing that those cultural commons are in fact ownable, and owned already, not by the communities that created them, but by the labels.

 [book-strip index="2"]

Licensing

Arguing for copyright means arguing for licensing regimes explicitly trading on these new property rights in culture. This is what the representatives of creative industries say they want. In March 2025, the Recording Industry Association of America, along with nine other creative industry bodies, advocated for a system whereby developers would have to ‘obtain appropriate licenses for any copyrighted works they use to train their AI models’, negotiated according to ‘free market principles’. Both GEMA, the rights body suing Suno in Europe, and Make It Fair, in the UK, have proposed something similar. That would be a victory for record labels and licensing intermediaries, but would it be a victory for artists and creators?

In the long term, the expansion of copyright is much more likely to entrench an already broken status quo that benefits labels, platforms, and other industry intermediaries over artists. The datasets used to train generative AI systems are massive, and nobody expects artists to negotiate individually. Because these licensing deals represent a significant potential profit stream for both record labels and streaming platforms, existing inequalities of bargaining power will play out here too. Want to make a deal with a label, hosting website or platform, but opt out of AI training? Too bad, no deal.

And how should license fees be allocated anyway? By volume of output? So that the more ‘traditional Irish folk’ recordings you’ve made, the more of the genre you’re taken to own, even though its roots go back at least a thousand years before the invention of recorded music? So that Joy Division and My Bloody Valentine, with their three records each, would be compensated less for their contributions to our collective music culture than their infinite imitators? Or maybe license fees should also be allocated by ‘streamshare’? So that the 10 most streamed artists worldwide would earn around 36% of generative AI license fees too? So that Taylor Swift would receive more than the entire recorded history of jazz here as well? Or perhaps we need a new metric like ‘generation share’? So that the more Richie Hawtin-style minimal techno you generate, the more Richie Hawtin gets as a percentage of the total license fee? In which case, what about Daphne Oram, Kraftwerk, Steve Reich or the many other progenitors of the genre? And why would artists trust labels and other intermediaries to be able to negotiate good deals for them here, when they’ve so spectacularly failed them in relation to streaming?

These questions will get even more complex as the market is flooded by AI generated music, along with infinite shades of human-AI hybrid. Don’t forget that the express intention of companies like Suno and Udio is to automate the production of so-called ‘perfect fit content’ and other functional or semi-functional music already being pushed by the streamers – ‘lo-fi beats to study to’; hold music; mall music; the music in the background of YouTube ads and Instagram reels – so that what we hear when we’re only half-listening will increasingly be a kind of infinite AI slop, driving down royalties to flesh and blood artists in the process.

License fees won’t be enough to compensate, because that’s the whole point: if license fees were equivalent to royalties, why bother? Companies like Suno and Udio don’t exist to solve a creativity problem, critical technology theorist Marek Poliks explains, ‘they exist to solve a licensing problem. The goal is to build a world where a mood playlist could be more or less generative and license free.’ As the proportion of total stream share generated or partially generated by AI increases, it will feed back into the datasets used for training. And then what? What happens when more music is generated by AI than not? Or when entire genres evolve around particular kinds of prompts, as they surely will?

In fact, it’s difficult to interpret the antagonism between the music industry and the AI industry here as anything but a temporary blip, a kind of reset so the major labels can get their houses in order, work out how to better synergize their business models with platforms, and absorb automatic music generation. That means more opaque algorithmic rankings, calculations and distributions (in other words, the radical intensification of musical bureaucracy) and the further hybridization of artists’, labels’ and AI companies’ interests in line with the prerogatives of platform capitalism.

This isn’t just a bad idea: a de facto privatization of the musical commons. It won’t even work. If most Australian full-time musicians earn between $6,000 and $15,000 a year now, with only 15% of that coming from streaming, they shouldn’t expect that supporting copyright will make that amount grow in the future. Artists are arguing in support of labels, who straddle a very contradictory position. On one hand, labels want artists to make money so they can take their cut; on the other, they own and control the distribution platforms, which are ruthlessly strategizing to reduce the money they distribute to artists. That contradiction is unlikely to resolve in favor of artists and creatives. The input side of the AI market is cooked.

Labor 

It’s unclear how things will go from here. In the courts, victory for the labels looks unlikely, at least when it comes to model training. That may be why labels are pivoting their copyright claims away from the idea that model training itself is a breach, and towards simpler claims that AI companies downloaded pirated content to build their datasets. But these would be quick wins with little broader structuring effect on the political economy of AI or music. Legislatively, all the momentum seems to be with the tech companies and their lobbyists, who have convinced governments that stronger copyright protections will stifle AI innovation and undermine national economic interests. But good quality data is the foundation of competitive advantage in the AI economy. It’s what makes one AI product better than another, and despite their resistance in litigation, some AI firms are clearly willing to pay.

For instance, in September this year, ElevenLabs - one of the industry leaders in automatic voice generation - announced a major expansion into music. Eleven Music will apparently allow users to generate ‘studio-grade music’ from ‘natural language prompts’. And unlike Suno and Udio, it has struck deals with two licensing companies: Merlin, which works primarily with independent labels like Sub Pop, Domino, Warp, and Ninja Tune; and Kobalt, which claims to represent, on average, ‘over 40% of the top 100 songs and albums in the US and UK’.

Maybe this is a sign that the tides are turning in favor of copyright. Or maybe it’s just a canny move by ElevenLabs to muscle into the music business while they wait for things to play out elsewhere. Whatever the reason, it isn’t great for artists. By asking that their artistic outputs be protected as property rights, artists are asking to participate in the ‘free market’ that the recorded music industry associations are pushing for. That puts artists at a massive disadvantage in bargaining power, and effectively mandates, in the monopsony markets for culture, that all outputs be immediately assigned to intermediaries on whatever terms they offer. There are alternative political strategies.

If, for instance, artistic output in the AI context were instead configured as a kind of labor, then artists and creatives would be operating in a far more protected market context, with better minimum standards and more bargaining power. In fact, as UCLA law professor Xiyin Tang argues, we should already be thinking about intellectual property law as a type of labor law, and a highly abusive one at that. Invoking labor law and advocacy could correct that. These kinds of arguments are proliferating.

In Mood Machine, her recent and immediately canonical book on Spotify, Liz Pelly describes the development of a ‘new music labor movement’ in response to the systematic exploitations of the streaming economy.

The United Musicians and Allied Workers (UMAW), for instance, formed in the US in 2020, at the start of the pandemic. One of its first campaigns, for Justice at Spotify, was a call to ‘raise the average streaming royalty from $0.0038 to a penny per stream.’ More than anything, the campaign was an exercise in consciousness raising. Here is Damon Krukowski, a founding member of UMAW, author, and member of Galaxie 500, as quoted by Pelly:

Demanding a penny per stream was a strategy to get people to say back, ‘Don’t you already get more than that?’ And then you get to say, ‘No, we make a third of that at best.’ And even less if you’re on a label. Many of us make zero. It did change the conversation. The message got through though: we’re being paid nothing.

The need for such policies is only exacerbated by the advent of automated music generation. Fighting for this on the terrain of labor law gives artists and creators a much better starting position and more bargaining power than fighting for it on the intellectual property front.

Some proposals go considerably further. In March 2024, UMAW announced its Living Wage for Musicians Act, the purpose of which is to produce ‘a new royalty from streaming music that would bypass existing contracts, and go directly from platforms to artists’, cutting out the intermediaries, with the ultimate aim of paying artists a living wage. These arrangements would also challenge the platforms’ opaque algorithmic control over financial distributions. On 29 September, Representative Rashida Tlaib will reintroduce the bill to Congress. Advocating for a universal basic income has been the official stance of the UK Musicians’ Union since 2021, Pelly points out, and several countries already pay artists special (un)employment benefits (France, Ireland) or fund comparable grant and salary schemes (Norway) directly from tax revenues.

These political frameworks should be the basis of artists’ fight against AI companies for distributive justice. The route from AI company profits to artists’ pockets need not be mediated through property rights, licensing companies and algorithmically controlled markets. You could think of schemes like this as a type of welfarism. Or you could think of them as a way of acknowledging artistic production as socially valuable labor and a collective good, for which a basic minimum compensation is owed. If we accept that AI is here, and that datafication is happening, then a collectively managed regime for an artist's wage - paid for in part by AI companies - could become not just cultural policy, but a fundamental element of national technology policy too. After all, if AI companies make money from exploiting artistic production as data production, then why not direct a large chunk of that money to artists for their (data) work?

It would be infinitely better for such payments to be fixed and managed transparently through a collectively governed mechanism or public commission than leaving giant multinationals to compete over who can squeeze the most out of artists. Make AI companies pay a transparently calculated, fixed basic income to those they profit from on the basis of the work they do! Paying artists a living salary for their work would mean a collectively managed regime for the production of data in service of a socially beneficial technical infrastructure. That may feel impossible, or at least a long way off. But acknowledging the connection between artists’ labor and data production, and integrating that into the labor struggle, is surely a critical first step.

 

 

 

 

James Parker is an Associate Professor at Melbourne Law School, where he works across legal scholarship, art criticism, curation, and production. He is an Associate Investigator with ADM+S, a former ARC DECRA fellow, former visiting fellow at the Program for Science, Technology and Society at Harvard, and sits on the advisory board of Earshot, an NGO specialising in audio forensics. Since 2020, he has been working with curator Joel Stern and artist Sean Dockray on Machine Listening, a platform for research, sharing, and artistic experimentation, focused on the critique of new and emerging forms of listening grounded in artificial intelligence and machine learning. You can find his previous writing on machine listening and artificial musician generation here.

Jake Goldenfein is a law and technology scholar at Melbourne Law School and a Chief Investigator in the ARC Centre of Excellence for Automated Decision-Making and Society. He studies platform regulation, data economies, and the governance of automated decision-making. Alongside his academic writing, he has written for Phenomenal World, Jacobin, Public Books, and the Law and Political Economy Project.

 

Book strip #1

  • Quasi Una Fantasia
    This collection covers a wide range of topics, from a moving study of Bizet’s Carmen to an entertainingly caustic exploration of the hierarchies of the auditorium. Especially significant is Adorno’...
    Paperback
  • Late Marxism
    In the name of an assault on “totalization” and “identity,” a number of contemporary theorists have been busily washing Marxism’s dialectical and utopian projects down the plug-hole of postmodernis...
    Paperback

Book strip #2

  • Amateurs!
    Since the nineties, platforms have invited users to create in return for connection. From blogs to vlogs, tweets to memes: for the first time in history, making art became the fundamental form of c...
  • Enshittification
    *** Longlisted for the Financial Times and Schroders Business Book of the Year 2025 *** Misogyny, conspiratorialism, surveillance, manipulation, fraud, and AI slop are drowning the internet. For the...
    Hardback