The first pop song written and composed entirely by AI was ‘Daddy’s Car’, a fun, if somewhat clumsy, pastiche of The Beatles. Whimsical, upbeat and gently psychedelic, it wouldn’t sound totally out of place on any of the band’s late-60s albums. The chord progression captures something of McCartney’s songwriting, and, if you strain your ears, the beat nearly resembles the odd accenting peculiar to Ringo’s drum style.
Since 2018, with the emergence of large language models (LLMs) – AI trained on huge amounts of unlabelled data – there has been a growing number of convincing pop songs produced entirely by machines. Most notable was ‘Heart on My Sleeve’, a song featuring AI vocals purporting to be Drake and the Weeknd, which was so convincing that it went viral on TikTok before being pulled off streaming services for copyright infringement. Written, composed and produced entirely by AI, the track goes further than earlier efforts such as the artificial influencer Miquela’s ‘Not Mine’ and the songwriter Taryn Southern’s ‘Break Free’, both of which were at least partly written by human musicians.
Having spent too much time listening to AI music over the last few months, I’ve realised – somewhat too late – that it’s actually hard to like any of it. Almost without exception, the songs have an uncanny sterility that creates an overwhelming feeling of sameness. I can’t help but feel that I’m not really listening to music but being amused by a shiny gimmick, that my enjoyment is not of art itself but of technology’s virtuosic but necessarily blank performance of existing artists, styles and genres, its capacity to ape past sounds rather than produce anything genuinely new.
Pop music has always thrived on artifice and simulation, often in ways that are exciting and generative. But AI pop music tends to dwell in the worst excesses of homogeneity. Take the pop songs generated by Aiva, built to create ‘epic emotional ensembles’ that in reality sound more like the pre-departure muzak on a British Airways flight. Perhaps not entirely surprisingly, its emotional register resembles that found on Spotify’s ubiquitous ‘Chill’ playlists, a mood that critics have described as ‘emotional wallpaper’. Familiar and unchallenging, ‘Chill’ denotes a kind of aesthetic experience that requires little attention for enjoyment but nonetheless promises satisfaction. The point is to provide pleasant listening with no tension or dissonance: tranquillity, but a form of tranquillity that still requires the listener’s connection to the attention economy. The sonic experience is one of cool vacuity. Music without dissonance, depth or interesting dynamics.
One of the more compelling projects is Google’s MusicLM, a large language model that generates music from rich captions such as: ‘A fusion of reggaeton and electronic dance music, with a spacey, otherworldly sound. Induces the experience of being lost in space, and the music would be designed to evoke a sense of wonder and awe, while being danceable.’ The result bears a remarkable likeness to the given text description: a reggaeton beat with a pensive synth melody and pitch-shifted vocals. It’s a bland, somewhat awkward, rendition of something you’ve heard before.
The creators of these pastiche machines are not entirely unaware of the problem of sameness, though they tend to see the problem as legal rather than aesthetic. Google has held off on publicly releasing MusicLM, apparently fearing potential copyright issues. The company admits that around 1% of the music the software generates is indistinguishable from its original source material. Major labels are understandably anxious, seeing a threat to their business model as menacing as Napster, with the Universal Music Group ruffled to the point of forcing streaming services to pull the aforementioned Drake and the Weeknd track.
But, for those of us who care little about the fortunes of Universal, the problem is less about profit margins than about musical quality. AI – even when trained on enormous data sets – still treats music as data, from which it learns to perform a relatively narrow set of functions, the main one being repetition. The result is often a familiar – though sometimes jarringly gauche – collage of past rhythms, melodies, harmonies, dynamics and chord progressions. Most music AIs are dedicated to a particular genre, style or ‘feeling’: Aiva creates ‘epic emotional’ soundtrack music; OpenAI’s Jukebox generates ‘catchy’ pop songs. For now, the ambition seems to stretch only to creating more of what already exists, to immersing the listener in familiar, reassuring sounds.
Their function, then, resembles that of streaming platforms such as Spotify, which are made to serve us more of what we supposedly want. This endless recursion, whereby our past habits are taken to be what we want in the future, ultimately generates a kind of cultural time loop, in which the aesthetic realm is narrowed down so efficiently to recognisable sounds and styles that the new can barely emerge from the dead weight of the past. If the role of music today is to simply surround our days in the warm moods of yesteryear, why not get an AI to do it?
Yet, there is something rather perverse about training machines to do the things we like doing, the activities that make life worth living, when machines are still unable to do the mundane work of changing a bed or driving a car. The world envisaged is one in which the majority of humanity labours in poorly paid service work while the machines soundtrack our toil with bad renditions of Duran Duran.
Musicians are understandably concerned about the prospect of machines entirely sidelining the richness of artistic innovation. ‘With all the love and respect in the world, the track is bulls*** and a grotesque mockery of what it is to be human,’ the songwriter Nick Cave wrote in a blog post to a fan who’d sent him song lyrics ‘in the style of Nick Cave’, generated by ChatGPT. ‘Songs arise out of suffering… as far as I know, algorithms don’t feel… Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing.’
Though one might take issue with Cave’s rather gloomy appraisal of the nature of songwriting, his concerns are worth taking seriously. The algorithms behind software such as Aiva are built to exclude the messy, uncodified realm of aesthetic experience, what the Ancient Greeks called Aisthitikos: perception via feeling. Susan Buck-Morss, in her essay on Walter Benjamin’s theory of aesthetics, describes it as ‘a form of cognition achieved through taste, touch, hearing, smell – the whole corporeal sensorium.’ AI’s cognition is not yet – and may never be – sensory in the complex way described by Buck-Morss, but remains the result of a series of data inputs and outputs, set by parameters ultimately chosen by human agents. It does not taste, touch, see, smell or hear like a human artist, who over the course of their life will likely have experienced a great many joys, losses, sounds and flavours – the intensely raw stuff of artistic production.
Indeed, many philosophers of aesthetics have linked the play of the senses to the arousal of the new. For Herbert Marcuse, it’s the prelinguistic realm of sensuousness, pleasure and fantasy that gives art its utopian horizon. The problem perhaps, then, is that AI treats sound too rationally. It does not experience sound on a prelinguistic, bodily level as bass thumps in the chest, or as melancholic shivers down the spine. It does not know the abysmal grief of losing a loved one. Visceral bodily experiences can, of course, be manipulated for less than utopian ends – for instance, the steroid-jacked EDM of the late 2000s that used sub-bass sounds to whip crowds into a kind of anhedonic euphoria. But Skrillex is probably not what Marcuse had in mind.
Yet, it seems fair to wonder whether the problem is less the technology itself, more its subordination to a culture trapped in ever-tightening circuits of nostalgia. One need not fully rehearse the argument that capitalism in its geriatric stage, having so thoroughly reified and dehistoricized our cultural past, has developed a memory problem. Worth noting, however, is that as crises accumulate, there are ever greater profits in accelerating our melancholic escape into a simulated past. Think here of Netflix’s eternal return of the 1980s in shows such as Stranger Things, or TikTok’s endless recycling of fashion trends. The more we mire ourselves in this past the more we seem to forget the future. Unless capital suddenly changes its relationship to culture, AI will largely be used for more of the same. Elon Musk is many things but he is no aesthete.
And yet, there are instances of artists using AI to create genuinely futuristic, difficult, even utopian, sonic experiences. The avant garde electronic artist Arca used AI to remix her track ‘Riquiqui’ 100 times, offering an unsettling listening experience in which – quite literally – the future never properly arrives. The listener experiences being trapped in an endlessly recurring past as present – a disorienting and voluptuous maximalism of stuttering drums and whirring synths. As the track repeats with only minor variations, familiarity is pushed to ecstatic levels of strangeness, whereby the immediate past is no longer reassuring but surreal. It has an almost Brechtian quality of estranging the listener by revealing the absurdity of remix culture and, more generally, a culture algorithmically locked into endless repetitions.
If Arca’s ‘Riquiqui’ repels the listener from our eternal present, then Holly Herndon’s ‘Godmother’ forces us to encounter the ‘new’ in all of its unsettling, jittery manifestations. The track was generated by an AI called ‘Spawn’, which listened to the music of Jlin, Herndon’s friend and fellow artist, and then reimagined the music in Herndon’s voice. The result stutters, writhes and gasps into being, like a grotesque new-born coming to terms with the alien world around it. Weaving the machinic into the queasily sentient, the track feels as if something alive is emerging from the machine. At once hard to like and yet fascinating in the way that all things that stand outside regular aesthetic codes are, it is ‘weird’ in the sense that Mark Fisher uses the term, bringing ‘to the familiar something which ordinarily lies beyond it, and which cannot be reconciled with the “homely”’.
Herndon’s experiment dramatizes the struggle of the new to be born. In the spirit of some of the best modernist experiments, it confronts the audience with the dreamlike space of other, weirder, more beautiful futures that always haunt the march of technological development. Critics ask whether AI will ever be capable of appreciating beauty. If it ever is, that will be the result of its makers’ agency. We need more artists like Arca and Holly Herndon to show the machine that, in the words of Charles Baudelaire, ‘strangeness is a necessary ingredient in beauty’.