Spotify announced its Q4 2025 results yesterday and, while the numbers were part of the story, the bigger story was that the earnings call was largely dominated by discussions about AI. No real surprise at one level, given that bearish predictions about how AI will impact Spotify’s business have sent its share price off a cliff.

But it wasn’t just the questions from analysts, though at least ten of the questions submitted during the call were about AI. The real surprise was that the entire energy of the call, and of the senior executives who participated, was pulled along by an ever-present AI undercurrent.

When Wells Fargo analyst Stephen Cahall asked Christian Luiga, Spotify’s CFO, what the market was missing about Spotify’s AI position, even Luiga - normally stern and staid, the adult in the room whose job is to damp down hype and deliver solid predictable numbers - opened his response by noting that AI “has been something that has been hard to grasp for many people”. 

He went on to add that - as co-CEO Gustav Söderström had already said earlier in the call - Spotify’s work on AI “didn't start now - we started many years ago”. When a CFO tells a room full of analysts that they just don’t get it yet, that tells you a lot. The implicit message is clear. You don’t understand this because you’re not on the inside. But don’t worry, we've got it. Relax.

And if you listened to what was said - and what wasn’t said - how Spotify views AI and the future of its business became abundantly clear. For Spotify, AI is as big an opportunity as the internet or the smartphone and, from where Spotify sits, AI-generated music isn’t necessarily a problem, because the more content there is out there, the more important it is for a platform like Spotify to surface the right content to the right people at the right time.

The narrative behind that goes something like this.

I kissed a bot and I liked it

Undoubtedly, the defining moment of the call was when Söderström, part of the new “co-CEO” double act running the company, and the man responsible for Spotify’s technology and product, launched into an anecdote about a new love affair that started over Christmas.

In the middle of an otherwise normal earnings call, Söderström’s tone took on the heated fervour of a man in love. It all started, he said, with a “singular event” over the Christmas break - something that, by his telling, consumed his every moment, every thought.

“A lot of things happened in December”, he said, barely able to contain the passion in his voice.

While the rest of the world was losing its mind over ‘Heated Rivalry’ and working out how to perfect a hockey butt, Söderström had spent the Christmas break locked in his own heated exchanges with a model called Claude. To be clear, that’s Claude, Anthropic's AI model. It’s not that kind of Heated Rivalry.

Possibly the biggest thing in Söderström’s Christmas - and brace yourselves, it’s a biggie - was “Opus 4.5 coming to Claude Code”. That’s when things between him and Claude “crossed the threshold”, the relationship hit new heights, and “things just started working”. And, presumably, the point at which the rest of his family quietly closed the door and carried on without him.

"When I speak to my most senior engineers, the best developers we had, they actually say that they haven't written a single line of code since December. They actually only generate code and supervise it."

Söderström is so excited about what he and Claude can achieve together that Spotify has built an internal tool - bizarrely named ‘Honk!’ (the exclamation mark may or may not be part of the name, but you kind of hope it is). Using Honk!, a Spotify engineer, on their morning commute, can ask Claude to fix a bug or add a feature to the Spotify iOS app, receive a working build back via Slack on their phone, test it, and merge it to production before arriving at the office. 

Söderström insists he wasn’t the only one captured by Claude. “I think most people in tech” spent Christmas the same way, he said, a little wistfully. Course they did, Gussy.

A lot of things happened? Söderström was not kidding. The man ditched Christmas, ignored his family, and spent the holiday in a torrid back and forth with a chatbot.

His family is one thing - they’re probably used to it - but the guy you have to feel sorry for is Alex Norström. The co-CEO structure is supposed to work as a balance: Gustav handles technology and product, Alex handles the commercial side and the industry relationships. Gustav builds, Alex keeps the music industry onside. But now, just weeks into their new relationship, it is becoming clear that Gustav may have found a new partner - and it isn’t Alex.

Poor Alex. Maybe he can find solace with Grok.

“When I speak to my most senior engineers, the best developers we had, they actually say that they haven’t written a single line of code since December”, continued Söderström. “They actually only generate code and supervise it”.

Söderström’s LinkedIn profile says that after his family, “Technology is my true love and Science is my religion”. His home, he notes, “barely ever works because everything is in perpetual beta”. This is a man who has always been like this. Which is great - the world should embrace geeks more.

But we've all been there: the quiet guy at work falls in love and suddenly he stops being so quiet. Every conversation is clumsily yanked around so that he can enthuse about the object of his affections. It’s bad enough when the guy in question is at the desk next to you, but when he is the (co) CEO of the most powerful music company in the world, the chances are you really do want to know. 

The breathless anecdotes. The inability to talk about anything else. The conviction that this changes everything. For anyone who has watched a friend disappear into a new romance, the pattern on this earnings call was unmistakable.

There is a growing conversation in the technology world about what some are calling “LLM psychosis” - the tendency of AI tools that are very good at producing plausible, confident outputs to push their users into overconfident acceptance of results they can’t fully audit. It is not a clinical term. But the pattern it describes is real and it is affecting serious people.

In December, David Budden - a former Director of Engineering at DeepMind, with a PhD from Melbourne and postdocs at MIT and Harvard - publicly claimed to have cracked two of mathematics’ seven hardest unsolved problems using AI assistance.

He staked $45,000 in public bets on the claim. The mathematical community’s response was swift and brutal. Budden felt compelled to tweet, “Not suffering from psychosis”. Prediction markets where tech bros bet their crypto on the likely outcome of events gave him single-digit odds.

Nobody is suggesting that Söderström has lost touch with reality. He is an experienced technology executive running a successful company. But the pattern he exhibited on this earnings call - the Christmas epiphany, the singular event, the threshold moment, the thrill that his best engineers have also seen the light, embraced the future and given themselves over to herding bots rather than writing code - is the pattern of someone in the grip of… something.

And when that someone is responsible for the technology strategy of the platform that distributes $11 billion a year to music creators, the music industry might reasonably ask: is this enthusiasm or is this something else? And who around him is asking that question?

For the music industry, the immediate concern is what happens to everything Söderström is supposed to be paying attention to while he is hanging out at the back of the bus, feverishly sending messages back and forth with Claude to add more zing and pizazz to Spotify’s iOS app.

Will Spotify block 100% AI music? "It's not our decision to make"

Whimsy aside, there were some important points made during yesterday’s earnings call. Not least, Söderström laid out how Spotify thinks about AI-generated music, dividing it into two categories.

The first is what he called “net new music” - original music created using AI tools. Spotify’s position here is that it should not police what tools artists use. “Are you allowed to use an electric guitar, a synthesiser, a digital audio workstation, or AI?” he asked. “Or a more complicated question, a bit of AI, like 1% AI, 15, 20, 100? We don’t think it’s our decision to make”. 

Instead, Spotify wants metadata: labelling that tells consumers how the music was made, surfaced through features like ‘About the Song’. The artist and their tools, not the platform, decide what gets created. The platform just sorts it.

"We are ready for the partners that are hungry to seize this opportunity. We think the ones that move first will benefit the most."

The second category is “derivatives” - covers, remixes, reinterpretations of existing music using AI. Söderström framed this as “an untapped opportunity for artists to make money off of their existing IP”. He said Spotify has “the technology and capabilities” ready to enable this, and is waiting for willing rightsholders. “We are ready for the partners that are hungry to seize this opportunity. We think the ones that move first will benefit the most”.

There is an economic question buried in this framing that nobody on the call addressed. If derivative content attracts significant listening, the question of how that consumption will be accounted for becomes critical. Will derivative streams sit in the same royalty pool as everything else - in which case will Spotify’s margin on that listening be higher because the content costs less? 

Or is derivative consumption separated into its own pool, meaning the per-stream rate for original music actually rises as listening shifts? The outcome most beneficial to the music industry is unlikely to be the one Spotify voluntarily pursues.
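To make the arithmetic concrete, here is a deliberately simplified sketch in Python of the two accounting scenarios. Every number is invented purely for illustration - real streaming royalty accounting is far more complicated, split by market, tier and rightsholder - but the mechanism is the point.

# A simplified, hypothetical model - all numbers invented for illustration
royalty_pool = 1_000_000          # money allocated to recordings in one market, one month
original_streams = 90_000_000     # streams of original, human-made recordings
derivative_streams = 10_000_000   # streams of AI 'derivative' content

# Scenario A: derivatives sit in the same pool, so every derivative stream
# dilutes the per-stream value of original music
rate_same_pool = royalty_pool / (original_streams + derivative_streams)

# Scenario B: derivative consumption is carved out into its own pool, so the
# per-stream rate for original music holds - or rises as listening shifts
rate_separate_pool = royalty_pool / original_streams

print(f"Same pool:     {rate_same_pool:.5f} per stream")      # 0.01000
print(f"Separate pool: {rate_separate_pool:.5f} per stream")  # 0.01111

On those invented numbers, carving derivatives out is worth roughly 11% more per original stream - which is exactly why the accounting question matters.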

Söderström takes the fifth on Spotify's AI flood

Rich Greenfield of LightShed Partners asked the most direct question on the call: what percentage of music on Spotify today is AI-generated and how much AI-generated content is being uploaded daily? Söderström refused to answer. "We don’t share a percentage of music uploaded on Spotify that is AI generated".

Deezer does. In January, the French streaming platform reported that over 60,000 fully AI-generated tracks are being uploaded to its platform every day - roughly 39% of all daily intake. Over the course of 2025, Deezer’s AI detection tool tagged 13.4 million AI-generated tracks. Up to 85% of streams on those tracks were fraudulent. 

Deezer and Spotify largely receive the same deliveries from the same distributors. If 39% of what arrives at Deezer’s door is AI-generated, there is no reason to believe Spotify’s intake looks meaningfully different. Söderström’s refusal to quantify starts to look less like a policy position and more like an unwillingness to share a number he doesn’t want in the earnings transcript. 

Or - more embarrassingly for Spotify - a number he simply doesn’t have. You can probably work out which is the more likely of the two at a company that places such reliance on data.

Söderström pivoted instead back to the tools-agnosticism argument (“Spotify should not decide what kind of tools you’re allowed to use”) and the spam defence (“there have always been people trying to abuse Spotify because it’s a big economy using spammy tracks - AI is a tool that could help accelerate that, but because it’s been a problem for a long time, we’ve been investing more than anyone else in the industry to curb this problem”). 

The implication is that AI-generated music, in aggregate, is either unremarkable or unquantifiable - and either way, not something Spotify is going to put a number on.

"While many people are scared in times of change, this is when there is the most opportunity."

Swerving the question tells us almost as much as if he’d given a straightforward answer. If the number were small, he’d say so. If Spotify had Deezer-level detection and a clean story to tell, he’d tell it. So either it doesn’t have the tech, or the number is far bigger than Deezer’s, or there’s a reason Spotify doesn’t want to focus on it. Or perhaps it doesn’t want to say because it doesn’t want to have to answer the next obvious question: where that content is coming from.

There is also a less comfortable possibility. If Spotify’s stated position is that it should not decide what tools artists use - that 100% AI-generated music is not inherently a problem - then disclosing the volume creates a different kind of pressure. 

Rightsholders would want to know why Spotify isn’t taking it down. By not quantifying, Spotify avoids that confrontation. It may also be quietly observing what happens when AI-generated content sits alongside human-made music in the same catalogue, the same playlists, the same royalty pool - running an experiment on hundreds of millions of users that will show whether people keep listening.

“More content is good for Spotify”

“As more content gets created and uploaded, the personalisation problem becomes more important because now there’s a bigger catalogue. You need to understand individual users’ tastes even better”, says Söderström.

For Spotify, more music uploaded - including AI-generated music - increases the value of its personalisation layer. The narrative Spotify wants people to believe - investors, music industry and consumers - is that only Spotify can save us from overwhelm. 

The platform that sorts the music becomes an increasingly essential part of the value chain. Every additional track, whether made by a human in a studio or generated by a prompt, deepens the problem that only Spotify can solve at scale. The argument is that it doesn’t matter how much music there is, because Spotify can point listeners to the stuff they want.

That’s fine - if you can trust Spotify to do the right thing. 

"A growing catalogue has always been very good for us because it attracts new users, drives engagement, and builds fandoms."

But even if listeners don’t stick with 100% AI-generated tracks, the effect is potentially dilutive, leaching value from the royalty pool. More to the point, the interests of artists and songwriters and Spotify are directly opposed on this issue, but Spotify frames them as aligned.

For Spotify, more content is something the algorithm can sort out. For artists, that algorithm is yet another gatekeeper to battle in the pursuit of getting music in front of fans.

Söderström addressed this casually. “When Spotify started, I think there were at most tens of millions of tracks. Now there are hundreds of millions. So the 10x explosion has already happened over the last 20 years. So this is something that we’re used to”. But a tenfold increase in catalogue over two decades is not the same as a potential orders-of-magnitude AI flood over a matter of months. 

Spotify’s building an AI moat: the aggregation thesis

Söderström’s most revealing comment came when Justin Patterson of KeyBanc asked about AI’s impact on engineer productivity.

“There is this fear that software companies are not gonna exist anymore, everyone rolls their own products”, he said. “I certainly don't think that’s going to be true for consumer products. I think what will happen is something more like what happened with the internet”. 

“When the internet came along, everyone thought that we would all have our own web pages. What actually happened was there ended up being very few web pages. In times of lower friction, things actually tend to aggregate, not disaggregate".

Applied to music, this means that as AI makes content creation nearly free, the result isn’t a thousand independent platforms. It’s consolidation around the platforms that already have distribution, data, and an existing business model. Spotify becomes more dominant, not less. The flood of content doesn’t drown the platform. It makes the platform the only thing still standing above the waterline.

This maps to something else Söderström has been building. He described, in response to the first AI question from Jessica Reif Ehrlich of Bank of America, a dataset that Spotify is constructing: one that maps natural language to musical preference.

"We want to do something different. We want to build something that never existed before."

“We’ve had the song-to-song dataset, but no one had the language-to-song dataset”, he said. “This is a very specific dataset. You may think it is a canonical dataset, meaning there is a factual answer to, for example, what is ‘workout music’. There is no factual answer to what is workout music. In fact, it turns out that taste is not a fact, it’s an opinion”.

He gave the example that workout music for an average American is usually hip-hop, for a European usually EDM, for many Scandinavians heavy metal or death metal - “but then again, for a lot of Americans, millions at least, it’s also death metal”. The point: you need hundreds of millions of listeners across global markets, constantly telling you what language means in terms of musical preference, to build this dataset. “This is the dataset that we are building right now, that no one else is really building”.

This is the moat claim. It is also the bridge to generative AI and it is worth following the logic to its conclusion. If you can model what someone wants to hear from a text description, you can either retrieve it from the existing catalogue or generate it. The infrastructure is the same either way. 
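A crude way to picture that shared infrastructure - purely illustrative Python, with hypothetical function names that are not real Spotify systems - is that the same language-to-preference model can sit in front of either a catalogue lookup or a generative model:

# Purely illustrative - none of these functions or names are real Spotify systems.
# The structural point: the valuable asset is the language-to-preference model;
# whether the final step retrieves or generates is a business decision.

def preference_from_text(prompt: str, listening_history: list[str]) -> dict:
    """Map natural language plus listening history to a taste profile (stub)."""
    return {"genre": "death metal", "bpm_range": (140, 180), "mood": "aggressive"}

def retrieve_from_catalogue(profile: dict) -> list[str]:
    """Return existing licensed tracks that match the profile (stub)."""
    return ["existing_track_a", "existing_track_b"]

def generate_new_audio(profile: dict) -> str:
    """Return a newly generated track that matches the profile (stub)."""
    return "generated_track"

def fulfil_request(prompt: str, listening_history: list[str], mode: str = "retrieve"):
    profile = preference_from_text(prompt, listening_history)
    if mode == "retrieve":
        return retrieve_from_catalogue(profile)
    return generate_new_audio(profile)

print(fulfil_request("workout music", ["slayer", "gojira"]))
print(fulfil_request("workout music", ["slayer", "gojira"], mode="generate"))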

Söderström did not make this connection explicit on the call. He did not need to. Former CEO Daniel Ek did it for him. In his farewell remarks before transitioning to his new role as Spotify’s Executive Chair, he reminded the audience that Spotify acquired The Echo Nest back in 2014, “when most people didn't understand why a streaming company needed a machine learning AI company”. 

That bet, he said, “gave us personalisation, something that’s now core to everything we do”. He wasn’t being nostalgic. He was telling you what comes next.

There is also an important question of how AI-generated content reaches listeners. Most people don’t have the patience to tinker with generative music tools. But Spotify doesn’t need them to. It can generate pre-written prompts based on its granular understanding of a user’s listening preferences - genres, likes, skips, replays.

That data is granular enough, and personalised enough, that a single click could produce a prompt delivering AI-generated music tailored exactly to the user’s taste. The user doesn’t need to know how to prompt - Spotify already knows what they want. The line between surfacing existing music using prompts and generating new music using prompts gets thinner every day - to the point where, one day, perhaps, users won’t be able to tell whether they are prompting to surface or prompting to generate.
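To illustrate how little the user would have to do - a hypothetical sketch, not a description of any real Spotify feature - the listening signals Spotify already holds would be enough to assemble that prompt on their behalf:

# Hypothetical sketch only - not a real Spotify feature or API
listening_signals = {
    "top_genres": ["EDM", "synthwave"],
    "replay_bpm": 126,
    "skipped_styles": ["acoustic", "ballads"],
    "context": "evening run",
}

def one_click_prompt(signals: dict) -> str:
    """Turn stored listening signals into a ready-made prompt - no typing required."""
    genres = " and ".join(signals["top_genres"])
    avoid = " or ".join(signals["skipped_styles"])
    return (
        f"Upbeat {genres} around {signals['replay_bpm']} bpm "
        f"for an {signals['context']}, avoiding {avoid}."
    )

# The same string could feed a catalogue search just as easily as a generative model
print(one_click_prompt(listening_signals))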

"Alex here told me that the Chinese sign for macro wind is opportunity. So we’re going to try to capture that opportunity”.

Spotify knows what its data is worth, and it has acted accordingly. In November 2024, the company abruptly cut third-party developer access to its recommendation features, audio analysis data and track characteristic endpoints through its Web API - with no advance warning, destroying months of work by independent developers. The widely held view, reported by TechCrunch among others, was that Spotify was shutting down the pipeline before AI companies could use it to train competing models.

Developers in Spotify’s own community forum were blunt. “Let’s be real here, this isn’t about security or user privacy, this is about data being used for training AI models”. Then in December last year, Anna's Archive scraped metadata for 256 million tracks and 86 million audio files from Spotify - representing 99.6% of all listens on the platform - and released the archive publicly. 

Spotify shut down the accounts involved, implemented new safeguards, and - together with the majors - filed a $13.4 trillion lawsuit against the archive’s anonymous operators. The door is locked. The data stays inside.

But consider the workout music example on its own terms. Spotify has hundreds of millions of users telling it, through their behaviour and increasingly through natural language, what ‘workout music’ means to them personally. There is no canonical answer, as Söderström said - it varies by country, by individual, by mood.

Right now, Spotify uses that data to retrieve the right track from its catalogue. But if Spotify were to generate the perfect workout playlist using its own AI tools - or acquire a generative AI music company to do it - the economics change fundamentally. There would still be a cost - you’d have to pay royalties to the rightsholders on whose works the model was trained and on the exploitation of the output. But the economics would look very different to premium human-created content.

Whether that difference preserves value for human creators or preserves margin for Spotify depends entirely on who sets the terms.

"In times of lower friction, things actually tend to aggregate, not disaggregate."

In theory, there’s a middle ground. In practice, Spotify’s track record in adjacent areas - its approach to audiobook licensing, its lobbying against the Copyright Royalty Board’s rate increases in the US, its introduction of Discovery Mode payola - does not suggest a company that splits the difference when it has leverage. After all, Spotify has a fiduciary duty to its shareholders, not to the music industry's stakeholders. It would be naive to expect it to act against that duty out of goodwill.

And here is the uncomfortable truth for the music industry: Spotify is probably one of the very few participants in the value chain with the power to define what the AI licensing relationship looks like. It literally wrote the rules for how streaming was monetised. It can write the rules for how AI training and derivative exploitation are monetised too.

The leverage has shifted. There was a time when Universal said “jump” and Spotify said “how high”. That era is over. When Universal went to war with TikTok over licensing terms, the assessment by many was that TikTok won - and the music came back. Spotify is now in an even stronger position. 

It pays out $11 billion a year to the music industry. If Spotify embraces AI and sets the terms, the worst threat the majors have is to pull their catalogues. But when such a huge proportion of the recorded music industry's revenue flows through a single platform, pulling your content isn't a negotiating tactic - it’s self-harm.

The labels know this. And Spotify knows the labels know this. And if you’re a listed company - like Universal or Warner - your share price would take such a hit it might quite possibly never recover. 

That said, in reality, neither side wants a war. But the risk of war is asymmetric. Spotify could weather the storm. It would take a short-term hit, face some negative press, but churn takes time to materialise and - assuming the war ended in some sort of negotiated truce - the share price would recover, and possibly come back even stronger.

For Universal Music, things would be very different. Of the three majors, it is probably the most exposed. Sony Music is part of a bigger venture, and Warner Music has already established the beginnings of its own AI play. And so for Universal, the blow would be immediate and potentially fatal.

"We want to do something different. We want to build something that never existed before."

If, as part of a negotiating war, it pulled its catalogue from Spotify, then a significant proportion of its revenues would stop, immediately. Its shareholders would almost certainly sue. There is no scenario in which pulling your music from Spotify ends well for a publicly listed record company that relies on Spotify for a significant part of its revenues.

So the future probably isn’t about war. It’s about deals. And the question then becomes not what the majors can negotiate into their agreements, but what they can get excluded. 

If Spotify starts using generative AI to fill workout playlists or ambient listening - or whatever other category of consumption AI lends itself to - whether that’s through its own tools or a third-party model, Universal might not try to stop it. Instead, it might negotiate that AI-generated consumption doesn’t count against the royalty pool for its catalogue.

The important thing here is that this would be about Universal’s catalogue, not anyone else’s. Protect itself and let everyone fight their own battles. Different rules for different rightsholders. Different consumption share calculations depending on who has the leverage to carve out exemptions. 

The most favoured nation clauses that have historically kept major label deals roughly symmetrical could come under enormous strain - or be quietly renegotiated away. And, of course, because these deals are so tightly locked under NDAs, the wider industry - even governments - may never know the new calculus behind the consumption exclusions.

For Spotify’s corporate strategy and legal teams, the current uncertainty almost certainly looks like opportunity - a golden window, while AI licensing is still the wild west and no industry-wide framework exists, to push through terms that entrench its position.

"No rightsholder is against our vision. We pretty much have the whole industry lined up behind us."

The longer the music industry delays, the more likely it is that Spotify establishes the framework for how AI-generated music sits in the ecosystem. And unlike streaming - where the basic unit of a ‘stream’ gave everyone something to negotiate around - AI licensing has no equivalent.

Training on what inputs, generating what outputs, exploited in what contexts, measured how, paid by whom? The music industry has not even agreed on the questions yet, let alone the answers. Spotify, meanwhile, is building the infrastructure. By the time the industry works out what it wants, Spotify may already have what it needs.

This is the logical endpoint of the dataset Söderström described with such enthusiasm. And Spotify has form here. Its ‘Perfect Fit Content’ programme, exposed in detail by Liz Pelly in her book ‘Mood Machine’, saw the company work with third parties to create tracks under pseudonymous artist names - so-called “ghost artists” with no online presence beyond Spotify - specifically to populate mood and activity playlists at a lower royalty cost than licensing music from real artists.

Playlist editors were monitored on how well they embraced “music commissioned to fit a certain playlist/mood with improved margins”. If Spotify was willing to commission and prioritise cheap human-made filler to reduce its royalty bill, the leap to AI-generated content for the same purpose is not a leap at all. It is the next step on a path the company has already been walking for years.

The music industry should be listening very carefully to what Söderström is building and asking where it leads - because the answer, if you follow the technology to its conclusion, is a platform that no longer needs the music industry’s product to fill a significant and growing share of listening hours. 

The language-to-music dataset doesn’t just help Spotify recommend music better. It is the dataset you would need to replace the music industry’s product.

Spotify’s AI engineering revolution: as costs plunge, guess where the savings go?

Spotify’s R&D expenses have been falling sharply - down to €290 million in Q4, a 23% decline year-on-year - and Söderström's remarks explain where the savings come from. 

Software companies, he said, will “start producing enormously more amount of software”. Spotify’s limiting factor is no longer engineering capacity. “Our limiting factor is actually the amount of change that consumers are comfortable with”. 

When your best engineers stop writing code and start supervising AI-generated code, the cost of building software collapses. That is what is happening at Spotify right now. AI is making everything cheaper to build.

For the music industry, the implication is not simply that Spotify gets more profitable - any well-run company should. The implication is where the surplus goes. The margin expansion story from the financial results - gross margin up 80 basis points, operating income up more than 50% for the full year, free cash flow of €2.9 billion - connects directly to the AI productivity story. The savings from AI-driven engineering go to Spotify's bottom line, not to the royalty pool. There is no mechanism by which they would. 

"Today, what we built is a technology platform for audio, and for all ways creators connect with audiences."

But it’s worse than that. Those savings aren’t sitting idle. They’re funding the R&D, the acquisitions, and the infrastructure that could, over time, reduce Spotify’s dependence on the very catalogue the music industry is licensing to it. The music industry is, in effect, funding its own disruption - through the content that trains Spotify’s models, through the listening data that builds its datasets, and now through the margin expansion that gives Spotify the resources to act on what it's learned.

Alex Norström's vibe check

Greenfield’s second question was the sharpest strategic challenge on the call: the bear thesis that Suno, Udio, Klay and Stability could become DSPs that take share from Spotify. Is Spotify playing to win in AI or taking a cautious approach while upstarts move faster?

Norström, the co-CEO responsible for Spotify's commercial side, answered, “I spend a lot of time with the industry, the music industry, and with artists. And there isn’t any doubt that everyone is optimistic about the future”.

Everyone is optimistic about the future. This is a music industry that is paralysed by the threats presented by AI - to songwriters’ livelihoods, to the value of recorded music, to the entire economic model that sustains it. To describe that industry as uniformly optimistic is either delusional or tone-deaf. Or it tells you something about who Norström is spending his time with.

He continued, “No rightsholder is against our vision. We pretty much have the whole industry lined up behind us”.

This is presumably the same industry that was “lined up behind” Spotify when it demonetised huge swathes of independent creators with the 1000-stream threshold. Or perhaps the same industry that was “lined up behind” Spotify - possibly with a knife in its hands - when Spotify went to war with music publishers over the audiobooks bundle. No rightsholder is against the vision? Perhaps. Or perhaps no rightsholder has yet seen Spotify’s full hand.

The competitive question - whether AI-native platforms could bypass Spotify entirely as both creation tools and distribution platforms - went unanswered. What Greenfield got was an industry sentiment report, not a strategic response. The bear thesis was met with a vibe check.

It is worth noting what Norström did not say. He did not say that Suno and Udio lack the catalogue, the user base or the business model to compete as DSPs. He did not say that the economics don’t work. He did not say that Spotify has a specific technical or strategic advantage over these platforms. He said everyone is optimistic. That is not a strategy.

This was the moment where Norström was supposed to be the counterweight - the commercial chops to Söderström's technical fervour, the north to his south, the co-CEO whose job is to keep the music industry relationships intact. Instead, what the call revealed is that Gustav has Claude, and Alex has vibes and a mood board.

The Chinese sign for nonsense

When Cahall's question about what the market was missing reached Söderström, after Luiga’s measured response, the technology co-CEO added, “Alex here told me that the Chinese sign for macro wind is opportunity. So we’re going to try to capture that opportunity”.

The only problem is the ‘Chinese sign’ for ‘macro wind’ is nothing of the sort.

The aphorism Söderström is reaching for is the debunked claim that the Chinese word for “crisis” contains the character for “opportunity” - a misunderstanding popularised by John F Kennedy in speeches from even before his Presidential run, and repeated ad nauseam by bunkum business consultants ever since. But Söderström has swapped “crisis” for “macro wind”, which is his own term from earlier in the call for the AI tailwind Spotify is riding. So he’s not even mangling the right word.

It is worth stepping back and appreciating the density of what Spotify’s new co-CEO structure produced in its first few weeks. Norström, the commercial counterweight, the man whose job is to keep the music industry onside, told analysts that “everyone is optimistic about the future” - this of an industry paralysed by existential fear.

Söderström, the technology lead, closed an earnings call by attributing a fake Chinese aphorism to his co-CEO and using it to confirm his own thesis, after spending an embarrassing amount of time eulogising the transformative power of AI. These are not legacy executives winding down. These are the new guys. They are weeks into the job. This is the A-team.

Serious structural questions about competitive threats, about the economic position of creators, about who benefits from the AI transition, were consistently met with aphorisms, historical analogies and corporate enthusiasm. The internet analogy. The macro-change-as-opportunity framework. 

Anyone in the music industry genuinely concerned about AI’s impact on creators may find the register troubling. This is not a company that is wrestling with the implications of AI. It’s a company that has decided the implications look great, and is moving on.

Music is disappearing from how Spotify talks about itself

The AI discussion on this call does not exist in isolation. It sits alongside a product roadmap that tells you everything about where Spotify's attention has gone - and where it hasn’t.

Daniel Ek, in his farewell remarks as CEO, said, “Today, what we built is a technology platform for audio, and for all ways creators connect with audiences”. The company's original self-description - from its ‘about’ page in 2009, still visible on the Wayback Machine - read, “We respect creativity and believe in fairly compensating artists for their work”. The tagline under the logo was “Everyone Loves Music”.

That language is gone. In Ek’s farewell, music appears - but only as one item in a list alongside podcasts, books, video, live and “things we haven’t even built yet”. The sentence that defines what the company has become - “a technology platform for audio, and for all ways creators connect with audiences” - does not mention music at all.

The product announcements that Spotify is excited about: physical books via Bookshop.org; Page Match, the feature that syncs your physical book to its audiobook using your phone camera; Audiobook Recaps; 530,000 video podcast shows. 

Söderström, the man whose true love is technology and whose religion is science, summed up the ambition. “We want to do something different. We want to build something that never existed before”. But it looks very much like those things are focused on almost anything but music.

Spotify could have launched a superfan play for artists and labels - but instead, it announced the most over-engineered bookmark in history.

The superfan has vanished. Universal boss Lucian Grainge spent two years evangelising superfan monetisation as the defining opportunity of ‘Streaming 2.0’. It was the centrepiece of UMG's Capital Markets Day at Abbey Road in September 2024. For a time Ek backed up Grainge’s mantra by continuing to promise a new super premium tier with undefined superfan benefits as part of the package. 

Spotify appears to have stopped listening. The word “superfan” does not appear in the Q4 transcript. Neither does “super premium”. Neither does any product that would directly create new revenue streams for artists or labels.

What does appear is a company that calls itself “the R&D department for the music industry” - Söderström's words on the call, echoing Ek’s farewell reference to being “the R&D arm”. 

But the context is that the R&D is going into AI coding tools, books and video rather than new music products. If you’re the R&D department for the music industry, the music industry might reasonably ask what you’re building for them.

Perhaps the answer is derivatives. It is possible that Spotify sees AI-generated covers, remixes and reinterpretations as its version of the superfan play - a way to let fans interact with the music they love, generating new revenue from existing catalogue. 

If so, it is worth asking whether that is what superfans actually want. The promise of superfan monetisation, as Grainge pitched it, was about deepening the connection between artists and their most devoted listeners - exclusive content, early access, direct relationships. What Spotify appears to be suggesting instead is an AI-generated simulacrum: not the artist, but a machine-made approximation of the artist’s work, surfaced by an algorithm, monetised by the platform. 

It is the uncanny valley of fandom. At some point, someone is going to have to ask where this ends. AI-generated covers today. AI-generated artist personas tomorrow. Grok-powered chatbots where you can have sexy chats with a synthetic version of your favourite star? 

The technology makes all of it possible. The question is whether anyone at Spotify is asking whether any of it is desirable - or whether they're too busy building it to stop and think.

The message to the music industry is clear

Spotify says more content is good, personalisation is the moat, we have the data and the business model, move fast. But for the music industry, every part of that narrative is about how Spotify can transfer more value away from creators to its own platform.

More content uploaded - human or AI-generated - increases competition for listener attention within the same royalty pool while increasing the value of Spotify’s sorting and recommendation layer. 

The language-to-music dataset being built with hundreds of millions of users’ behaviour creates infrastructure that works equally well for retrieval and generation. The engineering cost collapse flows to Spotify’s margins, not to rightsholders. The aggregation thesis predicts that lower friction in content creation consolidates power at the distribution layer, not the creation layer.

The music funds the model. The model sorts the music. Spotify captures the margin. The songwriter’s leverage diminishes with every track uploaded - whether human-made or AI-generated.

A lot of people in music are concerned about the impacts of AI. The impact on their livelihoods, for sure - but also the impact on what creativity actually means, on culture, and on cultural diversity. Söderström told analysts that he thinks it is “important to know that while many people are scared in times of change, this is when there is the most opportunity”.

This is the man who controls Spotify’s technology and product. The man who ditched his family at Christmas to code with a chatbot. Whose home barely ever works because everything is in perpetual beta. Who attributes fake Chinese aphorisms to his co-CEO on earnings calls. Whose best engineers haven’t written a line of code in two months - and who is thrilled about it.

When a company’s engineering team no longer writes code - when it has decided that human-written code is less efficient than machine-generated code - why would it think any differently about human-written music? In Söderström's own framing, more content means more opportunity, and AI-created content is just more opportunity at a fraction of the cost.

After his family, technology is his true love. He said so himself. The music industry should take him at his word.
