AI, NIL, and Zero Trust Authenticity – Stratechery by Ben Thompson

Drake and The Weeknd collaborated on a new song and I am told it is a banger:

This video blew up on social media the day after Drake declared in a since-deleted Instagram Story — written in response to another AI-generated song — that “This is the final straw AI”:

This is, needless to say, not the final straw, nor should Drake want it to be: he may be one of the biggest winners of the AI revolution.

The Music Value Question

The video above is both more and less amazing than it seems: the AI component is the conversion of someone’s voice to sound like Drake and The Weeknd, respectively; the music was made by a human. This isn’t pure AI generation, although services like Uberduck are working on that. That, though, is the amazing part: whoever made this video was talented enough to basically create a Drake song but for the particular sound of their voice, which happens to be exactly what current AI technology is capable of recreating.

This raises an interesting question as to where the value is coming from. We know there is no value in music simply for existing: like any piece of digital media, the song is nothing more than a collection of bits, endlessly copied at zero marginal cost. This was the lesson of the shift from CDs to mp3s: it turned out record labels were not selling music, but rather plastic discs, and when the need for plastic discs went away, so did their business model.

What saved the music industry was working with the Internet instead of against it: if it was effectively free to distribute music, then why not distribute all of it, and charge a monthly price for convenience? Of course it took Daniel Ek and Spotify to drag the music industry to an Internet-first business model, but all’s well that ends well, at least as far as record labels are concerned:

A chart of U.S. music revenue

Still, artists continue to grumble about Spotify, in part because per-stream payments seem low; that metric, though, is first and foremost a function of how much music is listened to: the revenue pot is set by the number of subscribers (and music-related ad revenue), which means that the per-stream payout is a function of how many streams there are. In other words, a lower per-stream number is a good thing, because that means there was more listening overall, which is a positive as far as factors like churn or non-streaming revenue generation opportunities are concerned.
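To make that arithmetic concrete, here is a minimal sketch in Python (with entirely made-up numbers, not Spotify’s actual figures) of why a falling per-stream rate can coexist with a perfectly healthy revenue pot:

```python
# Illustrative only: the pot and stream counts below are invented.
def per_stream_payout(revenue_pot: float, total_streams: int) -> float:
    """The pot is fixed by subscribers and ads; streams merely divide it."""
    return revenue_pot / total_streams

pot = 1_000_000_000  # hypothetical royalty pool for a period, in dollars

# Same pot, more listening: the per-stream rate falls, yet no money is lost.
for streams in (200_000_000_000, 250_000_000_000):
    rate = per_stream_payout(pot, streams)
    print(f"{streams:,} streams -> ${rate:.5f} per stream")
```

The second line prints a lower rate than the first, even though artists as a whole are paid exactly the same amount; the only thing that changed is that people listened more.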

Of course the other factor driving artist earnings is competition: music streaming is a zero sum game — when you’re listening to one song, you can’t listen to another — which is precisely why Drake can be so successful churning out so many albums that, to this old man, seem to mostly sound the same. Not only do listeners have access to nearly all recorded music, but the barrier to entry for new music is basically non-existent, which means Spotify’s library is rapidly increasing in size; in this world of overwhelming content it’s easy to default to music from an artist you already know and have some affinity for.

This, then, answers the question of value: as talented as the maker of this song might be, the value is, without question, Drake’s voice, not for its intrinsic musical value, but because it’s Drake.

NIL: Name, Image, and Likeness

In 1995, Ed O’Bannon led the UCLA Bruins to a national basketball championship; fourteen years later O’Bannon was the lead plaintiff in O’Bannon v. NCAA after his likeness was used in NCAA Basketball 09, a video game from EA Sports. O’Bannon alleged that the NCAA’s restrictions on a collegiate athlete’s ability to monetize their name, image, and likeness (NIL) were a violation of his publicity rights and an illegal restraint of trade under the Sherman Antitrust Act. O’Bannon and his fellow athletes won that case and a number of cases that followed, culminating in a unanimous Supreme Court decision in National Collegiate Athletic Association v. Alston that by-and-large affirmed the underlying arguments in O’Bannon’s case.

Most of the specifics in O’Bannon’s case have to do with the peculiarities of the American collegiate sports system, which is right now in a weird state of flux as athletic programs figure out how to navigate a world in which athletes have the right to benefit from NIL: ideally this means something like local endorsements, although it’s easy to see how NIL becomes a shortcut for effectively paying athletes to attend a particular university. The relative morality of that question is beyond the purview of a blog about sports and technology, other than to observe that the value in college athletics is mostly about athletic performance; NIL is a way to pay for that value without being explicit about it.

Notice both the similarities and differences between Drake and O’Bannon: O’Bannon was doing something that was unique and valuable (playing basketball), and seeking compensation for that activity, which he — and now many more college athletes — received in the form of compensation for his name, image, and likeness. For Drake, though, it is precisely his name, image, and likeness that lends value to what he does, or at least in the case of this video, could realistically be assumed to have done.

I am of course overstating things: just as a popular college athlete could absolutely provide value as an endorser, certainly Drake’s or any other artist’s music has value in its own right. The relative contribution of value, though, continues to tip away from the record itself towards the recording artist’s NIL — which is precisely why Drake could be such a big winner from AI: imagine a future where Drake licenses his voice, and gets royalties or the rights to songs from anyone who uses it.

This isn’t as far-fetched as it might seem; Drake has openly admitted that at this stage in his career songwriting is a collective process. From an interview in Fader after Meek Mill accused the star of not writing his own lyrics:

The fact that most of Drake’s fans seemed not to care about the particulars of how his songs were made proved something important: that Drake was no longer just operating as a popular rapper, but as a pop star, full stop, in a category with Beyoncé, Kanye West, Taylor Swift, and the many boundary-pushing mainstream acts from the past that transcended their genres and reached positions of historic influence in culture. At that altitude, it’s well known that the vast majority of great songs are cooked in groups and workshopped before being brought to life by one singular talent. That is the altitude where Drake lives now.

“I need, sometimes, individuals to spark an idea so that I can take off running,” he says. “I don’t mind that. And those recordings—they are what they are. And you can use your own judgment on what they mean to you.” “There’s not necessarily a context to them,” he adds, when I ask him to provide some. “And I don’t know if I’m really here to even clarify it for you.” Instead, he tells me he is ready and willing to be the flashpoint for a debate about originality in hip-hop. “If I have to be the vessel for this conversation to be brought up—you know, God forbid we start talking about writing and references and who takes what from where—I’m OK with it being me,” he says.

He then makes a bigger point—one that sums up why the experience of being publicly targeted left him in a position of greater strength than he went into it with: “It’s just, music at times can be a collaborative process, you know? Who came up with this, who came up with that—for me, it’s like, I know that it takes me to execute every single thing that I’ve done up until this point. And I’m not ashamed.”

Other stars are even less involved in the songwriting process, relying on songwriting camps to put an album together; from Vulture:

The camps, or at least the collaborative songwriting process, have fundamentally changed the way pop music sounds — Beyoncé’s Lemonade was a strikingly personal album, full of scorned-lover songs, but it was conceived by teams of writers (with the singer’s input and oversight). Key moments came from indie rockers, including Father John Misty, who fleshed out “Hold Up” after Beyoncé sent him the hook. Similarly, West’s Ye deals with mental illness and other intimate themes, but numerous writers, from Benny Blanco to Ty Dolla $ign, helped him turn those issues into songs. (Father John Misty, Parker, Vernon, Koenig, and other indie-rock stars refused interview requests.)

“Those artists still have a heavy hand in what songs they pick,” says Ingrid Andress, a Nashville singer-songwriter who is readying new solo material and regularly attends camps for pop stars. “But people forget that not just Beyoncé feels like Beyoncé. I guarantee all the people who wrote for Beyoncé’s record are coming from a place of also being cheated on, or angry, or wanting to find redemption in their culture.”

The value of a Beyoncé song comes first and foremost from the fact it is a Beyoncé song, not a Father John Misty song; there’s no reason the principle wouldn’t extend to AI: the more abundance there is, the more value accrues to whatever it is that can break through — and superstars can break through more than anyone.

Musical Chimeras

The record labels, unsurprisingly, learned nothing from the evolution of digital music: their first instinct is to reach for the ban hammer. Today that means leaning on centralized services like Spotify; from the Financial Times:

Universal Music Group has told streaming platforms, including Spotify and Apple, to block artificial intelligence services from scraping melodies and lyrics from their copyrighted songs, according to emails viewed by the Financial Times. UMG, which controls about a third of the global music market, has become increasingly concerned about AI bots using their songs to train themselves to churn out music that sounds like popular artists. AI-generated songs have been popping up on streaming services and UMG has been sending takedown requests “left and right”, said a person familiar with the matter.

The company is asking streaming companies to cut off access to their music catalogue for developers using it to train AI technology. “We will not hesitate to take steps to protect our rights and those of our artists,” UMG wrote to online platforms in March, in emails viewed by the FT. “This next generation of technology poses significant issues,” said a person close to the situation. “Much of [generative AI] is trained on popular music. You could say: compose a song that has the lyrics to be like Taylor Swift, but the vocals to be in the style of Bruno Mars, but I want the theme to be more Harry Styles. The output you get is due to the fact the AI has been trained on those artists’ intellectual property.” 

In case it’s not clear, I’m not exactly an expert on pop music, but I’m going to put my money on that Swift-Mars-Styles song being a dud on Spotify, no matter how good it may end up sounding, for one very obvious reason: Spotify will not allow the song to be labeled as being written by Taylor Swift, sung by Bruno Mars, with a Harry Styles theme, because it’s not. Sure, someone may be able to create such a chimera, but it will, like the video above, be a novelty item, the interest in which will decrease as similar chimeras flood the Internet. To put it a different way, as AI-generated content proliferates, authenticity will matter all the more, both commercially (because the AI-generated content won’t be commercializable) and in terms of popular valence.

This is a good thing, because it points to a solution that is aligned with the reality of AI, just as streaming was aligned with the Internet: call it Zero Trust Authenticity.

Zero Trust Authenticity

Back in 2020, when COVID emerged, I wrote an Article entitled Zero Trust Information that built on the ideas behind zero trust networking:

The problem, though, was the Internet: connecting any one computer on the local area network to the Internet effectively connected all of the computers and servers on the local area network to the Internet. The solution was perimeter-based security, aka the “castle-and-moat” approach: enterprises would set up firewalls that prevented outside access to internal networks. The implication was binary: if you were on the internal network, you were trusted, and if you were outside, you were not.

A drawing of Castle and Moat Network Security

This, though, presented two problems: first, if any intruder made it past the firewall, they would have full access to the entire network. Second, if any employee were not physically at work, they were blocked from the network. The solution to the second problem was a virtual private network, which utilized encryption to let a remote employee’s computer operate as if it were physically on the corporate network, but the larger point is the fundamental contradiction represented by these two problems: enabling outside access while trying to keep outsiders out.

These problems were dramatically exacerbated by the three great trends of the last decade: smartphones, software-as-a-service, and cloud computing. Now instead of the occasional salesperson or traveling executive who needed to connect their laptop to the corporate network, every single employee had a portable device that was connected to the Internet all of the time; now, instead of accessing applications hosted on an internal network, employees wanted to access applications operated by a SaaS provider; now, instead of corporate resources being on-premises, they were in public clouds run by AWS or Microsoft. What kind of moat could possibly contain all of these use cases?

The answer is to not even try: instead of trying to put everything inside of a castle, put everything in the castle outside the moat, and assume that everyone is a threat. Thus the name: zero-trust networking.

A drawing of Zero Trust Networking

In this model trust is at the level of the verified individual: access (usually) depends on multi-factor authentication (such as a password and a trusted device, or temporary code), and even once authenticated an individual only has access to granularly-defined resources or applications. This model solves all of the issues inherent to a castle-and-moat approach:

  • If there is no internal network, there is no longer the concept of an outside intruder or a remote worker.
  • Individual-based authentication scales on the user side across devices and on the application side across on-premises resources, SaaS applications, or the public cloud (particularly when implemented with single-sign on services like Okta or Azure Active Directory).

In short, zero trust computing starts with Internet assumptions: everyone and everything is connected, both good and bad, and leverages the power of zero transaction costs to make continuous access decisions at a far more distributed and granular level than would ever be possible when it comes to physical security, rendering the fundamental contradiction at the core of castle-and-moat security moot.
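A toy sketch may make that per-request model concrete; this is an illustration of the concept only, not any particular vendor’s implementation, and the users, devices, and policy table are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool       # e.g. password plus a trusted device or one-time code
    device_trusted: bool   # device posture check
    resource: str

# Hypothetical policy table mapping (user, resource) to access; in a real
# deployment this lives in an identity provider such as Okta or Azure AD.
POLICY = {
    ("alice", "payroll-app"): True,
    ("bob", "prod-db"): True,
}

def authorize(req: Request) -> bool:
    """Every request is evaluated on its own; there is no trusted 'inside'."""
    if not (req.mfa_passed and req.device_trusted):
        return False
    return POLICY.get((req.user, req.resource), False)  # default deny

print(authorize(Request("alice", True, True, "payroll-app")))  # True
print(authorize(Request("alice", True, True, "prod-db")))      # False: granular scope
print(authorize(Request("bob", False, True, "prod-db")))       # False: no MFA
```

Note what is absent: there is no check of where the request came from. Location on a network confers nothing; identity, device state, and per-resource policy decide everything, on every request.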

The point of that Article was to argue that trying to censor misinformation and disinformation was to fruitlessly pursue a castle-and-moat strategy that was not only doomed to fail, but which would actually make the problem worse: the question of “Who decides?” looms over every issue subject to top-down control, and inevitably getting fraught questions wrong empowers bad actors who have some morsels of truth that were wrongly censored mixed in with more malicious falsehoods.

A better solution is Zero Trust Information: as I documented in that Article young people are by-and-large appropriately skeptical of what they read online; what they need are trusted resources that do their best to get things right and, critically, take accountability and explain themselves when they change their mind. That is the only way to harvest the massive benefits of the “information superhighway” that is the Internet while avoiding roads to nowhere, or worse.

A similar principle is the way forward for content as well: one can make the case that most of the Internet, given the zero marginal cost of distribution, ought already be considered fake; once content creation itself is a zero marginal cost activity almost all of it will be. The solution isn’t to try to eliminate that content, but rather to find ways to verify that which is still authentic. As I noted above I expect Spotify to do just that with regards to music: now the value of the service won’t simply be convenience, but also the knowledge that if a song on Spotify is labeled “Drake” it will in fact be by Drake (or licensed by him!).
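As a sketch of what that labeling might look like, consider the following toy verification check; the account names and registry here are invented, and a real platform would presumably tie verification to contracts and identity checks rather than a dictionary:

```python
# Hypothetical registry of verified artist accounts on a streaming platform.
VERIFIED_ARTISTS = {"drake_official": "Drake", "theweeknd": "The Weeknd"}

def display_label(uploader_account: str, claimed_artist: str) -> str:
    """Only attach an artist label if the upload came through that artist's
    verified account; otherwise surface the claim as unverified."""
    if VERIFIED_ARTISTS.get(uploader_account) == claimed_artist:
        return claimed_artist  # authentic, or licensed by the artist
    return f'"{claimed_artist}" (unverified)'  # sound-alike, chimera, etc.

print(display_label("drake_official", "Drake"))  # Drake
print(display_label("random_user_42", "Drake"))  # "Drake" (unverified)
```

The point is that the default flips: nothing is presumed authentic, and the label itself becomes the scarce, verifiable asset.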

This will present a challenge for sites like YouTube that are further towards the user-generated content end of the spectrum: right now you can upload a video that says whatever you want in its title and description; YouTube could screen for trademarked names and block known rip-offs, but that’s going to be hard to scale as celebrities-within-their-ever-smaller-niches proliferate, and it’s going to have a lot of false positives. What seems better is leaning heavily into verified profiles, artist names, etc.: it should be clear at a glance if a video is authentic or not, because only the authentic person or their representatives could have put it there.

What YouTube does deserve credit for is how it ultimately solved the licensed music problem: it used to be that user-generated content that included licensed music was subject to takedown notices, and eventually to unilateral removal once YouTube gained the ability to scan everything for known music signatures. Today, though, the videos can stay on YouTube: any monetization of said videos, though, goes to the record labels. This is very much in-line with what I am proposing: in this case the authenticity is the music itself, which YouTube ascertains and compensates accordingly, while in the future the authenticity will be the name, image, and likeness of artists and creators.
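The mechanics are worth spelling out; the sketch below is a toy version of the idea, using exact hashes as a stand-in for the perceptual audio fingerprints a real system like Content ID relies on, with an invented rights registry:

```python
import hashlib

# Toy stand-in for audio fingerprinting: real systems match perceptual
# fingerprints that survive re-encoding; an exact hash is illustrative only.
def fingerprint(audio_bytes: bytes) -> str:
    return hashlib.sha256(audio_bytes).hexdigest()

# Hypothetical registry mapping known-song fingerprints to rights holders.
RIGHTS_REGISTRY = {
    fingerprint(b"<master recording of Song A>"): "Universal Music Group",
}

def route_monetization(uploaded_audio: bytes, uploader: str) -> str:
    """Instead of taking the video down, redirect its ad revenue."""
    rights_holder = RIGHTS_REGISTRY.get(fingerprint(uploaded_audio))
    return rights_holder if rights_holder else uploader

print(route_monetization(b"<master recording of Song A>", "fan_channel"))
print(route_monetization(b"<an original composition>", "fan_channel"))
```

The elegance of the design is that it converts an enforcement problem into a payments problem: the content stays up, and the money flows to whoever owns the authentic underlying asset.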

What is compelling about this model of affirmatively asserting authenticity is the room it leaves for innovation and experimentation and, should a similar attribution/licensing regime be worked out, even greater benefits to those with the name, image, and likeness capable of breaking through the noise. What would be far less lucrative — and, for society broadly, far more destructive — is believing that scrambling to stop the free creation of content by AI will somehow go better than the same failed approaches to stopping free distribution on the Internet.
