Today, I’m talking to Nicholas Thompson, the CEO of The Atlantic, one of the oldest magazines in the United States — like really old. It was founded in 1857 and is now owned by Laurene Powell Jobs, whose last name I am certain that Decoder listeners will recognize.
I was really excited to talk to Nick — like so many media CEOs, he just signed a deal allowing OpenAI to use The Atlantic’s vast archives as training data, but he also has a rich background in tech. Before he was the CEO of The Atlantic, Nick was the editor-in-chief of Wired, where he set his sights on AI reporting well before anyone else, including me. So he’s been paying attention to this for a long time.
Now, I feel like I should disclose right away that Vox Media, The Verge’s parent company where I work, also has a deal with OpenAI, which was announced on the same day as The Atlantic’s deal.
I actually don’t know very much about the terms of our deal, since I’m on the editorial side of the house and there’s a strict firewall between the business side and the editorial side. I suspect all of these deals are pretty similar, but I actually asked Nick about that. And there’s a pretty funny reason that he doesn’t know either; you’ll hear us talk about it.
Of course, I also asked Nick why he was willing to sign a deal with OpenAI in the first place, and why now when there’s so much general unhappiness about AI companies using other people’s work without permission, and specific unhappiness with OpenAI. You’ll hear Nick explain that what he really wanted to get back was a sense of control: Control over how much data was being used, how results were being displayed, and, of course, over how much money The Atlantic was being paid.
You’ll hear Nick say this all sounds like OpenAI is gearing up to build a next-generation search product, which of course led us to talking about Google and whether getting Google to pay for AI search is a realistic goal.
I was also really interested in asking Nick about the general sense that the AI companies are getting vastly more than they’re giving with these sorts of deals — yes, they’re paying some money, but I’ve heard from so many of you that the money might not be the point. That there’s something else going on here, that maybe allowing creativity to get commodified this way will come with a price tag so big money can never pay it back.
If there is anyone who could get into it with me on that question, it’s Nick. This one went long, and it’s a good one. Okay, Nick Thompson, CEO of The Atlantic. Here we go.
This transcript has been lightly edited for length and clarity.
Nick Thompson, you are the CEO of The Atlantic. You are also notably, for this conversation, the former editor-in-chief of Wired. Welcome to Decoder.
Thank you so much, Nilay. I’m delighted to be here.
I am really excited to talk to you. I bring up the Wired thing because I want to talk to you about AI and the deals media companies like The Atlantic, and notably Vox Media, the company that I work for, are making with companies like OpenAI. It feels like you have to understand the media business, the tech business, and where the tech business might be going in relation to the media. Let’s start at the very beginning: why make a deal like this with OpenAI? What is your deal with OpenAI?
We can go through it the complex way or the simple way. The simple way is that we believe it provides revenue, but more importantly it provides a potential traffic source, an avenue for a product partnership that could be very beneficial, and a way for us to help shape the future of AI.
AI is coming, and it is coming quickly. We want to be part of whatever transition happens. The transition might be bad, the transition might be good, but we believe the odds of it being good for journalism and the kind of work we do with The Atlantic are higher if we participate in it. So we took that approach.
We started talking to all the AI companies, all of the large language model companies. We had parameters that we would accept for a deal, parameters we would not accept for a deal, and we reached a deal with OpenAI. So that’s the basic framework.
The deal really has three parts, four parts, depending on how you look at it. Part one is for a limited period of time, two years in our case, they’re allowed to train on our data. So they can read Atlantic stories and they can incorporate that into their base large language model. We have some controls over the kind of outputs they’re allowed to give to people, but they’re allowed to train on our data for two years.
The second part of the deal is the product partnership. They give us credits, and we’re building tools on the business side with the engineering team that use OpenAI. So we don’t have to rely on Llama; we’re just using OpenAI.
Beyond the credits, we are working with them. At some point there may be engineering support, there may not be engineering support. Who knows exactly how that is going to work, but that is a potentially valuable part. And we are launching a lab site soon where we’ll have a whole bunch of experimental tools to help readers.
Mistake number one, and that is the crucial mistake, is making the business deal upstream of the editorial. Another mistake is assuming that these companies will partner with you forever, and that if they say, “We’re going to give you X money this year,” you’ll also have that X money in three years, which is after the contract ends. That is a mistake.
But the much more important mistake is if you start to change the sacred thing you do, which is the creation of stories for the platforms.
So now back to the AI deal. Is there any way in which we will change the way we do our stories because of this deal? Absolutely not; this will have no effect. We will do the exact same stories in 2024 and 2025 that we would’ve done if we didn’t have this deal.
One of the big criticisms here is, okay, you sold this stuff for two years, they’re going to train their model, it’s going to get better. Then the deal will end. They won’t pay you again, but they will have already trained the model. And that value will remain forever and then they will just continue doing whatever they want to do.
There are about 20 different terms that are important when you’re negotiating a deal like this, and that is one of the important ones. It has been publicly stated, so I can say this: they are destroying our data. They will use our data to train any model that they build in the next two years, the two years after we sign that deal.
They train each new model on entirely new data, and so they will have our data for the next two years, but when it gets to GPT-6 they won’t, unless they have another deal. That clause is important both for the reason you said, and also so we have more leverage when there’s another moment of negotiation.
It feels like OpenAI is the challenger. They obviously are the upstart, they’re chaotic in the ways that startups can be chaotic, in a fun way and also in a compromised way.
The real target here, it feels like, is Google, which has had a very extractive relationship with the media for a long time. Now it is keeping more of that traffic for itself. It is also building AI search products, delivering AI results, and paying no one. Do you think a deal like this helps you get leverage against Google?
I think so. Google has a different situation, where they have so much more leverage on us because you can’t block Google. I mean, there are ways you can partially block Google, and you can block this Googlebot but not that Googlebot, but they have a lot more leverage on us than OpenAI does, so the negotiations are different.
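(For context on “this Googlebot, not that Googlebot”: Google documents separate crawler tokens, so a site can keep allowing the search crawler while opting out of AI-training crawlers. The snippet below is a minimal illustrative sketch of that kind of partial block, not The Atlantic’s actual robots.txt.)

```
# Hypothetical robots.txt sketch: allow Google Search's crawler,
# opt out of AI-training crawlers. Not any publisher's actual configuration.

User-agent: Googlebot         # Google Search crawler: still allowed
Disallow:

User-agent: Google-Extended   # Google's token governing AI/Gemini training use
Disallow: /

User-agent: GPTBot            # OpenAI's training crawler
Disallow: /
```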
I also would imagine that they are waiting. There are a lot of things that are happening with OpenAI, including the New York Times lawsuit. I think they’re waiting to see how that shakes out. I haven’t talked to Google directly about this, but if they pay for content, do they have to pay for all the links? And do they have to pay back for 25 years’ worth of it?
So I don’t know what their calculations are, but I think they’re watching what’s happening. And my hope will be that there’s a fair value exchange with Google as they build AI search.
That part where you said OpenAI has already taken it: they’ve already scraped what they refer to as publicly available information, which might include everything all the way up to YouTube, according to the reports that we’ve heard. Do you feel like you’re taking the payment now in recompense for what they’ve already taken? Or is this for the future?
That’s a hard question to answer. This is not like you committed a sin and you’re paying us for the sin; we don’t view it that way. We view it as, you created… I was trying to do a calculation the other day. I was like, “How much value did high-quality journalistic content create for OpenAI?” And you can actually do a kind of back-of-the-envelope calculation, and you can see, based on that rough calculation, how much they owe the journalism industry, or what the journalism industry contributed.
And then you can think about, of what the journalism industry contributed, what percent should go to us and what percent should they keep, right? And that’s sort of one way you come up with a number. I don’t view it as paying for a sin. I view it as, “Okay. They’ve built this thing, it has this value. We’re part of it. We’d like to be paid for it.”
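(To make the shape of that back-of-the-envelope math concrete, here is a small sketch in Python. Every number in it is an invented placeholder rather than a figure from Nick, OpenAI, or the deal; it only illustrates the structure he describes: the value attributed to journalism overall, the share owed back to the industry, and one publisher’s slice of it.)

```python
# Purely illustrative back-of-the-envelope sketch.
# All figures below are made-up placeholders, not numbers from any deal.

value_created_by_journalism = 500_000_000  # hypothetical $ value journalism added to the model
share_owed_to_publishers = 0.25            # hypothetical fraction of that value owed back to the industry
atlantic_share_of_corpus = 0.01            # hypothetical share of that journalism coming from one publisher

payment_estimate = (
    value_created_by_journalism
    * share_owed_to_publishers
    * atlantic_share_of_corpus
)

print(f"Rough estimate: ${payment_estimate:,.0f}")  # -> Rough estimate: $1,250,000
```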
That calculation, when you went to OpenAI with it, did that match what they wanted to pay you? Or were you higher or lower?
That particular calculation has so much variation in it, because of how much you weigh each of the factors, but it’s roughly where we ended up.
The reason I ask it that way is the notion that this is a pre-settlement for a lawsuit that you might’ve filed the way that the New York Times filed a lawsuit, or you’re setting a price floor for a further negotiation with Google, really changes the way you think about the deal itself, right?
So if you’re saying, “You already took it. Just pay us to catch us up, and then in two years, we’ll start over from scratch,” that changes versus, “You’re building GPT-5 and a search product. We want to be on the ground floor as the challenger to Google.” You might accept a discount in that case because you think the upside is higher. What’s the balance there?
We want to maximize several things, right? We want to maximize the amount of money that comes to serious journalism companies. We want to shape the industry in the best possible direction based on our values and the values we think are important. We want to bring in as many readers as we possibly can. And so as we think through the deal, we’re weighing all of those things.
Now, the question of how you maximize money for The Atlantic, or any great publication, is interesting because you do have an option. You can take the New York Times route or the Alden Capital route, and you can sue. We looked at that calculation in the case of OpenAI and chose not to sue. That doesn’t mean we’re not going to sue every other large language model company out there.
You weigh what they’re offering on all those fronts. All the benefits they’re offering, again, the product partnerships, search, et cetera. You weigh all those things versus what it would cost to sue and how much you would get from it, and then you make a choice.
It’s been reported that The Times is a million dollars deep into its legal fees against OpenAI. That’s-
Suggesting they expect to get more than $1 million for the content.
They assume they’ll get more than $1 million. The Atlantic is owned by a billionaire; it’s owned by Laurene Powell Jobs. Would she have fronted $1 million in legal fees, or is that off the table for you?
That’s a complicated question. I mean, the answer, of course, is yes, right? If we made an argument to her that this is what is best for the future of serious journalism, then she would certainly have supported it.
The reason I ask that question that way is, there’s a lot of risk there, and when you have a rich owner, you can accept maybe more risk than if you are a publicly traded company or you have a bunch of VC money like Vox does. But the risk there is almost impossible to ascertain because the copyright law argument is a total coin flip at this moment in time.
Do you think it’s a coin flip? Or do you think it’s a 60/40, 40/60, 70/30, 30/70?
I think it’s a pure coin flip, actually.
You think it’s 50/50? Former copyright lawyer Patel here.
And that is a pure lawyer answer. And I think you can run through the argument, and on a good day, a judge who has just used DALL-E to make a storybook for their grandchild is on your side; and on a bad day, they’ve just seen the two startups that ripped off Johnny B. Goode, and the RIAA is suing them, and they lose. And I think that is as emotional a decision as almost anything right now.
But do you actually think The Times is going to reach an outcome, or do you think they’re going to settle it? Partly you settle based on where you think the case is going, right? And you do the arguments and you’re like, “Oh my God. It’s now 70/30, so we should settle on different terms.”
Right. I think there’s that, and we haven’t gotten through any of that, and we really haven’t seen anything substantive from OpenAI in terms of how most of these models were trained. It’s really under lock and key: what they’ve trained on, what their approach to training was, what their approach to copyright law and training was. So sure, maybe as time goes on, that will change.
But just on a straight “let’s go through the argument” basis: you ingest a bunch of data, you train a model on it, which means you set some weights and you throw the data out, and then it can do this generation. Who knows? If The Times wins, for example, and your two years is up, and it turns out it wasn’t fair use to train these, do you think you’ll be able to get more money? Are you just waiting out the clock on these lawsuits?
Oh. If The Times wins, we will get more money from everybody. Every journalistic organization will get much more money from everybody, right?
If The Times loses-
We’ll all get much less.
I’m just asking, how are you factoring that risk?
Basically, you have a conversation with your lawyers, and I talked to lots of copyright lawyers to decide. If I thought The Times had a 99% chance of winning, I would have a very different perspective going into these negotiations. If I thought The Times had a 1% chance of winning, a different perspective, right? So you make your decisions based on that.
You also weigh other things, right? Will text be important to training large language models two years in the future, or will it all be multimodal data? Will synthetic data be so good? Right? I’ve had people making large language models basically say, “We don’t need you because we can do it all through synthetic data in the future.” And maybe the synthetic data is derivative of the organic data, but you have to weigh what will your data be worth tomorrow?
And therefore, are you getting a better deal now, or will you get a better deal tomorrow? Do you think your data is going to be worth more tomorrow because text will still be valuable? And in fact, if you think that organic, human-certified data that we create at The Atlantic and have been creating forever is going to be more and more valuable, and you think The Times is going to win, well, then you will be more cautious. You would demand more in the deals. I’m not saying you wouldn’t do any deals, but you just have a different framework.
Do you think that the decision to take the deal now is rooted in, “Well, we can get some revenue now, and hopefully all of these copyright lawsuits…”? Because there are a lot of them. The AI industry really just has to lose one to get to where you’re saying, right? The record labels have to win, or The Times has to win, or Sarah Silverman has to win, and then the dominoes start falling in your favor.
But here’s one more factor which I think is interesting. I believe that us doing this deal and the Wall Street Journal doing their deal helps The Times because it shows that there is a market for this stuff.
There’s a criticism like, “Why is there not this collective action?” And the reasons why there isn’t collective action are hard, including antitrust law, which means that I can’t talk to Bankoff and negotiate with him-
Jim Bankoff is the CEO of Vox Media.
Right. So Jim and I can’t talk and negotiate together and get better terms for both of us. There’s another collective action problem where, if you join a group, a consortium, the money presumably is spread based on word contribution, but some people, like The Times, presumably think that their brand value and their words are more valuable on a per-word basis. At the top of the food chain, they have an incentive not to join a consortium. So you have a whole bunch of reasons why you can’t do collective bargaining together as an industry to get better terms, which would probably be better overall for media.
While that is true, one of the ways that we can help the industry is by making deals and setting a market. I believe that us doing a deal with OpenAI makes it easier for us to make deals with the other large language model companies if those come about, I think it makes it easier for other journalistic companies to make deals with OpenAI and others, and I think it makes it more likely that The Times wins their lawsuit.
The fourth factor in the fair use analysis that a court would do is the effect of the new use on the market for the old work. And you’re saying, well, you have to have a market. You have to set some prices for this kind of use.
And we are setting the market.
And you think that that over time will strategically help The Times?
The Times case is going to depend on a thousand things that are more important, but I do think that, as a general principle, setting a market and getting a fair exchange of value is good precedent for our industry.
There’s another layer of implications to taking this kind of deal, and it comes from the people who are making all of the content, who are making the work, who are writing the stories and making all the podcasts. And the thing that really strikes me about it is that The Atlantic’s union is mad. The Vox Media Union, which the Verge team that I manage is in, is mad. The union for New York Magazine, another Vox Media imprint, is mad. They’ve all written letters and circulated statements saying they’re outraged about this, and I’ve been thinking a lot about that outrage and what it means.
No one seems mad when a media organization licenses its content to Apple News or we publish on YouTube, even if the terms from YouTube or any of these other platforms are worse or feel even more exploitative. And I’ve been trying to pull this apart, and what I’ve kind of landed on is that the copyright part of this is just an economic argument. You took our stuff, you didn’t pay for it, now you’ve got to pay for it. You want to use it in some new way? We’ll come to some agreement on some parameters, and you’ll pay for it.
And the money on the economic side does not cure the moral problem that people see, which is partially a labor issue, this technology might displace all of us on some timeline, and partially just the, “Hey, you just took this stuff.” And now the CTO, Mira Murati, is running around saying, “Maybe some creative jobs shouldn’t exist,” right? There’s a blitheness to this industry, particularly from OpenAI.
And that disconnect between the economic problem that copyright law might help you solve or The Times case might help you extract more money from, and the moral dilemma, seems like it’s wider than ever.
Oh, I totally agree. I wrote a book on the history of the Cold War that was published in 2009, and when I learned that it was in the training set of Llama, I had that sort of emotional reaction: “Wait. So the book was pirated?” And not only that. It was chopped up into the wrong order. It was like this violation, right?
And so I think there’s at least two things that are super important here. There’s one, that feeling, like, “Wait a second, they just took this. They didn’t pay for it.” And then secondly, there’s this fear, which is AI could do terrible things to our industry. Absolutely. So you have those two very emotional factors coming together, and this is a deal with an AI company.
So my view, or my role as CEO, is to try to put that aside and to say, “What I’m trying to optimize for is the future health of The Atlantic, the future economics of The Atlantic, the future of this industry. I’m weighing all these different factors together, and I think the deal, net-net, is very good for us in all these ways.”
AI is this rainstorm, or it’s this hurricane, and it’s coming towards our industry, right? It’s tempting to just go out and be like, “Oh my God, there’s a hurricane that’s coming,” and I’m angry about that. But what you really want to do is, it’s a rainstorm, you want to put on a raincoat and put on an umbrella. If you’re a farmer, you want to figure out what new crops to plant. You want to prepare and deal with it.
And so my job is to try to separate the fear of what might happen and work as hard as I can for the best possible outcome, knowing that because I have done a deal with an AI company, people will be angry because AI could be a very bad thing, and so there’s this association. But regardless, I have to try to do what is best for The Atlantic and for the industry.
That was the CEO answer. There’s a reason I introduced you as the former editor-in-chief of Wired, because I want that answer too, which is you ran an industry-leading publication during the social media era.
A lot of what I’ve heard from people who wish to regulate AI or slow it down or anything is we failed to learn anything from the social media era. We failed to learn how to regulate these companies, we failed to learn how to hold them in check. We all certainly failed to learn how to get paid for how much they use our content. Facebook made a bunch of money distributing our content and media companies made none. YouTube, I think, still doesn’t pay high enough rates to support a news organization on YouTube, and it’s just a moral failure on YouTube’s part.
From that perspective, as you watched the social media era unfold, what mistakes from that era are you trying to avoid making? Because the idea that the tech companies are just the weather is very tempting. They’re just going to do this and we can’t stop it. Social media is just going to happen to us.
And it did, but I think a lot of people are looking back on that and saying, “Boy, did we just make a bunch of assumptions about their motivations or how people would communicate using these tools.” Those assumptions turned out to be utterly wrong, and we should have actually stopped it earlier or changed it earlier.
Answering as a CEO, that is what we are trying to do. We are trying to figure out a way for these tools to evolve in the best possible direction. Maybe the weather is the wrong example, because we do have some control in the very early stages in making these things better. Just like if there had been a way early in Facebook’s history to shift the way that News Feed worked, so that established brands weren’t given the same weight as non-established brands. There were like 20 fundamental sins at the beginning of the News Feed, which ended up being hugely damaging to both journalism and American democracy.
But one of the tweaks would’ve been: can you change the weighting, the design, the way fonts work, or whatever, so that somebody in Macedonia can’t start a publication called The Verge with a Z at the end that looks just like you and has the exact same weight? I think that one of the lessons is to pay a lot of attention. So the AI search products have not been built and have not been launched. As they’re built and as they’re launched, what are the values we want embedded in them? How much text do we want them to show? How do we want the external links to work? How much summarization do we want? Those are really crucial questions to get right at the beginning, and I think we are more likely to get them right as they do these kinds of deals.
The other thing I’ll say, though, as the former editor of Wired: like, “Oh my God.” Some days I wake up and I’m like, “I wish I was a reporter again.” It is so amazing, the stories that… I mean, you guys are telling a lot of them, but the opportunity to report, because it is total madness right now. It’s like the best story to report on in years. It’s incredible. And I can’t do any of that because I’m a businessman now and I don’t even talk to the editors. I don’t even know what we’re going to run in The Atlantic today, but I would love… I spent a lot of my time writing those stories on Facebook back when I was there and at Wired. I loved that. I love writing about these crazy people in this world of churn making these massive decisions. It’s so much fun.
When you say it’s all crazy out there, the thing that really strikes me is that, I would say even two years ago, people thought the internet had sort of calcified into a series of platforms and this is what it’s going to look like. And then Elon bought Twitter and then ChatGPT showed up, and now it feels like everything’s breaking apart. And the thing that feels most like it’s breaking apart to me is the assumption that the big platforms have our best interests at heart, or can be trusted, or can be trusted with our children. You see the spate of legislation that’s out there that would regulate how kids use platforms. You see all the reporting that is out there about Facebook willfully ignoring some of the problems it causes with teenagers.
The other side of it is a lot of the underlying assumptions about the value that is being exchanged, are kind of like Google’s assumptions. Google does image search, they get sued, they win because they’re a bunch of kids. Google indexes all of our sites, but they send us traffic and we sort of agreed with that approach for a long time. They keep winning because they are innocents or they at least hold themselves out to be innocents and they deliver a lot of value in a new way. That part feels like it has definitely changed to me. This assumption that it’s just a bunch of kids trying to change the world, and of course we should let them skate by and ask for forgiveness, not permission. Do you think from the business point of view, that that is actually going to create opportunities to bring value back to the people who make the work because that’s the real problem here?
I don’t think that’s changed. I think that changed in 2016, or that changed in late 2016, early 2017, and then by Cambridge Analytica, which was 2018, I think that’s when… I mean, that is all changing now. You are very right that is changing, but I think the trajectory-
The specific similarity that I’m drawing is not “I can’t trust them because of Cambridge Analytica.” I’m pointing right at Perplexity scraping a bunch of paywalled websites and showing the results, or OpenAI training on YouTube to make Sora, or Suno, the company the RIAA just sued, making music… And the underlying piece of it is, “Well, it’s just out there. It’s just ours to take, and we’ll pay some money to cure it at the end, and that’s just the cost of doing business.”
So this is at stake, and it’s at stake today, this very moment while we’re doing this, and it’s at stake in the case of Perplexity, I think. So Google got away with stuff because, “Hey, we’re cool kids and we’re wearing five-fingered lizard shoes to meetings with senators.” And it’s all cool, and they get away with it for a while, and then eventually regulations catch up. They have to balance. It’s complicated. The dynamic changes. With Facebook, the dynamic changes after the election of Trump, and then even more so with Cambridge Analytica.
Uber comes along and has a totally different strategy, which is, “We’re going to get away with it by just ignoring everything and then making so much money that we’re huge, and then we’ll follow along.” Which is a very different approach. I think that Perplexity is trying to decide, “Are we going to be Uber, and just ignore robots.txt?” You read all these stories. “Or are we going to try to do kind of the Google thing and just be like, ‘We’re an AI company, we’re going to get big and see what happens’? Or are we going to change and cooperate with the publishers?” And I think that is at stake right now.
And my sense is that there are probably ways that we, as an industry, can push Perplexity onto that third path that I’m talking about, where they are a responsible player that doesn’t do 900-word summaries of a 901-word story, and that actually does a sort of fair use summary and a proper link out. Will that happen? If that happens, that is so much better for us than if it does not. And so what is the role that I can play in making that happen? And what is the role that you can play in making that happen? That is very important for the future of media.
And I think it’s particularly important because I think the biggest thing happening to media right now, or the most… You talked about this in the amazing conversation with Ezra Klein about the enshittification of the web; that is the thing that is most at stake right now. AI content right now is bad. What if AI content becomes good? What if the web becomes sort of indistinguishable and you can’t find your way around? How do you navigate through that? Building search engines that are still able to direct you to legitimate, real content, not the billions of spin-offs, is one of the most existential problems that exists. And if that problem is not solved, we’re in a world of hurt. So that’s the thing happening right now that I am most worried about, intrigued by, and interested in for the next couple of years.
Because Google’s entire business model probably depends on the open web. I mean, the thing you’re talking about breaking is Google Search, broadly. If the web becomes so enshittified that Google cannot sort the wheat from the chaff, that version of the web comes to an end, and maybe we’ve all paid enough attention to Perplexity and they have a deal, or OpenAI’s search product has better sources from The Atlantic and whoever else, and that will become the winner because people will seek out quality. It’s a big bet, but it kind of relies on the web becoming so polluted that Google can’t sort it out.
Source: https://www.theverge.com/2024/7/11/24196396/the-atlantic-openai-licensing-deal-ai-news-journalism-web-future-decoder-podcasts