Is Journalism Doomed in the Age of AI? A Conversation with Julian Sanchez
AI is the latest symptom of the challenges facing journalism, although it poses special dangers of its own
Subscribe to ReImagining Liberty in your favorite podcast app: Apple Podcasts | Spotify | Google Podcasts | YouTube | RSS
Aaron Ross Powell: Welcome to ReImagining Liberty, a show about the emancipatory and cosmopolitan case for radical social, political, and economic freedom. I'm Aaron Ross Powell. Both the short-term and long-term impacts of AI technologies are unknown, but they're almost certain to be significant. AI will destroy some industries, accelerate others, and revolutionize still more, and it seems no one has a lukewarm opinion about it. You're either excited about its prospects or convinced it's nothing more than intellectual property theft or the inevitable end of the market for human creativity.
Worries are particularly acute about what this means for journalism, and those worries are worth taking seriously given the importance of quality journalism to a free society and a functioning democracy. My guest today, writer Julian Sanchez, has worked as a journalist and policy analyst and thought quite a lot about these issues. He joins me for a conversation about AI, the state of content creation, and the future of journalism as a profession.
The following transcript has been lightly edited for flow and clarity.
Aaron Ross Powell: Recently, I saw a poll someone had made on the social media site Threads asking the question, “Do you think that artificial intelligence and the related technologies will be a net benefit or net harmful to society just in general?” Maybe we start there. What do you think?
Julian Sanchez: I mean, I think potentially a net benefit, but I think, you know, as with a lot of technologies, as arguably with print, there is probably a long lead time where it's going to be on net more disruptive than helpful. I think you can make the case that in its first century or two, print was a pretty big problem on net. I think you can tie the early modern witch panics pretty tightly to print, to books like the Malleus Maleficarum and the spread of the viral meme of witchcraft. Obviously, depending on your theology, maybe this is good or bad, but we saw centuries of religious warfare that are intimately connected to the new capability to have widely dispersed scripture and more diverse interpretations of scripture. And I think analogously, we will eventually probably find ways in which AI is going to be massively beneficial, but that's going to be disruptive in a lot of ways.
First, because I think a lot of people are going to find malign applications for it that are more easy to rapidly deploy than the benign ones. And also, just because I think it's going to take a long time to figure out what the role of human beings is when a lot of cognitive work can be done by automated systems that currently is sort of aspirational work for humans. The dream of automation for a long time was, once the machines can do the drudge work and the manual labor, humans will at last be free to write poetry and novels and make paintings and create sculptures. And there's this plausible dystopian future where now that the computers can take care of making the paintings and writing the novels, humans are free to clean the sewers.
Aaron Ross Powell: That seems like it has an element of snobbishness in its concern. And this is something that I notice in the critiques of this technology, which is that nobody complained about automation…Most of us are not buying artisanal bread, right? The bread that we buy was made in a factory at scale. It's using techniques that bakers developed over centuries or millennia, but it's machines just cranking out another loaf of bread that looks exactly the same, not inventing new ways of baking. There's no heart and soul in the loaves at the grocery store. But no one really complains about that. There isn't this like widespread movement on social media to yell at anyone who mentions that they went grocery shopping.
Julian Sanchez: Right, no, I agree. And look, I'm voicing this critique just to have it out there more than to endorse it. But I think this goes back to the idea of what is the ideal of what humans are going to be doing in the future. For a long time, the answer to folks who complained that the machine looms are going to throw a lot of weavers out of work and farm machinery is going to throw a lot of agricultural workers out of work was, well, in the short term, that may be painful, but in the long run, what that means is manual labor will be done by machines and the jobs that are left to humans are going to be more attractive, are going to be closer to the kind of thing that you would like to spend your time doing.
I think the reason you see this kind of pushback from artists is that it's harder to make that kind of argument. If your aspiration was, well, I want to be a composer and I want to be an artist, then it's not that, well, there's something else better you can do, even if the machine takes your old job. This is the kind of thing human beings used to imagine they would be free to do with their time when scarcity was reduced. I'm sympathetic to that. And in particular, when you think about the amplification of a kind of winner-take-all dynamic. So, in the near term, ChatGPT is not going to give you output on par with, you know, N. K. Jemisin or David Foster Wallace or James Joyce. It's going to give you kind of passable copy. And the same thing with art. It's probably not going to do anything terribly innovative. It will have a library of a certain number of styles and will often give you some fairly confused responses to a prompt. But, you know, the top whatever one percent of artists, writers, and composers probably don't have anything to worry about.
Journalism's problems long predate AI and have to do with factors unrelated to it. It's essentially about the loss of the ability to leverage a kind of monopoly on the distribution of large amounts of pulped wood to lots of households, and the attention that came with it, for an advertising model.
But as it turns out, that's not by definition most of the people in those spheres. Being bad at things is kind of a prerequisite to being good at them. People who are great composers often start out doing less rarefied kinds of writing or composition or playing or painting. And so, I understand people's anxiety that the kind of work that a writer or a painter or a composer might do while they're honing their craft is increasingly going to be—again, unless you get a kind of mass desire for, let's say, the artisanal human-produced version—hard to justify economically. If you're saying, well, look, I don't need a great original work of art. What I need is an illustration for this brochure or a, you know, a jingle for this advertisement. If you have expert systems that can do that essentially for free, and particularly in regions where there isn't a kind of demand on the consumer side for, you know, we want the human version of this. That's unsettling. And again, you know, unsettling in a way it maybe isn't when people hear, you know, gosh, maybe humans having to physically plow the fields isn't going to be a thing anymore. I understand the concern. I understand why people view it differently, even though I think probably what we're going to see, and what we're already ending up seeing, is something more like a kind of collaborative relationship.
Aaron Ross Powell: I think that last point is right, because it feels less like this will replace people outright and more like this will raise the floor for a lot of people's abilities, and then the human touch allows you to go above and beyond that. So, you don't publish the article that you wrote with ChatGPT, but you use it as something to refine your ideas with, or to get you starter prose to then build off of, or to help you organize your massive notes into an outline, and so on.
But I guess the thing that's striking to me in a lot of the objections to it is you can imagine analogous situations that, again, people don't really complain about in the same way. So, they treat it as…these models and the work that they do and the kinds of things they create are qualitatively different somehow than what came before, but then ignore the analogous situation. So, the bread baking is one of these. To say there just is far more artistry in being the person who writes poetry than the person who dedicates their career to the craft of baking bread is basically to express a preference, I happen to like poetry more than bread, but it would be hard to justify that there's less craft and artistry in one versus the other. But on the producing stuff that is cheaper: the market for journalists declined in part because you had this wave of 20-somethings who were willing to crank out content for content farms at vanishingly small salaries, because 20 of them would live in a Brooklyn walk-up, and that took jobs away. Or internationalizing content creation means that there are now people overseas who will make you a logo for much less, because the cost of living in Moldova is much lower than it is in Manhattan. But we don't see a similar, like, it is wrong for you as a company that wants a logo to hire a designer in Moldova versus the much more expensive person in Manhattan. We don't tend to see a, like, we need to stop the kids from entering journalism because they're bringing our salaries down, devaluing our content, and so on.
In the same way that a YouTuber might have a level of Patreon sponsorship where the content is free, but also if you're a sponsor, you get to do a Zoom chat with them, or you get to be on their Discord and have some kind of community…we're seeing other industries shifting toward something very similar. You have The Atlantic and The New York Times increasingly moving toward the idea that the prestigious publication is going to create a kind of aura of desirability around a group of people.
It seems to be that the objection is, well, this can do it at scale, right? But there are lots of things that can happen at scale. So that's a quantitative difference. And then, part of it seems to be that, like, if you, Julian, want a picture for your wall, you have some sort of moral responsibility to hire a human artist to make that picture versus an AI, and that if you can't afford a human artist, your moral responsibility is to not have the picture versus to get it at a price that you can afford, which might be free or the $10-a-month Midjourney membership or whatever. And so that's, I guess at the broad level, my hang-up with a lot of this: it seems like the arguments that are made against this tech, if we took them seriously, would also apply in a lot of domains in which they're not typically applied, and that people don't seem to feel the same degree of rage about.
Julian Sanchez: Yeah, I mean, the wrinkle here, and I don't know if this is necessarily a very good argument, but of course, the argument people raise is these models are all trained on vast amounts of data. So, this is in some sense uncompensated exploitation of the labor of all the artists whose work is fed into this. Although you could also say, well, you know, there's plenty of work that's now in the public domain. So, you could do quite a bit of training without at least exploiting the labor of any currently living person who has a legally recognized right in that work. But no, I don't find that compelling.
I think the arguments here are really reasoned backwards from anxiety about a world in which it's not economically viable to be an artist or a writer, or at least it's not economically viable for more than some very small number of people to pursue that work. We'll see, again, whether that turns out to be the case or whether the kinds of things that it's viable to get hired to do alter somewhat as certain tasks are taken over by AI.
I think this is somewhat analogous to what we see in complaints about big tech related to journalism. I think arguably, it's the case that the sort of nosedive of journalism as an industry is bad for humanity, that functioning democracies need people doing journalistic work, and the fact that it's increasingly not viable to underwrite that work is a bad thing. And I think what you see as a result of that is people kind of casting about for a model on which it's viable.
I was at a conference just a few weeks ago sponsored by the Knight Foundation where a writer named Cory Doctorow was on stage talking about how Google and other search engines that make ad revenue off news sites are stealing from newspapers and news sites when they sell ads on search results for them. Now, I think that's pretty hard to defend when you sort of think about it. We don't think they're stealing from Donald Trump whenever people search for Donald Trump. But what is motivating the attempt to find a wrong in need of compensation is the absolutely justifiable reaction that, gosh, if it is no longer economically sustainable for The New York Times and The Washington Post, or at least some institution doing that kind of work, to support itself and finance the labor that it takes to do good investigative journalism, that bodes very ill for democracy, quite apart from whether it makes a lot of journalists who'd like to earn a salary unhappy. In a sense, the idea that that's unacceptable then drives you to say, well, then there must be a model that we can justify that makes it tenable again. And so, we can convince ourselves that Google search that makes money off news results must be stealing, and so they owe money that's got to be paid. Again, it's not that the moral argument is in itself credible. It's the sense that something has to be true that enables us to make it viable to do journalism.
Aaron Ross Powell: So why does the argument or the strategy for argumentation against this technology, or its widespread use, or its unencumbered use, flow in that direction? So why is it that artists who are upset about the potential impact this has on their livelihoods begin with arguments about intellectual property, or the nature of creativity or the human touch versus just coming out and saying, look, I work in an industry that I think this is going to demolish. My livelihood depends on that industry persisting, potentially growing and so on, and so therefore this technology is bad because it will hurt my livelihood or therefore this technology is bad because it will lead to a decline in the kind of journalism that is necessary for a democracy to function well. Why hide the ultimate concern versus just leading with the ultimate concern?
Julian Sanchez: I do see some people indeed leading with that concern. So, it's not that nobody is saying that. But also, in a sense, this is the old bootleggers-and-Baptists syndrome. An appeal to your interests is always less generally persuasive than an appeal to a principle. So, if you say this will impair my ability to make a living, one might rightly say, well, new technology often shakes up industries in ways that make people have to look for other work, or lots of things might make it harder for you to earn a living. And we don't think that's usually a justification for saying we're going to shut down a technology that's made that harder, if it's seen as: I personally am the one affected by this. Whereas if you can make a kind of broader argument about the undesirability of it, that's more appealing to a lot of people.
AI is kind of the monosodium glutamate for a lot of existing trends; it just makes it possible to ratchet up the speed and the scale.
And again, in the case of journalism at least, I think there is a good argument about the general sustainability, as of yet at least. There are a lot of kinds of reporting AI can't do because it doesn't have legs and can't just do a lot of things autonomously. Maybe that will change at some point, but for the near future there are lots of kinds of reporting that are difficult for AI systems to do. But we have a model for financing journalism that's based on selling the writing, not selling the labor that goes into it. You have to sell the output, and we have this sort of IP problem of, well, we recognize copyright in the particular expression of, let's say, a news report or a news analysis, but not the underlying ideas. So if it becomes essentially costless to reproduce all the information in a news story without actually infringing copyright, without having any string of, let's say, four words that are exactly the same outside of a direct quotation, it becomes very hard, on really any model, to sell that content if you also have to pay the overhead of the legwork that went into reporting it.
Apart from the displeasure this causes to journalists who like to earn a salary, there is a good sort of society-wide case that if it is not possible to underwrite that kind of reporting, that is bad for us collectively. I think one problem is we're at a kind of nadir of trust in media, so people are a lot less open to that argument probably than they ought to be. But also, it's very hard to find a solution that makes a lot of sense, for very good reasons. People are not particularly sanguine about the idea of saying, if the market can't functionally underwrite journalism anymore, the government ought to do it. We understand, I think, for pretty well-trod reasons why that's an unattractive solution, at least as the primary solution. It might be that in a competitive market, having some news outlets that are subsidized isn't that bad an idea. But I don't think anyone likes the idea very much that most or all news outlets would require government largesse to function. It would be in practice impossible to preserve the necessary independence under that schema.
Then the problem is, well, what is your alternative funding model? Casting about for alternatives, one answer is: all right, if the problem to some extent seems to be rooted in tech, and the tech companies have a lot of money, finding an argument that puts them on the hook for paying for the process seems attractive to a lot of people. The reality, though, is that it's just going to be very hard, even at a policy level, to prevent, without creating equally bad problems, the harvesting and copying of news content that is already happening in human form with content sweatshops and seems obviously on the horizon at automated scale.
Aaron Ross Powell: So that last point gets to the question I wanted to ask, which is, how new is this, and is the wrong thing being blamed for the malaise in journalism? And I should say, I'm very much in agreement that having a robust ecosystem of journalism and a robust culture of journalism is really important. Both of us used to work for a think tank, and our work was ultimately parasitic upon, first, academic research that we would draw on, and then journalism. You know, we basically would take those two things…
Julian Sanchez: …I was also, in fact, a journalist for a decade before I became a think tanker.
Aaron Ross Powell: Right, so you were producing and then consuming and reworking, and so much of content is just taking and remixing and thinking about and analyzing that on-the-ground reporting. But it's long been the case that hard reporting is not economically viable by itself. Newspapers didn't make money by selling subscriptions to hard reporting; they made money by selling classified ads. Most of a newspaper's draw is the opinion section, the sports section, the entertainment section, etc. That's what readers subscribe for. If it was just the hard reporting, they wouldn't be subscribing in the first place. None of that is new. As you mentioned, there have long been these kinds of content farms that take existing journalistic pieces and lightly rewrite them to publish on various fringey and scammy blogs and so on. None of that is new. And this collapse in journalism: I remember seeing somewhere recently that, of the major national newspapers, The New York Times is the only one that's profitable. The economic model has been collapsing for a while, and AI hasn't even really begun yet. These nightmare scenarios haven't hit yet and…
Julian Sanchez: …No, of course, journalism's problems long predate AI and have to do with factors unrelated to it. I mean, it's essentially about the loss of the ability to leverage a kind of monopoly on the distribution of large amounts of pulped wood to lots of households, and the attention that came with it, for an advertising model. But, that said, the industry is already on the ropes, and it's not hard to imagine an additional blow at this point being kind of lethal, given the layoffs we've seen in recent weeks.
Aaron Ross Powell: Is there a worry that all the attention is on the problems that these models and technologies might represent, rather than the cultural ones? Like, we can save journalism if we can just get OpenAI to stop consuming our content and then regurgitating it, or we can save journalism if we stop Google from indexing our websites, versus this kind of cultural problem of declining trust in the institutions. It's hard for the institutions to then make a case for their continued relevance and necessity, and therefore reasons why you should pay for them in one way or another, if people don't trust them. But also, you can tell people that the hard news is like the vegetables, and that eating their vegetables is good for them, but they prefer the sweets of the op-ed pages. And if we focus our attention on, Big Tech has lots of money and Big Tech is doing this thing, so we can blame this thing for the problem and maybe extract some money from Big Tech, and let that take the eye off the ball of the cultural and consumer preferences, then we're at best kind of kicking the collapsing can down the road a bit.
Julian Sanchez: I think that's right, although in a sense, AI would pose a threat even if you sort of fixed the preferences, even if people became more civically responsible and understood it's better for them to consume fewer listicles and more good long-form reporting. You would still have the problem of the sort of free riding of expert systems that can replicate that without violating intellectual property as currently conceived. One answer is a radical rethinking of how intellectual property works, so that that kind of derivative production would be understood as infringing: that, in a sense, putting an article through an AI that synthesizes and does a rewrite is viewed as creation of a derivative work in a way that the current law probably wouldn't consider it to be. But you run into, I think, again, a lot of problems there.
If it becomes essentially costless to reproduce all the information in a news story without actually infringing copyright, without having any string of four words that are exactly the same outside of a direct quotation, it becomes very hard, on really any model, to sell that content if you also have to pay the overhead of the legwork that went into reporting it.
The convenient thing about copyright, to some extent, as it currently operates, is that it is in a sense based on surfaces, right? Whether one piece of music infringes on another does in part depend on whether the composer of the second did actually hear the first piece of music. But when you ask, well, are they different? It has to do with whether they are essentially different enough to an ordinary auditor that they sound like different pieces. It's not a mathematical or music-theoretical definition. It's a kind of ordinary person's response standard. But that at least is in some sense transparent and based on surface features of the work, and not in interrogating the process that went into composition and what internally happened in someone's brain as they were transforming and synthesizing their influences.
I will say one thing that we've seen as a kind of response, at different levels of cultural production, is the very familiar move that we see on things like YouTube toward monetizing the parasocial relationship with the audience more than the actual content. So you may have sponsors and you may have ads that run on your YouTube video, but the way a lot of this stuff works, and frankly the way a lot of Substackers now are operating, is less about trying to get people to pay upfront for the content, or even trying to monetize the content via ads, than about trying to create a sense of relationship to the human being behind the work, a sense that you're supporting a person whose work you appreciate and not merely consuming the product. And to some extent, I think that leads to some potentially somewhat dubious incentives to create the illusion of a real social relationship that doesn't really exist.
But we're seeing large enterprises sort of emulate that model as well. So, in the same way that a YouTuber might have a level of Patreon sponsorship where the content is free, but also if you're a sponsor, you get to do a Zoom chat with them, or you get to be on their Discord and have some kind of community, maybe even play a game of some kind with them if they're a game creator. And we're seeing other industries shifting toward something very similar, where you have The Atlantic and The New York Times increasingly moving toward the idea that the prestigious publication is going to create a kind of aura of desirability around a group of people. And then what you can sell access to is a physical event where you're going to go and interact with those people.
Actually, a few years ago, my partner and I, who love doing crosswords together, got as a gift a little event with Will Shortz, where a bunch of crossword fans paid some amount to go on a Zoom chat and do a little workshop in crossword constructing. So that was fun, and probably not the kind of thing that's going to fund the totality of The New York Times. But we're seeing a shift in emphasis, again, both with sort of freelance creators but also institutions, toward trying to monetize the sense of relationship to a human that, at least in the short term, is probably not something AI can copy as well. The question is whether, again, that's enough to fund the operations of The New York Times.
Aaron Ross Powell: I've seen this in fiction-writer communities, where for quite a while it used to be you were the writer: you wrote the book, you sent it to your agent, your agent put it in front of an editor at a major publishing house, they published it and marketed it, and maybe you had to go and do some signings somewhere, but your job was just to put words on a page. But that has shifted; even the major publishing houses now basically expect the author to do most if not all of the marketing, and you're supposed to be active on social media and building these parasocial relationships in order to sell books. That's not just time-consuming, time that you're not spending putting prose on a page in your typewriter; it also basically means that success now selects for features of the author that aren't how good of a book they can write, but instead how engaging of a personality they are, how good they are at crafting these parasocial relationships, and so on. Is there a similar thing that happens in journalism? It seems like there's a tension between the objectivity of hard reporting and being a personality, especially if the way that you find success is to inject enough of your personality into your work that people think, I'm not just reading this breaking news story about Trump's legal troubles because it's information-packed, but because I like the journalist who wrote it. That would seem to create incentives to make your journalism more personality-filled. But that might come at the expense of the kind of hard objectivity. Influencers are not generally the people you go to for hard news.
Julian Sanchez: Right. More broadly speaking, it would not be desirable if the only people who were able to have successful careers as novelists or journalists were people who were skilled at making engaging TikToks, or even, frankly, people who are compelling on a stage at an event you might pay to see. And also, I think there is something to be said for the idea that the decline in trust in journalism is probably… I could go on for quite a long time about what I think the causes of that are, but I think one factor is the rise of social media sort of breaking the fiction of the journalist as a kind of tabula rasa conveyor. It's not that journalists obviously didn't have personal opinions about the things they covered before. Of course they always did. But in the social media era, and I think this is one reason a lot of newspapers got very skittish and tried to kind of clamp down on the kind of content it was acceptable for reporters to be producing on social media, when that's all visible, when it's clear that the reporter does have a personal opinion, whether or not they're good at being fair and, I don't know if you want to say objective, but accurate and balanced in their coverage, I think it makes it harder for people to disconnect that and trust that, as a professional, that's not going to be coloring their coverage.
I understand people's anxiety that the kind of work that a writer or a painter or a composer might do while they're honing their craft is increasingly going to be…hard to justify economically [with AI]….That's unsettling. And again, you know, unsettling in a way it maybe isn't when people hear, you know, gosh, maybe humans having to physically plow the fields isn't going to be a thing anymore.
To some extent, maybe that’s healthy. It is not a terrible thing to be conscious that everything comes from a human's perspective, even if they adopt the conventions of journalism, they talk about this reporter instead of using the first-person pronoun. A certain amount of that is healthy, but the kind of nihilism about journalism that a lot of the public seems to have fallen into, I think is less so, and indeed not justified by the underlying facts. To some extent that's not new. It has always been the case that an author or musician who was strikingly attractive had certain advantages unrelated to the quality of the underlying product. And there's probably a lot of brilliant songwriters who are not particularly photogenic who might like to have their own careers but have wound up writing music for fit and attractive people with a passable voice and less songwriting talent. That is not that is not an entire novelty, but I’m inclined to agree It is to some extent undesirable.
But in a way, the point you were making earlier was that it's always sort of been the case that the model for a lot of these industries has been orthogonal to the underlying product, right? So, the model for journalism was, well, it happens that we are delivering this bundle of pulp paper to people's doorsteps on a daily basis, and that drives a certain amount of attention, and so we can make money off the fact that there's a bunch of businesses that would very much like people to look at their coupon or learn about the new hair tonic that they're releasing. And so we can get more revenue from that than people are willing to pony up just for the value they're getting directly from the creative work or the journalistic work. It's always been the case both that success is subject to a lot of factors other than the quality of the work in isolation, and that the way revenue is driven is often a little bit orthogonal to the work in isolation, as opposed to the way it functions as a hook that makes something else attractive.
Aaron Ross Powell: Is there potentially then a self-correcting mechanism in this? So even before we get to the question of AI wanting to regurgitate, summarize news, you could tell a story that it's not in, say, Facebook's economic interest for the entire news industry to collapse because one of the appeals of Facebook is as a replacement to like…Both of us remember the Google reader era and I still use an RSS reader to aggregate feeds from a bunch of sources and read them all conveniently in one place. And that's the role that Facebook plays for…
Julian Sanchez: Do you read your audiobooks on wax cylinder too?
Aaron Ross Powell: No, I have upgraded to listening to those things digitally. But, you know, Facebook took on that role for a lot of people: it was a place I could go, one place, where the most important stories of the day would show up in my feed. Or Twitter played this role for a lot of journalists; their primary news source was their curated Twitter feed or Twitter lists. And so that creates economic value for Facebook or Twitter or whatever is replacing Twitter, because I don't know how many people use it as their primary source of news now.
But if that industry collapsed, then it takes away economic value from them. Similarly, if it shifts so that the way I get my news is I log into Google and have their Gemini AI model tell me what's happened in the world in the last 24 hours, the value of that, to me, depends on how good the data is that Google is able to consume. And so these people who are being blamed for the collapse of the industry would seem to have a very strong economic interest, if not a civic, democratic-values interest, in seeing a healthy news industry producing the kind of content that they're dependent on. And so should we have a relative degree of optimism that, because of these financial incentives, even if you or I can't immediately imagine what the new model to pay for it will be (it's no longer selling subscriptions because of the opinion pages, it's no longer selling classified ads; those models of subsidizing news don't work), we can be relatively confident a new one will come along?
Julian Sanchez: I'll confess I'm not that confident, in part because, you know, Meta and Facebook seem to have made the calculation that, for various reasons, they do not want to be a major news source anymore. They don't think that the revenue they can hope to net from people consuming news on their platforms is worth the candle, partly because in a lot of countries you see news outlets trying to effectively get their cut of ad revenue from platforms that are running news content, but also because it brings a lot of political scrutiny and ends up getting people dragged in front of congressional hearings if the way their algorithm is handling news is not to the liking of one political faction or another. At least that company has very clearly decided, they've essentially said, look, on Threads, Meta's sort of Twitter clone, we're algorithmically de-emphasizing political content, except for people who actively tell us they want to opt into seeing more of it, because frankly it's sort of a hassle. It's not worth it. I think it's an open question whether they're going to step up to the plate.
People are not particularly sanguine about the idea of saying, if the market can't functionally underwrite journalism anymore, the government ought to do it…It might be that in a competitive market, having some news outlets that are subsidized isn't that bad an idea. But I don't think anyone likes the idea very much that most or all news outlets would require government largesse to function. It would be in practice impossible to preserve the necessary independence under that scheme.
It's also, I think, an open question whether, to the extent they find it worthwhile to invest in that, they're going to invest in a way that happily dovetails with the sort of civic motives for which you would want a healthy journalism. Obviously, we already don't have journalism that perfectly overlaps with what you would want the resource allocation to be if what you were maximally interested in was a well-informed democratic polis capable of governing itself in a reasonable way. So perhaps we shouldn't be comparing imperfect real-world results to the ideal. Certainly we shouldn't. But with the kind of journalism they would find it worthwhile to fund for traffic-driving purposes, we don't know how far, let's say, it deviates from the kind of journalism you probably think is necessary for a well-functioning republic.
And again, a lot of this has very little to do with AI. AI is kind of the monosodium glutamate for a lot of existing trends; it just makes it possible to ratchet up the speed and the scale. So, another problem is, it turns out people like news, but they would rather have news that sort of flatters their priors and reinforces their pre-existing worldview or their kind of tribal identification. And so, the economic situation that created the Edward R. Murrow, "that's the way it is," style of journalism that presents itself as objective at least, or makes some kind of attempt to approximate whatever objectivity means, was fundamentally about...it's really only sustainable to have a couple of newspapers in most cities. There's a limited number of broadcast television channels. You don't want people to flip away, so it doesn't make sense to try and narrowcast. But increasingly now, it is viable to try and narrowcast, and that was true before AI. It's even more true with AI, when you can have a kind of bespoke version of the news article that's tailored to precisely your set of priors, that has some maybe kernel of shared fact, but the emphasis and context is geared toward what is going to maximize your continued engagement with that piece of content. I think if you tell Google or Meta, hey, you can get more engagement out of people if you to some extent subsidize the production of news, the kind of news they produce is going to be very much geared toward maximizing engagement, and the kind of news that satisfies those criteria might be suboptimal on a lot of other dimensions we care about.
© The UnPopulist 2024