In the Age of Social Media, Can We Use the Marketplace of Ideas to Serve Truth, Not Lies?
Without content moderation, sites would be unusable; yet moderating content opens platforms to attacks by aggrieved but powerful political players

Today, we bring you the Fighting Misinformation Without Censorship panel at the Institute for the Study of Modern Authoritarianism’s—The UnPopulist’s parent organization—inaugural Liberalism for the 21st Century conference held last summer in Washington, D.C. (You can check out the other panels on our YouTube channel and on this site.) The conference was a great success, getting rave reviews at Damon Linker’s newsletter, The Dispatch, and The Bulwark.
But this panel has particular relevance at this moment as Meta’s Mark Zuckerberg, under pressure from the incoming administration, has condemned the outgoing one and folded Facebook’s fact-checking operations while pledging to give freer rein to political content. The panelists in fact predicted that something like this might happen given the intense political backlash to admittedly imperfect efforts by social media companies to put in place workable content moderation policies, especially on highly polarizing—yet exceedingly important—issues such as ensuring election integrity and combating the pandemic. This forced the companies to confront inherently difficult epistemic questions about what constitutes falsehoods and censorship that had no clear-cut answers, leaving them vulnerable to attacks and also efforts to “work the refs.”
The panel was deftly moderated by Jonathan Rauch, author of The Constitution of Knowledge: A Defense of Truth and The Kindly Inquisitors, books that stand up against falsehoods and propaganda on the internet and for an absolutist understanding of free speech, respectively. The panel also featured Renée DiResta, a contributor to The UnPopulist and author of the brilliant Invisible Rulers: The People Who Turn Lies Into Reality, which unpacks how niche social media influencers are shaping narratives; Katie Harbath, who was Facebook’s director of public policy, leading efforts at election management and advising political leaders on social media use; and lawyer Berin Szóka, president of TechFreedom, a tech policy think tank that is dedicated to studying the legal aspects of the digital revolution and the new free speech dilemmas it raises.
This was an exceedingly nuanced and eye-opening discussion that we encourage you to watch in its entirety, including the lively Q&A section that is not featured in the edited transcript below.
The following transcript of the panel discussion has been adjusted for flow and clarity.
Jonathan Rauch: The title of our panel is “Fighting Misinformation Without Censorship”—three active words, not counting the preposition. We’re going to talk about each of them. First, do we agree that misinformation or disinformation is an actual thing? Or is it just a term indicating a disagreement in which you try to stigmatize the other side? Second, do we agree that it needs to be fought? Is it, in fact, compatible with a liberal democracy, or are we hyperventilating about it? And third, in the context of digital and social media, what do we mean by censorship? Do we mean takedowns? Do we mean disamplification? And whatever we mean by censorship, is it, in fact, a bad thing?
Let’s start with question one: Do we think that misinformation or disinformation is an actual thing?
Katie Harbath: I think misinformation or disinformation is something, but it has become politicized. Even defining the two terms—particularly “misinformation”—is quite hard, because if we were asked to give examples, everybody would draw the lines differently. Certainly, what the information ecosystem looks like and how it functions is important for us to think about and study, but it is very hard to use “misinformation” or “disinformation,” because people have politicized it, both on the left and the right.
Rauch: So it is a thing, but we no longer know how to talk about it. Renée?
Renée DiResta: My gripe with it is, I’ve hated the word “misinformation” for a long time now, and that’s because it misdiagnoses the problem: it implies that what we are doing is disagreeing about facts. In some cases, we are disagreeing about facts. But liberalism at least partly means defending the statement that not everything is ambiguous, that not everything is a collectively constructed truth. Vaccines do not cause autism. That is not actually a thing that we should hesitate to say at this point. But one thing that happens on social media is that claims that are unknowable in the moment are categorized as misinformation.
I’ve been beating the drum that propaganda and rumors are actually far better frames for discussing what we’re actually seeing. Propaganda is information with an agenda. It’s a word we’ve had for centuries. And then rumors are claims that are unknowable in the moment. This is when we don’t actually have an answer yet. Did Covid originate in a lab? We don’t actually know. We didn’t know two years ago. We don’t know now. When people imply that there is a known truth and that we disagree about it ... the inability to agree on “What is a fact?” is now a far bigger problem than “What is the fact?” We’re not having a discussion about the subject matter—we’re having a discussion about the way of knowing itself.
Rauch: That’s interesting. So the term has become so problematic that you would use these two more specific phrases—at least for most cases, since, presumably, not everything is a rumor or propaganda.
DiResta: Right, again, there are the things that are knowable, things for which we have long bodies of knowledge. For example, the Earth is not flat, and to say otherwise is plainly not true.
“The entire concept of the marketplace of ideas was that we would determine what was actually true through a series of conversations. That process of discussion and discovery was part and parcel of coming to consensus, solving collective problems, determining how to respond, or how to create policies. But that concept implies that you have a diversity of perspectives, and you’re going to argue in the same space. The structure of social media does not lend itself to that.” — Renée DiResta
But there are other areas—and I think the things that people focus on, particularly in the context of social media conversations, are not the things that are known and established. They’re focused on the things where we’re actually evaluating what to do about something, or what the social implications of an issue are, or what to do in that moment between knowing and not knowing.
Harbath: Another thing that is important to think about is that oftentimes, when we have this conversation, we’re just thinking about the content: what the thing says, what it depicts, what the image is. What we also need to be thinking about is who is posting it—the actor. Camille François, who was chief innovation officer at Graphika, has a great framework, the ABC framework: actor, behavior, content. All three of those are important to think about when studying this environment. When you see a lot of platforms taking down foreign interference, it’s actually not based on the content that they’re posting but on the fact that the actors are portraying themselves as someone they’re actually not. Are they spamming? Are they coordinating? Is it a bot? Stuff like that. So it’s also important to expand that out when we’re having this conversation.
Rauch: Berin, any thoughts?
Berin Szóka: So, I would ask why it matters. I think Renée’s book—which I can’t recommend highly enough—does a very good job of offering clarity for the conversation. Renée tells many stories about real people to tease out and provide examples of media theory that is otherwise impenetrable. So that is a way in which it matters what terms we use—because it is important to us to have moral conversations about this as citizens. But as a lawyer, the question I’m interested in is: Does it matter legally? In the U.S., it would matter legally if we were writing laws about this or if we were trying to put these concepts into legalistic terms.
I think we have to be very careful about the tendency to take a hyper-legalistic approach to applying those terms in the context of content moderation, because I don’t think we can operationalize them in the way that people expect when they hear clear definitions. So, it’s often said, “Well, the difference between misinformation and disinformation is intent. Disinformation is when you know that what you’re saying is false.” Now, in legal contexts, that matters very much—in a defamation context, that may indeed be one of the critical questions in a suit. But I think that it is a problem that, when we start talking about these things, people expect that we can operationalize concepts like this in a predictable, legalistic way and get predictable outcomes when, in fact, the scale at which we are applying these concepts is so vast—we’re talking about billions and billions of posts of content literally every day—that it means we can’t apply these concepts in a rigid way.
Rauch: Is there a better way to define them? Would you switch to other concepts? Would you refine them?
Szóka: I’ll give you a counter-example and then answer your question. The counter-example is Europe, where they actually have put these concepts into law. The European Union’s Digital Services Act is a way of dealing with these problems, and the E.U., hilariously, has a code on disinformation that preceded the Digital Services Act. The first thing that you’re taught to do as a lawyer is to look at what the definition is. So what does the code cover? The first footnote says disinformation includes misinformation and basically every other category. In other words, they did not answer this question. They just dumped everything into this broad term and left it to companies to sort out.
So, my answer to your question is: I don’t want to have to answer the question. It’s a real thing, and I think maybe the most important reason to have some clarity about it is because there is a political agenda that attempts to suggest that all of this is just a pretext for censoring speech that liberal elites don’t like.
Rauch: We’re 3-0 on this panel that although misinformation and disinformation are a thing, the terms are no longer as useful as they once were or as we would like them to be—and they have been, as they say in the academic world, problematized.
Question number two: Do we agree that this thing, which we’re not quite calling “misinformation” and “disinformation” as we once did, needs to be fought? Is it inconsistent with liberal democracy?
DiResta: I would argue that we actually fought it as part of liberal democracy throughout history. The entire concept of the marketplace of ideas was that we would determine what was actually true through a series of conversations. That process of discussion and discovery was part and parcel of coming to consensus, solving collective problems, determining how to respond, or how to create policies.
Rauch: But all very disaggregated. “Fought” implies there’s going to be a campaign, there’s going to be a focused, coordinated effort to stop this thing.
DiResta: Well, I don’t think that works. I mean, this is a chronic human condition. That’s where, again, the campaign against “misinformation,” as a term, does not make sense—the same way that, “We’re going to fight against rumors” doesn’t really make sense. Depending on what time in history you’re living in, “We’re going to fight against propaganda” maybe makes sense—but then the question becomes: How?
I think, right now, a lot of the campaigns should be about helping people to understand what propaganda looks like in this information ecosystem. We’ve seen counter-propaganda efforts in various epochs in history, according to the information architecture of the day. This is how we thought about it in the context of the Cold War, or World War I, or World War II. In those contexts, you can see efforts to make the public aware of the phenomenon, so that they are empowered to respond within their communities, as opposed to some sort of top-down campaign against a concept, which I don’t think works well.
Harbath: And I think that’s when the word “fought” applies more. It really depends on who we’re talking about, because there are campaigns that put out absolute falsehoods and misrepresent what is in reports. So, whether it’s Renée or others, the response will involve fighting against that and saying, “No, that is not what happened.”
But it’s different if you’re thinking about it from a tech company perspective or government perspective—in that case, I don’t think “fought” is the right term. Like Renée was saying, understanding, and how to think about amplification and the nuances around all of this ... that’s what the frame should be. Because it’s absolutely right that if people are spreading lies and rumors about you, you should be able to fight back—but I don’t think that the government should be stepping in to say that you can’t post this or that.
Rauch: Let me just push you on this a bit. I’m guessing there are a number of people at this conference who would say, “Look, preserving liberalism entails fighting disinformation. There are campaigns being waged against us by foreign and domestic adversaries who are attempting to overthrow the Constitution and the liberal order, and we have to wake up and no longer just count on the marketplace of ideas to sort it all out.” Would you disagree with that?
Harbath: No, I don’t disagree with that at all. But I think that the challenge of doing that is in the assumption that what to potentially take down is easily definable. But what we’re often seeing now, too, is foreign actors amplifying or just pouring gasoline on stuff that’s happening domestically. So how do you start to try to separate those things out? It gets much more complicated. The other thing I would say is that it’s absolutely appropriate that different organizations and groups should be thinking about how to fight disinformation and what that looks like. And then the question, again, is how.
Rauch: We’ll come to how, but it sounds as if you’re saying there is a goal here—there is a mission to be accomplished.
Harbath: Yes, I think there’s an understanding of what is actually happening. And then there’s trying to figure out what are the actual levers you should pull to fight it.
Rauch: Berin, any additional thoughts? Should we fight “misinformation” or “disinformation” or whatever we decide to call it?
Szóka: Uh, jain—which is my favorite German compound word, a compound of yes and no. It’s extremely useful. So, it depends what you mean. I mean, case by case, yeah. For example, Dominion Voting Systems fought disinformation about their company. They took Alex Jones to court. They won the largest defamation lawsuit in U.S. history. And they put Alex Jones—hopefully—out of business. They were fighting disinformation. That was good, and I think everyone should celebrate that, both as a particular victory and as a demonstration that the defamation system in the United States can, in fact, function, at least in some circumstances.
“This concept of censorship has essentially been turned into the idea that certain people have a right to use someone else's platform to speak, and they have a right to an audience, and that idea has become central to our culture wars, with one political party having embraced that concept after having opposed it for decades in the context of broadcast regulation, where they were against the fairness doctrine and now essentially want their own bizarro fairness doctrine for the internet, where they can say anything they want with no consequences, and any private company that tries to do anything about that is censoring them.” — Berin Szóka
But it depends what we mean, right? So, if we mean: Should we expect that there’s ever going to be some point where we can achieve some sort of stasis where we don’t have that problem, that’s not going to happen. And it would be inconsistent with liberalism to expect that. I was just re-reading Edmund Fawcett’s book, Liberalism, in which he suggests that one of the four core essences of liberalism is an acceptance of conflict—that conflict is inevitable and we can’t suppress it. What we can do is try to structure it and channel it in a better direction. Here, I would once again point you to Renée, who writes very effectively about the marketplace of ideas image, and suggests that that’s not really the world that we’re living in. How do you describe the digital environment, if not a marketplace of ideas?
DiResta: So, let me take on two points. In past periods—particularly around conflicts, the Cold War, the World Wars—you did see the U.S. government intervening in counter-propaganda efforts, both from the standpoint of counter-messaging but also from the standpoint of creating educational capacity for the public to itself respond. And a lot of times they were not in there getting in the weeds of the specific fact checks—they were saying, “Here is what this looks like, and here is how you can think about it.” This was what the Active Measures Working Group did, for example—kind of, “We are going to expose the operation; we are not going to sit here countering all of this point by point.” The Institute for Propaganda Analysis in the 1930s—more of an academic model—did the same thing by teaching people how to recognize rhetoric and tropes, particularly ones indicative of fascism. This was during the time of Father Coughlin and the rise of fascism in Europe. And what you see there, too, is creating tools for empowerment, not playing Whac-A-Mole with the specific fact checks—the goal being that then you’ve empowered the members of the community to compete in the marketplace of ideas by giving them the skills to recognize it and then respond to it. So that’s where I think you have this two-level capacity: what the role of government is, versus educating the community so that the community can do that kind of response.
Where social media is interesting and different, though, is the structure of social networks, assembled over time by the “people you may know” algorithms. In the early period on Facebook, you came there with your own friends, and then gradually, in order to increase your time on the site, they wanted you to make new friends ... you see a pivot from engaging with people you actually know in real life around last night’s party to “Hey, you should go join this group over here. We’re going to show you interests. We’re going to build you an interest graph.” And then you see the siloing, the fragmentation around identity and alignment and a bunch of other criteria that an algorithm can use to shunt you into one community or another. And once you’re in those networks, the prevalence of the content that you see reinforces the things that it keyed off of originally. So it’s just a structurally different information environment.

The marketplace of ideas implies that you have a diversity of perspectives, and you’re going to argue in the same space. The structure of social media does not lend itself to that, with the exception of one platform, which is Twitter, which, ironically, then also incentivizes not so much discussion but gladiatorial combat, if you will, where you’re not trying to win somebody over and persuade them—you’re trying to either intimidate them out of the conversation entirely by setting a mob after them, or you’re just going to be antagonistic jerks going back and forth.
Szóka: My point in getting you to describe that better than I could was just to illustrate that those are all architectural choices that are made about shaping that environment. Larry Lessig famously said that code is law, that the code that shapes digital services really is the law that governs those services, more than law created by the state. And that’s, I think, the conversation that we should be having about how those services are structured in the United States.
In the United States, those are choices for private companies to make. Those are architectural decisions about the exercise of editorial judgment. Some of them might be business practices that could be regulated. Liberalism, of course, is not limited to the United States—in other areas of the world, they’re making other decisions about how to deal with those.
Rauch: Before we get to policy and law and Europe, I do want to go to where we’ve been headed—it’s the culmination of the point that you are all making, which is: Okay, so, censorship. What do we mean by censorship in this context? Is it the word we ought to be using for what we’re talking about? The topic says “fighting misinformation without censorship”—what is censorship, in this context? Is it bad? Berin, you’re welcome to go first.
Szóka: I took five years of Latin and love the Roman world. The “censor” was one of the offices in the Roman Republic whose job it was to enforce morals. That’s what the censor did. It wasn’t just suppressing speech—it was like the moral police in Iran today. The censor was responsible for sanctioning people for moral violations, and he was exercising the authority of the Republic. It was a public matter that was decided by the state. That’s what censorship is. This word is now used by people, sometimes innocently, to refer to their post being taken down. They see that as being censored. That’s a very sloppy way of thinking about a term that has a very particular meaning in the United States, which has to do with the government taking down speech.
Now there might be circumstances when the government coerces private actors to take down speech, and that would be subject to the First Amendment. The problem is that the allegations that have been made thus far about censorship by private actors ... no one has provided evidence that such instances rise to that level. If they did, I would be open to seeing that as constituting a First Amendment violation. Instead, this concept has essentially been turned into the idea that certain people have a right to use someone else's platform to speak, and they have a right to an audience, and that idea has become central to our culture wars, with one political party having embraced that concept after having opposed it for decades in the context of broadcast regulation, where they were against the fairness doctrine and now essentially want their own bizarro fairness doctrine for the internet, where they can say anything they want with no consequences, and any private company that tries to do anything about that is censoring them.
Rauch: So, even in principle, is there anything Facebook, a private company, could do that, in your view, would qualify as censorship?
Szóka: Yes, if the government were behind that decision. If they were coercing a private company to take down speech that the private company did not want to take down, that would be censorship. Censorship of this sort is something that can be done through a private company. It happens in other countries—the Indian government, the Russian government. I mean, it’s a real problem around the world. I don’t see any evidence yet that it’s happening in the United States.
This was just litigated: The Supreme Court just decided a case about this called Murthy. The allegation was that, when tech companies asked the surgeon general for examples of misinformation about Covid, merely responding to those questions essentially amounted to a form of coercion or an exercise of state power. And the Supreme Court said that that’s not adequate, that you don’t have standing to sue in those circumstances. And they left open the question of where the line gets drawn in the future. So we’ll see.
Rauch: So, if Mark Zuckerberg decided tomorrow that from now on, his policy—which, by the way, he doesn’t need to announce—is to take down anything that is pro-MAGA, that wouldn’t be censorship.
Szóka: By definition, no.
Rauch: Katie, what do you think about this?
Harbath: I agree. By the way, it occurs to me that we’re disagreeing with every word in the title of this panel, which I kind of enjoy.
Szóka: We haven’t gotten to “with” yet.
Harbath: But yeah, “censorship” is another one of those terms that has become politicized and way too broad. It can mean how different entities handle content on their sites, or the different rights private entities have versus the government when it comes to coercion. Just thinking about content moderation in general, the amplification of content, the labeling of content ... there are so many different things now that are done and decided on by companies that I don’t think fit within the technical definition of censorship.
Rauch: So, Renée, suppose someone says to you, “Well, the First Amendment lawyers are right. Technically, it’s not censorship if Facebook decides not to publish opinions on one side of an issue or another. But let’s face it, in the real world, there are only a small number of these very large platforms. It has the same social effect as censorship. It amounts to the same thing. We should call it censorship, because in practice, that’s what it is.” What do you think about that?
DiResta: Well, I mean, that’s where the public conversation has gone. I don’t think we’re coming back from that. We can debate the legalistic nature of the term. I used to do that myself in 2018 when I would have these fights. Now I just say, okay, what we’re actually talking about is people who feel that this is an affront to freedom of expression, even if they choose to use phrases like “freedom of speech,” which have a legal connotation or a constitutional connotation. What they’re upset about is a feeling that they are somehow not receiving the same right to expression as other people. So what I have generally tried to do—again, since around 2018—is just engage it via the values debate, as opposed to the legal argument, though I agree with my two panelists that, legally speaking, it’s the wrong word.
I’ve had a very interesting experience with this. I was declared the face of the censorship-industrial complex in sworn testimony to Congress. I have been outspoken on these issues for a very long time—the “freedom of speech, not freedom of reach” argument that Aza Raskin and I started advancing in 2018 was basically that platforms should in fact continue to carry various types of speech that were being moderated, but that it didn’t have to be promoted; that there was a difference, from an architectural standpoint, between allowing something to exist on a platform and proactively recommending it, or allowing a group to exist on a platform and accepting ad money from that group to promote their content. I saw that as the dividing line, the differentiation between carrying speech and amplifying speech. And, again, in the architecture of the day, that is the thing that most of these people are actually fighting about. What they are actually looking for is not just that a platform carry their content—it’s that the platform amplify their content.
There’s a lot of ref-working around that. What happens on social platforms is that all content is curated, all content is ranked, all content is sorted in some way before you see it—which means you’re not seeing a reverse chronological feed of all of your friends’ posts. Interestingly, people personally experience that as censorship. In 2018, when I would talk to people on Twitter about this, I would ask: “Why do you think you’re being censored?” These are accounts with, like, 20 followers. “Why do you think the platform has singled you out, person with 20 followers?” And they would say, “Because my friends don’t see all of my posts.” So curation and ranking and engagement-based metrics, the actual mechanics of social media, were inscrutable to them, and they interpreted that lack of reach as censorship.
This was then picked up by people who wanted to make that a winning issue, the propaganda campaign around censorship—which, again, I argued for, let it stay up and label it; let it stay up and throttle it. In some cases, labeling got redefined to censorship; fact checking got redefined to censorship; throttling got redefined to censorship. And any structural impediment or barrier to the reach they thought they were owed was redefined as censorship. That’s where that word has gone in the last three years, because it’s very effective as a tool of power and as a tool to galvanize a base at this point.
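To make the distinction DiResta draws concrete, here is a minimal, hypothetical sketch in Python of the difference between removing a post, throttling it (downranking), and simply ranking a finite feed. The Post fields, the weighting factor, and the rank_feed function are invented for illustration; they are not any platform’s actual code or API.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    engagement: float        # however the platform scores likely engagement
    labeled: bool = False    # e.g., carries a fact-check label but stays up
    throttled: bool = False  # reduced reach ("downranked"), not removed

def rank_feed(posts, limit=10):
    """Return the posts a user actually sees.

    Everything passed in is still 'carried' by the platform; a throttled post
    is only weighted down in the ordering, whereas removal would mean it never
    appears in the candidate list at all.
    """
    def score(p: Post) -> float:
        weight = 0.1 if p.throttled else 1.0  # hypothetical downranking factor
        return p.engagement * weight
    return sorted(posts, key=score, reverse=True)[:limit]
```

Because every feed is ranked and truncated in roughly this way, a post can lose reach without anyone removing it, which is the mechanic DiResta describes users experiencing as censorship.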
Rauch: So where we are, apparently, is that all three of the active words in this title have been politicized and problematized. But it sounds like there is general consensus up here that there is something to be worried about happening online involving information; that measures do need to be taken in some way to cope with it, but they shouldn’t be heavy-handed, top-down measures; and that the concept of censorship, per se, is no longer the kind of useful concept that it might have seemed to First Amendment scholars 10 years ago.
So, maybe the next logical place to go is content moderation. That’s another way of talking about it. It’s, of course, not censorship, necessarily, to moderate content—but that, too, has been problematized. What are your feelings about the content moderation argument? What works, what doesn’t work, or do we have to give up on that, too?
Harbath: I think that there’s a lot of stuff for us to unpack here. The first thing, when we’re talking about the policies that platforms have, is that what is or is not allowed on the platform can be about both the content and the behavior. So, a lot of times people would be like, “Facebook censored me. They took down my content because I was talking about x, y, and z issue.” And we’d look into it. And we concluded, “No, you were posting it in 100 groups over the period of five minutes. That’s also known as spam.” So we were like, “We actually took action because it was spammy behavior, and that’s what the system saw. It wasn’t necessarily about what you actually said.”
Platforms have these policies for how they try to define some of this stuff. Then, what often gets lost is how well these platforms can enforce those policies. That’s where it can become challenging, because it’s not only the scale at which it’s happening—it’s the number of languages that it’s happening in. It is much easier to do this with text-based content than with video or audio or something that is being livestreamed. And many times people will be like, “Well, wait a second. Facebook has this policy that says x, y, and z is not allowed, but I’m finding it on the platform. So Facebook is not enforcing its policies.” Actually, no, maybe we just didn’t catch that one. Or there’s a lot of borderline content that we’ll talk about, where, again, if I showed some borderline stuff here in the room, I bet there would be differing opinions about whether or not it violated a platform’s policies. And one of the things that I think platforms did a very bad job of in the early days was making it seem like we were 100% accurate when we had these policies and said, “Oh, we are not going to allow hate speech on our platform,” and not actually talking about the challenges of being able to do that.
“If people are spreading lies and rumors about you, you should be able to fight back—but I don’t think that the government should be stepping in to say that you can’t post this or that.” — Katie Harbath
And then also we’ve moved beyond the binary leave-it-up-or-take-it-down conversation. That does still happen, but a lot more of it now has to do with how much amplification something gets. Does it get shown to people if you’re following the account? Does it get shown to people if you’re not following the account but are interested in that particular topic? Is it in a group format versus a page, versus a personal profile? The labels that Renée was talking about: you saw a lot of platforms doing that around 2020 with both Covid and election content, but now you’re seeing them pull back on a lot of that because people were actually starting to get annoyed at how many labels were in their feed. And if you were to start labeling everything, your entire phone feed would just be labels.
And then there’s the question of: How effective are those labels? Does someone seeing something that doesn’t have a label make them think that it’s either true or false? Do they just start skipping over that content? All of this complicates matters further.
Rauch: Plus even throttling now is treated as censorship.
Harbath: Even to that point, too, one of the things I remember with this on Facebook, and you see it with any platform that has a feed-like system, is that as more people and more businesses get on these platforms, there’s a finite amount of content that any of us can consume in a day, and it will always get to a point where you could sit on that app all day, every day, and you still wouldn’t be able to consume all the content that you’re potentially eligible to see. So the platforms had to start configuring the algorithms to try to show you the things they thought you were interested in. And a lot of people were like, “Wait a second, I’m not getting as much reach anymore,” and they assumed it was Facebook purposely deciding not to show that to people. Sometimes it was, but sometimes it was that you’re competing against a lot more content, so there are a lot more factors going into deciding whether or not somebody sees what you posted.
Rauch: So, I’m your libertarian friend, and I say, “Look, Berin, why bother with trying to regulate what people say? Just let them say what they want to say. Let’s have free speech.”
Szóka: It doesn’t work. I mean, the platforms would be unbearable. Renée, do you want to answer that? Because you have a great description in your book about what the gladiatorial arena looks like and how we wouldn’t tolerate in the real world people running down the street screaming after each other.
DiResta: I am not a First Amendment scholar or a lawyer, but one of the things that I’ve been really captivated by is the idea of time, place, and manner restrictions related to civil unrest in city centers and places like that. If you had hordes of people attacking the main character in your physical space, like in your city park, we would see that as an extraordinary breach of norms. That would be seen for what it is, which is bullying, harassment, and attack. You can’t chase your neighbor around with pitchforks or bullhorns or anything else. There are rules that we have in physical space that don’t necessarily translate well online, but in a sense, what a lot of the moderation rules are trying to do is engender particular types of norms. You have to have that, otherwise it becomes an unusable space. That’s the reality of it. And every single platform has confronted this and realized it at some point.
Szóka: If you were organizing a dinner party, or if you were a bartender, you would just expel people for misbehaving. I mean, this is just not hard to see. Basic principles of decency and how we interact with each other have to have some effect online, and they can’t be implemented exactly the same way. They have to be implemented at scale and quickly and imperfectly. What Katie described is what Mike Masnick at Techdirt has referred to as Masnick’s Impossibility Theorem: that it is impossible to moderate content at scale in a way that is consistent and predictable. That’s what I was referring to earlier about being legalistically satisfying. It just can’t be done. A very large part of the problem in this debate is the expectation that our moderation efforts will be perfect and consistent, and that if they’re not perfect and consistent, then that’s censorship and should be subject to some kind of legal remedy.
The second thing I would point out is that the point Renée made about the lack of understanding of how these services work is not new. That’s true for any technology. Arthur C. Clarke, the great science fiction writer, called this Clarke’s Third Law: that any sufficiently advanced technology is indistinguishable from magic. And while we might think about magic and Harry Potter in very positive terms, mostly, magic is something people got burned at the stake for. Magic is a dark, unnatural force that, throughout most of human history, caused bad things to happen and caused people to riot and to take out their vengeance on someone. And that’s essentially what’s happening with content moderation. People don’t understand why it’s necessary, how it works in general, how it works in a particular circumstance, and so they are very mad about it. And when someone tells them that they are being censored, and there’s a cabal behind it, and the cabal is being run out of San Francisco, and there are people who don’t share their values, or it’s being run by the “CIA mom” right here next to me, then you know they’re going to go after her and try to shut her down and maybe show up at her house and threaten violence.
Rauch: So there’s a wonderful throwaway in your book, Renée, about how InfoWars has terms of service. Alex Jones has all the same trust and safety and behavioral constraints on his website that everyone else does.
Szóka: And, if I may, the only thing that I would have added to your book is that not only does InfoWars have that, but all of those alt-tech sites have that. And what’s really remarkable is they’re even vaguer—they retain even more discretion. And some of them, and you note this, will go out of their way to say, “Well, you know, we’re a private business, and we get to make our own decisions. And if you don’t like it, tough.” I mean, that’s how this works. There is a problem of unaccountability here. And the response to that in the legal-theoretical scholarly community is essentially what’s called digital constitutionalism: the idea that there has to be some way of governing this private power. It might not be state power—it’s not the First Amendment in the United States. In Europe, the response might be legal, in part. But there has to be some framework that governs the exercise of that power. I think the interesting question is: What does that framework look like? Who are the other decision makers? How do you build in accountability?
So our friend Kate Klonick, for example, embedded as a law professor inside Facebook and helped them develop what is now the Oversight Board, which provides an outside place to go to raise questions about how a policy was implemented. It functions as a kind of Supreme Court for Facebook and Meta’s other services. That’s one way to deal with the problem. Meta right now is experimenting with a different layer of the problem. The Oversight Board checks decisions after the fact; Meta is experimenting right now with involving community members in voting on, or being consulted about, changes to the policies up front. Those are concrete ways in which you can bring more accountability to this process. And I think those are good things. But that’s where we should be driving this conversation.
And to return to your original question, we’ve been talking about users, about people who are complaining about content moderation, about the platforms themselves. We haven’t talked about the other side of the market, which is advertisers. I mean, these services are private, profit-maximizing services whose business depends on advertising, except for X, which has driven all the advertisers away for the very simple reason that advertisers who are not Mike Lindell and MyPillow don’t want their brands shown next to hate speech, people encouraging others to commit suicide, and the rest of the list of horrible things out there that would flood these services if it were not for content moderation.
So, that’s why we need it, to answer your earlier question: users don’t want to see that content, and brands don’t want their products shown next to that content. And they’ve taken principled positions to oppose that kind of content, and they want to be assured that their content is not going to show up next to that sort of material. So that’s the mix of forces that are at work here.
And you know what? From a liberal perspective, that’s good. That’s the market at work. You have civil society getting involved. You have economic forces getting involved. We find new ways to check power, and we iterate on that. That is liberalism at work. That’s the opposite of what is currently happening with people who are trying to shut down the process of content moderation.
Rauch: Let’s push back on that point a little bit. I’ll impersonate someone. So, it would be nice to live in a world where markets would solve this problem, and we’d have lots of things like intermediate layers, and people could choose their own filters, and we’d have lots of different communities and just leave it alone. But that’s not the world we live in. So, this argument goes, there’s going to have to be more than that, there’s going to have to be some sort of regulation, there’s going to have to be guidelines, and it’s going to have to be top down, and it doesn’t need to look like Europe, but it should look like something else. Anyone have thoughts on that?
Harbath: Well, this is where I like Lessig’s framework that was brought up earlier. So, not only is there the code layer, but there’s also the role of regulation, the role of markets, and then the role of societal norms. We’re in the process right now—and we have been basically since 2016—of redefining all of those and what that looks like. And we’re still in the middle of that very process. So you’re going to need movement, I think, in all of those different areas. We can’t rely on just one to actually solve the problem.
Rauch: Should there be a bill, a law, regulation? In your view?
Harbath: My view is that where we should be moving this conversation is around transparency, around not only what the platforms are doing, but what governments are doing. And then the other is trying to make sure we’re building mechanisms for checks and balances, because nobody wants just the platforms in charge. Nobody wants just the governments in charge. Nobody wants any one single entity in charge. But we need to have ways to make sure that they can hold one another accountable for the decisions that they’re making. And, again, we’re in the middle of the process.
Rauch: So, Renée, one of the wonderful things about your book is that not only do you resist the temptation to have a big villain, you resist the temptation to have a big solution. I will tell the audience, there’s so much richness in the policy section of Renée’s book, but it’s not any one thing. It’s lots of different layers of society. It involves everything from civic education to decentralizing platforms to different forms of filters to greater transparency. We won’t get into it all. Just buy the book and read it, because it’s the right way to think about it, which is a lot of different stuff by different actors.
“It sounds like there is general consensus up here that there is something to be worried about happening online involving information; that measures do need to be taken in some way to cope with it, but they shouldn’t be heavy-handed, top-down measures; and that the concept of censorship, per se, is no longer the kind of useful concept that it might have seemed to First Amendment scholars 10 years ago.” — Jonathan Rauch
Instead, I want to wrap up with the other dimension that is going on here. You have been in the bullseye of this. This is what’s become known as anti-anti-disinformation: efforts by political actors, using both social media itself and the government in the form of Congress and congressional oversight, to target individuals and private organizations that are attempting to monitor, identify, and call out misinformation, disinformation, rumors, propaganda, whatever you want to call it. You have experienced this head on. I think it’s fair to say that the organization that you helped to found and for which you work, the Stanford Internet Observatory, is either going out of business or going through some kind of profound change that does not involve you as a result of this. What is happening with anti-anti-disinformation, and is it going to make it all but impossible for platforms to continue to do whatever it is that they can do to push back and create a safe environment?
DiResta: So, for those who don’t know, the subpoenas began to come down for the Stanford Internet Observatory in March of 2023. Jim Jordan’s weaponization committee and Dan Bishop’s Homeland Security committee subpoenaed all of our communications with the government, the executive branch, or the tech platforms going back to 2015 (except we started in 2019). It turns out the interesting dynamic there was that, ostensibly, this was to investigate this vast cabal by which the government was telling us to tell platforms to take things down. Now, there’s been zero evidence found of that. They then started to move the goalposts when they didn’t find what they wanted. The investigations are still ongoing. I think Jordan sent another letter in response to the announcement that SIO was dissolving or refocusing, basically reiterating that they were still under investigation, and, you know, he was still overseeing them.
The piece of this that is interesting, though, is that I think we were the first to get the letter and the subpoena, but about 100 of these have gone out. This is a very, very broad, expansive campaign and a broad, expansive witch hunt. And in addition to us, the platforms also got pulled in. I don’t know 100% if they got subpoenaed—they had to come in for voluntary interviews. Their employees had to come in. What happens when you do a voluntary interview? For those who don’t know: they put you in a room with a video camera. They ask you questions for seven hours. You don’t get the transcript. You don’t get the video. They do. You can potentially intuit what happens with those transcripts and those videos, which is, they take the two sentences that they want out of them, they re-contextualize them, and they write these reports that are not rooted in reality—very, very cherry-picked stuff. Or they leak them—they leak selective excerpts with the intent to send a mob after you. This is a huge problem, because most of the work that we did—studying the election or the vaccine conversation—was done by students, which means you also have to negotiate redactions and things like that, because you don’t want mobs coming after 20-year-olds.
In addition to this, though, the lawfare then begins, and material that we turned over to Jordan under subpoena was turned over to Stephen Miller and America First Legal, which is suing us. Normally, in order to get documents, the lawyers have to fight and you go through discovery and so on. But when you use your subpoena power to obtain documents, and then they wind up in the hands of the people suing you, that’s something of an abrogation of norms—it turns out, though, apparently not illegal. And the one thing that a lot of these actors have in common is that they were the actors who tried not to certify the 2020 election. I think it’s very, very important to note that the congressman doing the subpoenaing, the attorneys general doing the lawfare, Stephen Miller, and the right-wing lawsuit mills all have one thing in common: the effort to overturn the 2020 election, or to not certify the 2020 election. And a big part of the reason to do this is that they see it as positioning them in the best possible way for 2024, because the teams that investigated false rumors and propaganda and election misinformation in 2020 are no longer doing that, and that is incredibly advantageous to them.
But now that this has unfortunately worked, you’re going to see it happen over and over and over again: anyone doing any kind of work that is considered unpalatable to people who have gavel power in Congress is potentially going to find themselves in the hot seat the way that we have. The platforms have employees to worry about. They also don’t want to be in this situation. And so backing off to avoid being hauled in front of Congress constantly is a thing that has happened to them as well. It imposes profound costs, both physical and financial.
Szóka: This is just one part of the lawfare involved here. So, she’s just described the war on research. There’s also just been a war to bend the way that these services work, not only against content moderation, but also to steer curation in favor of the MAGA agenda. And this started, I would say, in March of 2016—Renée tells this story very well in her book.
It was “trending topics”-gate: Twitter had a very successful trending topics box, and Facebook tried to copy that feature. In order to make sure that there wasn’t complete lunacy in the trending topics for any particular category, they had human moderators who were checking what the algorithms would pick out as the trending topic, and a bunch of Republicans got very mad. One Republican made enough of a fuss about it with a single letter demanding to know, “Who are you hiring to do your human curation here? And how are you assuring ideological balance? And how can you assure us that these are not all San Francisco elite liberals?” and so on. Facebook caved, and not unreasonably so, under pressure. I mean, it’s understandable why they did. But the change that they made to get Republicans off their backs was to remove the human content moderators. So that feature, during the critical part of the election, was then just done algorithmically and was open to manipulation.
DiResta: I was reading the science section, and a witch blog that was talking about Mercury being in retrograde was a top trending story in science. And I had a friend in Facebook data science, and I sent it to him, and I was like, “Maybe the pure algorithmic thing is just not working.”
Szóka: And that’s essentially what happened on politics. And that had a very, very direct political result and helped Trump win the White House in 2016. That is simply an example, but it was the original example that inspired Republicans to continue waging lawfare against private companies to get what they wanted and to work the refs in their favor. That continues, and it will not stop.
Harbath: And it actually goes back in history even further, with the original Google-bombing. If any of you remember the early 2000s and what happened if you Googled “miserable failure” … the George W. Bush White House link came up. And then if you Googled “Rick Santorum,” some very unfortunate websites against the senator came up, and Republicans were very upset. There were always underground rumblings from Republicans, just beneath the surface, about how platforms were deciding how to rank content, and it would flare up now and then. When Santorum was running in 2012, it kind of popped up. When Obama ran in 2008 and a lot of people from the tech industry went to go help him run his campaign, there was a lot of that. And then, yeah, the trending topics thing really blew it out into the open.
Rauch: So, looking ahead in terms of where we are today, if I’m a university, and someone comes to me and says, “I want to do research on disinformation, misinformation online. I want to look at what’s going up,” or if I’m a social media company, and someone says, “I want to do some content moderation,” don’t I just say, “Those days are over, that stuff is too hot to handle. We will get sued, we will get denounced, we will get subpoenaed. We’re not doing this anymore”? Is that where we are?
Harbath: The vast majority are in that spot right now saying that it’s just not worth it. I write a lot about how platforms in particular can try to run from politics, but they can’t hide. And so they may be trying to get away from it, but I think it is going to constantly keep catching up to them, because people want to talk about politics, particularly in an election season, and you can’t just completely disengage yourself from it.
Szóka: If I can just be a little more concrete about what I think you’re saying, you can’t run these services without content moderation. That’s literally not an option. And if you tried to do it, the services would become a cesspool that people wouldn’t want to participate in and that advertisers wouldn’t want to advertise on, which is what Twitter is turning into. So that’s really not a viable option. What Meta has instead done has been to just basically try to turn off politics. And I think that’s what Katie is referring to—that that’s also very frustrating to people.
There’s a legal backdrop behind this, which is that the Digital Services Act in Europe has gone into force. I’ll spare you all the details, but the most important part is that it requires the covered large platforms to assess and mitigate their systemic risks. And those systemic risks include dealing with a variety of problematic forms of content, some of which are things we would all agree are harmful, some of which, though, are totally undefined, including civic discourse and electoral processes. I don’t know what that means, but I know what Donald Trump thinks that interference with electoral processes and civic discourse means, which is just a way of saying that in building these legal responses in Europe, and especially in other countries that are trying to copy Europe, those countries need to be thinking much more carefully about what these legal tools would be used for in the hands of authoritarians who are on the rise in Europe and are certainly in control of the apparatus of government in countries that are copying or purporting to copy the Digital Services Act. So this lawfare is not unique to the United States—it’s arriving everywhere else around the world. This is just a core part of the culture war that is 21st century politics.
Rauch: I’m forced to wonder if all of our nice talk about the things that nice liberals want to do to help clean up the internet environment is now hopelessly out of date because of all these other obstacles, anti-anti-disinformation and so forth. I hope that’s not the case.