Can AI Heal the Divides That Social Media Created?
AI has generated very promising results in Taiwan and elsewhere, helping to build consensus on deeply polarizing issues

Today, we’re thrilled to share the full video and lightly edited transcript of LibCon2025’s breakout panel, “AI and Liberal Democracy.” Deftly moderated by Mike Masnick, founder and editor of Techdirt, the panel featured Saffron Huang, researcher of social impacts at Anthropic and co-founder of the Collective Intelligence Project; Clay Shirky, vice provost for AI and technology in education at New York University; and Puja Ohlhaver, researcher at GETTING-Plurality at Harvard University’s Allen Lab for Democracy Renovation.
(We apologize for the sound quality of this audio, but we have extracted a very accurate transcript, which will help you follow the conversation.)
To explore other sessions from the last two ISMA conferences—LibCon2024 and LibCon2025—go here.
Mike Masnick: Thanks, everyone, for joining us. I think this will be a very interesting discussion, based on the pre-discussions that we’ve had. Just to start out with some level-setting, there have been all these discussions about the role of AI and, more broadly, the tech industry, and how it impacts democracy today. There is an argument going around that tech in the last few years has actually been almost anti-democratic, and has been part of the reason why democracy is under threat, if it still exists at all. I fear that kind of thinking, and that idea that tech is inherently anti-democratic—in fact, I think we really need to be thinking about where tech and democracy connect to each other in important ways. You can’t think about those things as separate—technology and democracy are deeply, deeply intertwined. And that is true of AI technology as well.
One of the things that we’re discovering right now is that when any sort of technology is controllable, whether it’s by billionaires or companies or the government—and right now there’s an argument that those three things are the same, I was going to say “for better or for worse,” but it’s for worse—that is where the problems come in. But when technology is actually empowering for the users, it becomes essential to democracy.
Audrey Tang, the former digital minister of Taiwan who’s always very thoughtful on this stuff, made a comment years ago at a conference saying that the internet and democracy are not two things, they are one and the same. In Taiwan, specifically, she was talking about the timing of when the internet and democracy arrived and how they’re now viewed as the same thing. And I think that’s true of us as well: Used properly, it is a very empowering tool for democracy; but used poorly, or used under the control of billionaires, companies, or governments, it could be very problematic. Figuring out that balance is really important.
I’m going to start with a question for each of [the panelists], some of whom may disagree with what I just said, which is fine, to give them a chance to share their views on these things, and then we’ll go into a bigger discussion.
I’m going to start in the middle with Saffron. When we were discussing things to talk about, you sent over a piece that you had written a few months ago for Noema Magazine, which was one of the most fascinating things I’ve read in a long time. It really got me thinking, and I’ve been telling people about it, so I want to start out by asking you about it.
The premise you had in that article was that, as people are thinking about AI and what its impacts will be, there’s been a lot of discussion about the after-effects: if AI destroys everybody’s job we’re going to need something like UBI (universal basic income). You make a very interesting argument in that article that that’s maybe the wrong way to think about it, that that is a redistribution effort that comes after the problems have come about. But what if we were thinking about a pre-distribution—the phrase that you use—to give people more access to the technology themselves at the beginning? So I wanted you to talk about that a little bit in terms of how you think that framing impacts the question of where AI and liberal democracy intersect.
Saffron Huang: Thanks so much, Mike. At the end of the piece I talk about how a lot of the initial ideas that people are throwing around for how we cope with AI’s potential impact on the economy or on labor markets don’t think about this question of power or don’t really see it as an opportunity to actually benefit more people. A lot of the tech leaders maybe throw around the idea of UBI. Or there’s this idea of the windfall clause, which is basically a proposed opt-in legal mechanism: say an AI company, once it gets to a certain size—and the figures that they throw around are like 1% of world GDP or something, which is already insane and bigger than the biggest monopolies we’ve seen in the past—they will then donate a certain proportion of their profits to charity in some way to help out the rest of the world.
I see UBI as similar, at least in the way that people conceive of them right now. They’re like, “Okay, well, I guess everyone will just be a welfare state dependent,” and this is really problematic for a lot of reasons. What are the incentives of the companies to continue honoring that contract, if it’s even necessary to? And what does this do to people’s role in society as citizens, as feeling like they can contribute, and having any say in their material environment? Being part of our economy, and making decisions that are relevant to it, are ways that people vote, in a sense, on what they want to see in the world.
I find those approaches very disempowering, and the thing that I and my co-author were proposing was to think about how we can make sure that, earlier on, people have access—educational access to things like AI literacy, the financial capital to access AI, and also on a more collective or city or government scale, thinking about financial instruments to share the wealth of AI with citizens and having more cooperative ownership arrangements.
One thing here is that a lot of problems right now, at least in western countries like this one, around scarcity and inequality … technological revolutions can, depending on how they play out, really exacerbate that. But they can also be a way to tip the playing field and adjust the direction that society is going in. AI is potentially a way of sticking a lever in the current situation and trying to move things a little bit, just in terms of where power is going to lie in the future. So thinking about these mechanisms, and thinking about these ways we can get ahead of that, might hopefully lead to better outcomes.
Masnick: Clay, there’s a theme that is going to come up a lot in this discussion—it certainly came up a lot in our planning discussion—which is that the technology can be used for good or bad, or could create good or bad situations. And one of the things that you raised is a fear that AI is going to guide people into smaller and smaller filter bubbles. We’re all going to end up in our own little worlds with the AI telling us that we’re always right and whatever we think must be true, because the AI has this habit often of, as it’s commonly referred to, “glazing” people. I know that you’ve been thinking a little bit about what we do about that, so as an initial provocation, how do you combat that world where everybody lives in their own bubble, which I think is something that people are very concerned about as it then relates more broadly to democracy?
Clay Shirky: I have less on what we do about that than characterizing the problem right now, unfortunately. But the Enlightenment conception of human beings as these rational, optimizing truth-seekers starts to take body-blow after body-blow in the second half of the 20th century as you get psychology, sociology, economics, constantly demonstrating that we are emotional, satisficing, motivated reasoners. We are, as a species, given to believing things that affirm our viewpoints more than we believe things that contradict our viewpoints, without regard to factual accuracy. And [that also] delivers a body-blow to democracy: It’s no longer this kind of Hayekian aggregation of well-thought-through policy preferences by highly engaged individuals; it’s a way of saying to politicians, “You do it, I don’t want to get into a fight over allocation of scarce resources but somebody’s got to be on my side, so I say it’s you.” And this system, if it works, keeps politics from sliding into either authoritarianism or anarchy, which is to say, sliding into the Scylla and Charybdis of the state having too much power or the state having too little power.
So what happens when you drop AI into the middle of that? The fateful choice with AI—the current generation of generative AI tools since roughly the end of 2022—was to do the last 10% tweak on the language model, something called reinforcement learning from human feedback. We show people: “What do you like better?” “Do you like this one better or do you like that one better?” over and over and over at incredible scale to gradually tune the tools to give answers that people like. And if we were rational, optimizing truth-seekers, that would have led these tools towards more factual accuracy, and more saying, “I don’t know the answer” or “there are competing answers.” But because we’re not those people, we get glazing and sycophancy instead.
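To make the mechanism Shirky is describing concrete, here is a toy sketch in Python of the pairwise-preference step. The two candidate answers, the simulated raters, and the update rule are all hypothetical illustrations, not any lab’s actual training pipeline; the point is simply that repeated “which do you like better?” choices become a score the system is then tuned to maximize.

```python
# A toy sketch (not any lab's actual RLHF pipeline) of the pairwise-preference
# step: raters repeatedly pick which of two answers they like better, and each
# pick nudges a learned score. Answers, raters, and the update rule are hypothetical.
import math
import random

random.seed(0)
scores = {"hedged, factual answer": 0.0, "flattering, confident answer": 0.0}

def update(winner: str, loser: str, lr: float = 0.1) -> None:
    """Logistic (Bradley-Terry style) update toward the rater's stated preference."""
    p_win = 1 / (1 + math.exp(scores[loser] - scores[winner]))
    scores[winner] += lr * (1 - p_win)
    scores[loser] -= lr * (1 - p_win)

# Simulate raters who, 80% of the time, prefer the answer that flatters them.
for _ in range(1000):
    if random.random() < 0.8:
        update("flattering, confident answer", "hedged, factual answer")
    else:
        update("hedged, factual answer", "flattering, confident answer")

print(scores)  # the flattering answer ends up with the higher learned score
```

If the simulated raters mostly prefer flattery, the flattering answer ends up with the higher learned score, which is the dynamic behind the “glazing” the panel discusses.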
So what you’ve got with AI currently is this way of building your own hermetically sealed filter bubble. What’s coming is AI for groups. In my work on social media—and I’ve seen it over and over—Silicon Valley will ship a tool optimized for individuals, underestimate the effects when groups start using it, and then those effects reveal themselves. AI right now is this thing I use behind my back, and then I say, “Lo and behold, I asked ChatGPT (or whatever), and here’s the answer.” But with retrieval augmented generation, which is a way of saying “stick to these documents,” and with better sharing models coming out with these tools, and larger context—when there’s more availability of context—we’re going to go from AI as a personalization tool to AI as a socialization tool. You can have an AI that knows that it’s only talking to members of the Girl Scouts of America or the Democratic Socialists of America or the Nazi Party of America.
This is where you get this fork in the road. One way you could build those tools—and this is actually relevant to some of what Saffron has done with Anthropic—is to push them in the direction of pluralism; when it sees members of a group having different opinions, to surface and try to validate those opinions and say, “You ought to either debate this or you should decide that you can tolerate a range of views within your group.” But another way to do it is just to hide the fact that you disagree, and for the AI to start producing language that papers over or masks places where there’s internal disagreement among the group, because you’d be optimizing for emotional cohesion rather than for intellectual cohesion.
Consider the market-driven incentives of current AI companies, and most especially OpenAI, which has in less than a thousand days become one of the 10 most trafficked websites in the world because it tells people what they want to hear. I mean, even Wikipedia doesn’t do that, but, boy, if you want to be told you’re right, ChatGPT, that’s your go-to. The market incentives for OpenAI are almost entirely directed towards telling groups what they want to hear. And I start from enough of a pluralistic place that I think that’s bad for democracy.
I think you need outside forces, nonprofits, and, ideally, governments that would care about the health of democracy, that would push towards group AI at least having the opportunity to surface pluralistic disagreement as opposed to masking it.
Masnick: I was going to say, one thing I’ve noticed about some of the newer models is they’re not as quick to glaze, and I had a very interesting experience which I’ll talk about very quickly before going on to the next question. I use various AI tools as an editor—I write everything first and then I ask it for an opinion—and for a while it was sort of, “Oh this is a really thoughtful piece.” Then, when Claude 4 came out, the first time I asked it—I wrote this whole piece—it said to me, “You go way too deep into the legal weeds.” And I have it actually trained on a bunch of stuff that I write, so it knows that it’s writing for my audience, and I said back to it, “That’s what I do, going deep in the legal weeds, that’s me.” And it said—I took a screenshot of it, it was so great—”I know you go deep into the legal weeds all the time, but this time when you do it, it hides how fucking crazy this story really is.” And I was like, “Wow, okay, it’s got an attitude now.”
Shirky: But the counter to that story is that GPT-5 just launched and was much less sycophantic, and users revolted so badly that within 72 hours they said, “We’re bringing it back”—because so many people lost their friends, their flattering mirror. Like, people are already committed to glazing. You can’t put yourself forward as the average user.
Masnick: That is fair.
Puja, I want to bring you into the conversation. When we were discussing what we wanted to talk about, I thought you had a really interesting provocation, in part pushing back on some of the stuff that I’ve been talking about. I focus a lot on user choice and agency and things like that, and you suggested that that is insufficient and won’t be enough, and actually is subject to capture and could lead to bad results if you over-index on that. So I wanted to give you a chance to explain your position, where you think we need to be thinking about these things.
Puja Ohlhaver: Well, I’m all for optimizing agency—I think the word you said was “control”—and there’s a parallel experiment going on with AI right now, and that’s cryptocurrencies and they’re converging. If you look at what happened with cryptocurrency protocols, it was all about the self-sovereign individual and user control over your wallet, and all these systems were just built about protecting the individual’s control over their wallet. Now how has that experiment actually played out? We have a whole economy living on Twitter, on an attention auction, rotating between shitcoins and memecoins in a zero-sum game, and that actually hasn’t enhanced productive capacity. The innovation, let’s be clear, has actually really just helped scammers and fascists get rich and poor people get poorer. So the emphasis on user control … control is one side of the equation of power—my theory of power is there’s also information. And we have to acknowledge the information environments, especially when we are all engaged essentially in a tacit attention auction on social media.
“The Enlightenment conception of human beings as these rational, optimizing truth-seekers starts to take body-blow after body-blow in the second half of the 20th century as you get psychology, sociology, economics, constantly demonstrating that we are emotional, satisficing, motivated reasoners. We are, as a species, given to believing things that affirm our viewpoints more than we believe things that contradict our viewpoints, without regard to factual accuracy. … The fateful choice with AI—the current generation of generative AI tools since roughly the end of 2022—was to do the last 10% tweak on the language model, something called reinforcement learning from human feedback. We show people: ‘What do you like better?’ ‘Do you like this one better or do you like that one better?’ over and over and over at incredible scale to gradually tune the tools to give answers that people like. And if we were rational, optimizing truth-seekers, that would have led these tools towards more factual accuracy, and more saying, ‘I don’t know the answer’ or ‘there are competing answers.’ But because we’re not those people, we get glazing and sycophancy instead.” — Clay Shirky
We’re entering these balkanized epistemic environments, so bridging to these earlier comments from Clay and Saffron on groups and the importance of coalitions, representing the coalitions and groups and communities to which we belong, is actually key for democratic governments. I’m going to simplify the theory of democratic governance a little bit, but one way to look at it is that democratic governance is there to provision shared common, collective, and public goods, and markets are there to provision private goods. When our Constitution was formed we had this theory of geographic representation where our information was generally geographically localized and our representatives could go to Washington and duke it out and find common shared public goods through consensus.
Now, because we are also online in essentially what are tacit attention auctions and are balkanized, we need to find ways to represent our participation in different balkanized epistemic environments to find common goods and public goods across these divides. Clay was highlighting some experiments that Saffron has worked on, but there’s also another set of experiments which Audrey Tang has been very pivotal in. One is Talk to the City, in which participants and parties—for example, in Tokyo in the gubernatorial election recently—joined an online WhatsApp group and gave their input, and AI was used to facilitate bridging statements, and then there were clusters of opinions and candidates were able to get a pulse on where are their consensus points and where are the cleavages. If you want to highlight conflict, you can; if you want to highlight consensus and find common shared goods, you can.
Engaged California is a recent initiative, after [the state’s] wildfires, that’s building on the same principles as Talk to the City to garner public input and find low-hanging-fruit common ground. I think this is really important. Earlier I was talking to Francis Fukuyama, and there seems to be this general sense that there is this compromise between participation and speed. And, actually, we can leverage these technology tools to have both at the same time, and make democratic participation have the speed and “funness” of markets, but also legitimacy and participation of democratic governments.
Masnick: All of these discussions keep coming back to some inherent tensions. We’re talking about things around pluralism and consensus and agency, but also finding common ground among groups. I’m curious: Across the board, for all three of you, how do we balance those things? Because the thing that I struggle with is that a lot of this is really nice and interesting in theory, and then when you have bad actors come in and say, “Wait, we can exploit that to our own advantage,” then suddenly they can use the same language of pluralism and consensus but to very anti-democratic ends. How do we think about those things beyond just saying these are good platitudes to think about and good things to go towards? How do we implement them in a way that is actually good?
Huang: Can I just clarify, when you say “bad actors coming in and exploiting these things,” what are you thinking of?
Masnick: I’m thinking of a lot of different things, because it could be used in different ways. So in some sense you can think of it just as individuals or groups using the technology in ways to distort a conversation or a discussion; or you could think about it in terms of those who have control over those platforms trying to achieve their own interests. In the social media context—it’s a little different in the AI context—the example is Elon Musk taking Twitter and turning [it] into X for his own political interests. When we’re talking about these things in terms of how you set up AI tools to do this, is there a risk that those in charge of it start twisting it to their own interests as opposed to the interests of the public at large? Or have they already done so?
Shirky: Yes, that risk is permanent; there’s no political system that has a sort of set-it-and-forget-it feature of representing the broadest polity. But to pick up on something said earlier about groups: a lot of these conversations start about technology and end up saying the United States needs a parliamentary system. There’s this aggregation issue—there is no coherent policy that tells you both what our position should be on chip exports to China and on legalization of marijuana. There isn’t a way that those things automatically align, so you end up needing to aggregate opinions into policy preferences that include the inevitable trade-offs.
In parliamentary systems, the disappointment of the trade-offs comes after people are elected; in the United States it comes before, because you’re forced to pick a party that you know in advance won’t represent you. Whereas in the parliamentary system you go, “I’m going to vote for the Greens and the Greens are going to make the government much more environmentally friendly” and then the Greens have to go into coalition with other people. The trade-off [there comes] after you’ve done your bit, so you feel a little bit better than you do in the U.S. context.
But what’s really interesting now is that the groups can assemble around issues and it can be much more dynamic. The thing you said, about there not being a big trade-off between participation and speed anymore, that’s one of the legacies of social media. One thing it could do is to disaggregate policy preferences and let people know, and let people let each other know, in closer to real time, where they’re going. It’s been clear that legalization of marijuana has been a winning issue in the United States for decades, but because the two parties were stuck it was very difficult to move that particular issue. But if you start to un-bundle the issues a little bit, because you can aggregate groups of humans fast with larger scale, you might be able to rethink what it takes to aggregate policy preferences in ways that the government will respond to.
Ohlhaver: I’ll give a [bit of a] different take on this, on the bad actors. Policing the boundary between cooperation and collusion is a very tricky question, and my perspective of who’s a bad actor is kind of, “Oh, are they aligned with me?” And you look at these different balkanized epistemic environments and social media platforms, whether it’s Musk or Zuckerberg, and what you’re seeing is actually limitations to growth because they have so much influence. One way for these platforms to actually overcome those limitations of growth, and lean into what has been driving economies of scale, scope, and network effects, is to actually incorporate the inputs of participants—I don’t like saying the word “user”; I prefer “participant”—to these systems.
And not just as individual users, but again, clusters; enabling the surfacing of opinions through things like Polis or Talk to the City. You’re enabling the surfacing of clusters of groups to enable a prioritization of not just, say, product features that serve common goods, but also how to regulate these as social media entities and even introduce nested forms of regulation within communities. That would actually expand the scope and reach of these platforms while diminishing the influence of the singular CEO. You actually approach what I call this paradox of decentralized monopolies.
Masnick: Just to respond to that, I think you’re right. My question, then, to push you a little further, is: How do you make that a reality? Polis has been out for a decade; vTaiwan, which was built on it, is almost a decade old at this point. But we haven’t seen those tools used as widely. There are more examples, and you mentioned a few of them earlier; but it’s one thing to see these tools in action and another to actually get people to embrace them when the incentive for most people in power right now is not to. So how do you get past that?
Ohlhaver: If I’m the CEO of Facebook, I’m facing a trade-off between my influence and my actual reach of the platform …
Masnick: Is that true? Does Mark Zuckerberg think that right now?
Ohlhaver: Well, [he] should, because there is this balkanization: people are on Truth Social, on X, and that actually does limit the conversation. I’m on Substack and not here, and the ability for cross-collaboration, ideas, generation … if you want to actually expand the reach, then expanding the class of influence is necessary. Otherwise you just hit these ceilings.
Masnick: Saffron, do you have any thoughts?
Huang: Yeah, a couple of thoughts. You’re saying there’s a trade-off between individual agency and these sort of social communities trying to do things in a more pluralistic, community-based way, and pointing to X as an example of that. I see X more as there’s just really, really rich people now who can buy whole social media platforms and lose money on them and it’s fine. And they also care more about influence and attention than just accumulating money now. That’s also part of why I wrote that piece on sharing AI’s future wealth, because I do feel like so many things come back to who has the money to do things and who doesn’t, and how is money spread out over society.
On the individual agency versus social trade-off, I don’t think there’s as much of a trade-off, because even these ideas of making issues more granular or being able to participate in social media or on group chats or whatever in a more productive way … people are already doing that. People want to be social, it is part of their agency to want to be social and to do that productively. So making people feel like that is more effective is part of augmenting individual agency. So I feel like there’s not much of a tension.
I think the real tension is your question of how do you even make these things powerful, because people don’t want to participate in just another discussion of stuff, and everybody’s really short on time, so if you don’t link the public input process or the discussion to some concrete legible output, then why are you asking for people’s time on this? So I think the question of connecting it to power is the important one, and starting with democratic government, which has a mandate to represent the will of the people, is a good place to start. I think Engaged California is really cool; actually, a lot of countries around the world have adapted Polis or the sort of vTaiwan software package, basically, to their own languages. In Brazil, there’s a Portuguese equivalent of it, which is very cool. I don’t know how you speed it up, and especially not here, but I’m excited to see what happens with Engaged California because it’s one of the biggest states and a good place to begin.
Masnick: I was just going to make sure, are people in the audience aware of Polis and vTaiwan? They’re very interesting tools and you should know about them if you’re interested in this. Who wants to give the quick summary?
Ohlhaver: Sure, so with Polis, say you have a group. You can seed certain statements to get the conversation started—people respond up or down. Then they can add statements, and people start to respond up or down to those statements, and the statements that get more engagement might rise to the top. Eventually, what you start to get are clusters of individuals, and based on those clusters you can actually leverage AI to form bridging statements that might help find common ground between them. That becomes your learning opportunity.
Something like Talk to the City does the same thing, and takes the same information, but instead of, say, clustering individuals into tribes or groups, it clusters opinions. That surfaces common ground faster, so there are different variations but essentially it’s relying on participant input and feedback to iteratively find what are the common-ground and tension points. It’s important because a lot of people are informationally correlated, so something like “one person one vote” doesn’t necessarily capture this nuance and variation.
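For readers who want to see the mechanics, here is a minimal sketch of the clustering step Ohlhaver describes: each participant’s up/down votes form a row in a matrix, and clustering that matrix surfaces opinion groups. The toy votes and the PCA-plus-k-means pipeline are illustrative assumptions, not Polis’s exact implementation.

```python
# A minimal sketch, not the actual Polis implementation: participants' up/down
# votes on statements form a matrix, and clustering that matrix surfaces
# opinion groups. The data here is hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Rows = participants, columns = statements; +1 agree, -1 disagree, 0 pass/unseen.
votes = np.array([
    [ 1,  1, -1,  0,  1],
    [ 1,  1, -1, -1,  1],
    [-1, -1,  1,  1,  0],
    [-1,  0,  1,  1, -1],
    [ 1,  0,  0,  1,  1],
])

# Reduce dimensionality, then cluster participants into opinion groups.
embedding = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print("opinion group per participant:", groups)
```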
Masnick: Yeah, and just really quickly, Polis is open-source software and it was actually—though it’s gotten very distant from that—the original basis of what became Community Notes on X. So if people are familiar with Community Notes, and how it tries to hit a point of consensus, that’s a very simple illustration of what Polis can do.
Huang: To add to that, it’s really interesting because they don’t try and surface opinion groups from looking at your demographics, where you’re from, or anything like that. When people join a Polis conversation [they] can vote on all the previous statements or any subset of them, and also add [their] own. It basically tries to cluster people into opinion groups, so it surfaces new coalitions or new axes of alignment that you wouldn’t necessarily [get] if you were just going to say, “OK, what age are you, what race are you?” It’s much more about what you believe.
“Democratic governance is there to provision shared common, collective, and public goods, and markets are there to provision private goods. When our Constitution was formed we had this theory of geographic representation where our information was generally geographically localized and our representatives could go to Washington and duke it out and find common shared public goods through consensus. … Now, because we are also online in essentially what are tacit attention auctions … [In] one experiment, Talk to the City, participants and parties—for example, in Tokyo in the gubernatorial election recently—joined an online WhatsApp group and gave their input, and AI was used to facilitate bridging statements, and then there were clusters of opinions and candidates were able to get a pulse on where are their consensus points and where are the cleavages. If you want to highlight conflict, you can; if you want to highlight consensus and find common shared goods, you can.” — Puja Ohlhaver
And then the bridging statements come up from that: Here are the sort of bottom-up, natural opinion groups for this particular issue; just speaking to the issue-specific coalition, here are the statements that are popular across groups. So there might be one group that really, really hates this issue, but they’re a minority so there’s a tyranny of the majority thing where it directly affects them but they’re such a small group that everybody runs roughshod over them. The way that Polis guards against this, across the natural opinion groups for the specific issue, [is it asks,] “What are the statements that have relatively high average agreement across all of them?” to mitigate this effect. The way that they calculate these things is really interesting in terms of what kinds of agreements they’re able to surface.
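Continuing the toy example above, here is a hedged sketch of that consensus scoring: compute each statement’s agreement rate separately within every opinion group, then average across groups, so that a small group counts as much as a large one. This is an illustrative heuristic rather than Polis’s exact formula, and the group labels are hypothetical (they would come from a clustering step like the one sketched earlier).

```python
# A hedged sketch, not Polis's exact formula, of group-aware consensus scoring:
# each statement's agreement rate is computed separately within every opinion
# group and then averaged across groups, so each group counts equally
# regardless of size.
import numpy as np

votes = np.array([  # rows = participants, columns = statements; +1 agree, -1 disagree, 0 pass
    [ 1,  1, -1,  0,  1],
    [ 1,  1, -1, -1,  1],
    [-1, -1,  1,  1,  0],
    [-1,  0,  1,  1, -1],
    [ 1,  0,  0,  1,  1],
])
groups = np.array([0, 0, 1, 1, 0])  # hypothetical opinion-group labels

def per_group_agreement(statement: int) -> list[float]:
    """Fraction of cast (non-pass) votes agreeing with a statement, per group."""
    rates = []
    for g in np.unique(groups):
        col = votes[groups == g, statement]
        cast = col[col != 0]
        rates.append(float((cast == 1).mean()) if cast.size else 0.0)
    return rates

scores = [np.mean(per_group_agreement(s)) for s in range(votes.shape[1])]
best = int(np.argmax(scores))
print(f"most bridging statement: #{best}, cross-group agreement {scores[best]:.2f}")
```

Because the average is taken over groups rather than over individuals, a statement that the minority group rejects gets pulled down even if the majority group loves it.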
I started a nonprofit a few years ago called The Collective Intelligence Project and we work on a bunch of AI and democracy projects, and we use Polis for a lot of them and have worked with Audrey. So for an example of how we use this in terms of connecting it to power, we would find a tech company or an organization making decisions about AI that might be influential … for example, we went to OpenAI and we were like, “How are you deciding how to evaluate your models? What are the evaluations that you’re running on them? And do you have any public input on them?” [They essentially said,] “No.” So we ran a process to get more public input into that, because what engineers think might be important risks to evaluate for are not necessarily what more groups of people would think are important risks to evaluate for.
Similarly with Anthropic. [We] did one where, through Polis, we had a random sample of a thousand Americans choose the rules of behavior for an AI model, and we ran the public input process and Anthropic trained the model and compared it against the baseline. It was as performant but less biased along a bunch of social bias dimensions—like age bias, disability bias, gender bias, all of these things—than the baseline model where the engineers had picked the rules.
So the way that we approach using Polis is: pick a stakeholder that has an important decision to make that people would care about, and convince them to use Polis to impact the system, or use some kind of tooling like this. These are sort of one-off, piece-by-piece things; it’s not like an ongoing infrastructure. And I would like to see more of that.
Ohlhaver: The two early success cases in Taiwan I want to highlight here—since this is a policy-oriented group—[were] actually around two very controversial issues: same-sex marriage and Uber regulations. The constitutional court in Taiwan ruled that same-sex marriage was legal, [and] there [was] a huge backlash against that and a public referendum against it. So what Audrey did in Taiwan was run a Polis and find the coalitions, the divides, the common points. What ended up happening was, as a parallel process, there came these compromise points where, rather than redefine marriage, which was controversial to traditionalists and conservatives, [the approach was to] confer the rights that were important to gay couples—which is what they wanted—and create a special statutory exemption without calling it “marriage.” That’s what ended up being the compromise and becoming its own statutory language. So they achieved the objectives with compromise.
The same thing happened with Uber regulation, on how Uber was going to come in when it didn’t want to be regulated like a taxi company and claimed to be a platform. Polis was used to help find a compromise where Uber had to negotiate with taxi fleets, and that actually ended up being a form of collective bargaining for taxi drivers who wouldn’t otherwise have had leverage on their own to negotiate better profit sharing through their taxi fleets and certain other forms of safety regulation. That was a way to quickly deal with new cultural and technological changes with legitimacy.
So I think as we grapple with cost disease in housing, health care, and education, these—as just parallel processes—can help policymakers and lawmakers find common ground very quickly, [including] bridging language that doesn’t trigger groups and helps us move forward.
Shirky: I’ll just add to the bridging-language thing. As a sort of natural experiment, a bunch of the American letter-writing sites—where groups get together and write to their congresspeople—had algorithms that were almost always tuned to surface areas of maximum agreement and instantly sink anything where there was significant disagreement. So if you suggested something that would create a debate for the group, it would sink from the homepage within minutes, because the incentive of those platforms was to maximize good feeling, not the kind of bridging language that Polis and vTaiwan have.
Politics is hard work, and you can shortcut it with emotions; when platforms are optimizing for growth they very often want to create the emotions that create cohesion even if there are discussions that those groups ought to be having.
Masnick: Correct me if I’m wrong, but if I remember correctly with vTaiwan and those two issues in particular, I think there was some sort of agreement that parliament would abide by whatever conclusion came out of that system?
Ohlhaver: I’m not sure, actually. I don’t think so.
Masnick: I remember something to that effect, so maybe I’m wrong on the specifics, but that made me think about … again, there are a few different elements here. One is the technology and how that advances; and one is both the government acceptance of it and/or regulatory positioning; then there’s the question of the social movements as well. So in the examples that we’ve used with Taiwan you saw pieces of all three of those things: you had the social movement that spoke up about these things; they had the technology in place, they could do stuff; then you had a regulatory situation in which—whether they committed to it or not—they then used that tool to actually do something with it.
“I think we really need to be thinking about where tech and democracy connect to each other in important ways. You can’t think about those things as separate—technology and democracy are deeply, deeply intertwined. And that is true of AI technology as well. … Audrey Tang, the former digital minister of Taiwan who’s always very thoughtful on this stuff, made a comment years ago at a conference saying that the internet and democracy are not two things, they are one and the same. … Used properly, it is a very empowering tool for democracy; but used poorly, or used under the control or pursuits of billionaires, companies, or governments, it could be very problematic. Figuring out that balance is really important.” — Mike Masnick
Do you need all three of those pieces in alignment to make this work, or is there one of those areas or two of those areas that you can invest in that sort of drags the other ones along to make those things effective?
Ohlhaver: I don’t know. But, fundamentally, we’re talking about communication technology. You brought up Community Notes as an example. Maybe this can expand our conversation to how we think about the political economy of AI and in particular centralized surveillance AI. To the extent that you have communities and groups—and faith and family groups are really important; we haven’t talked about them at all and they’re probably critical in this fight we are in today—when you’re able to give groups some sense of autonomy and even leverage AI agents that represent their principals’ interests in an incentive-compatible way, what you can do is enable communities to figure out how to bridge effectively, and communicate their shared interests, and message with other groups for greater cooperation, at a time when language in this attention auction we are in is being weaponized and balkanized.
Part of the problem, the collective action problem and common-knowledge problem we face today, is just figuring out how to communicate across many different groups—faith groups, family groups—when there is a shared common ground, and finding those right words. How we use language, process language, and communicate with each other is fundamental to the political process, and restoring that contextual integrity alone is very important for democratic integrity and legitimacy.
Huang: I wanted to actually slightly change the subject/bring up a couple more things. The title of this panel is “AI and liberal democracy,” [and although] I feel like we’re going to see AI as a communication technology, there’s a lot more to both AI and liberal democracy than that.
AI is an information technology, and democracy has a lot of information-processing and decision-making; democracy is kind of a collective information technology. AI is also an information technology, so changing how information is processed, and who can understand what, can have a really big impact. Attention is very scarce, and one of the things that I’m excited about in terms of AI and civic engagement is … I think that people just don’t have the time to engage with politics really. People read the news and learn about high-level national politics, but even then I think people don’t necessarily make particularly informed votes [at] the national level, and then a lot of things are actually determined at the local and state level.
Last year, when I had just joined Anthropic [in the lead-up] to the November elections, I did this experiment internally where I had Claude help people understand their ballots. Because, in California, there’s like 20 things on the ballot and they’re all very niche issues, and people are like … there’s a voter guide from “The League of Pissed-off Voters” and people go to that; people will just [say], “I’m gonna go to ‘The League of Pissed-off Voters’ and just see how they want me to vote and then I’ll vote that way.” And I think that’s basically—what do you call it, “liquid democracy”?—where you’re just delegating your vote, which is not ideal. So I basically got two groups of people internally—I got people internally to volunteer for this experiment—where I had half of them use Claude. I put all of the documents relevant to this particular ballot, all the legal documents and everything, and had them interact with Claude to understand what was going on; and then the other group, I just gave them the list of documents and I was like, “Here you go, and you can also use Google.” And the people who used Claude to understand the ballots … I asked people beforehand also what were their main issues on these ballot measures. And Claude seemed to be quite good at mitigating these issues and [helping] people get to a better understanding of the ballots much faster.
I had a survey to quiz them on their understanding of it afterwards. And what was interesting to me is that they had a much better core understanding of the logic of the ballot, like what this new piece of policy is actually going to do. But they had less understanding of the contextual details than people who used Google or just read the documents directly, because when you read things directly or via Google the answer is not directly given to you. You’re picking up all of these contextual things as you’re looking for your answer, so people learn slightly different things. I think it’s really interesting and maybe a way to … maybe you can have AI agents that run around and pick up all the information for you, but I think it’s potentially a way to try and get at this attention issue that we have with civic engagement, and the need for expertise to decide on things.
Two more things I’ll quickly say: the first [is] AI for civic engagement; the second [is] transparency and accountability. So you could have AIs crawling the internet and trying to check for like discrepancies in what the government says it’s doing versus what it’s actually doing—something like that. I think we pay a lot of attention to the political process, but as I alluded to earlier I think a lot of what affects people is state capacity and their local government and state government. And if they don’t feel like those things are working for them then they’re probably going to be very disengaged at those levels and above. So I do think that using AI in government to make the paperwork of government less painful, to enable very cash-strapped departments to do more with less, and to deliver better services, is actually quite important for people’s general belief that the system works for them and they can live good lives.
Shirky: On the AI for civic engagement thing, there’s a really interesting moment now where—right now it’s just talk—the Trump administration is complaining about “woke” answers from AI and wants to intervene in the models with low-rank adaptation or retrieval augmented generation, or any of the menagerie of tools for affecting the output after it goes through the model. It’s roughly analogous to the situation Baidu—China’s Google—finds itself in. It’s [drawing from] the whole internet so it has everything in there, and then it has to grab references to Tiananmen on the way out and suppress them. And that’s roughly what the Trump administration is proposing now.
For Americans of a certain age, there was a moment where The Guardian suddenly became an absolutely essential news source in the immediate aftermath of 9/11, because the national press became a nationalistic press and there was not great information about the worldwide movement that was targeting the United States. The Guardian were just better reporters on it because it wasn’t in their backyard, and [its] readership explodes after 9/11. We can start to see models in France, the UAE, and China, in particular, becoming better tools for civic engagement than models in the United States if the current government intervenes in major commercial models.
Masnick: So we get our own Great Firewall.
Shirky: It’s weird—it’s like inside-out, where the stuff we’re threatened by is the stuff inside the country. I don’t exactly know how to describe it, but it’s basically … there’s no such thing as unbiased AI, but if you want to avoid the particular anti-woke bias of the current administration you end up going to Mistral or you end up going to Qwen or DeepSeek or whatever G42 is building in the UAE. There are going to be a bunch of these sort of nationally-supported foundation models, which ironically might be better tuned to American civic norms than if the American-hosted companies are forced to suppress certain kinds of answers and elevate other kinds of answers. Musk seems to be doing this with Grok on a personal level, but the Trump administration has proposed that substantially all large AI models be prevented from offering woke answers, because they can define “woke” to mean anything they like. It can be any amount of race, class, gender, sexual orientation representation bias that they want to build in. So we could ironically be accelerating the globalization of AI just by making Americans not able to do that round-trip inside our own country.
Masnick: I think there’s a bunch of different directions you could go with that. But one of the things that I’ve been wondering about a lot gets back to the question of who controls that particular tool and whether or not you have agency over those tools. There are all these discussions about open-source models or small models that you could run locally, or things that maybe help get around that. But one of the things that I’ve noticed lately is also that I tend to play around with multiple models and often will run the same query on multiple models and get very different answers. I find that to be really powerful, and I actually like the fact that I use a tool that lets me just plug in whichever model I want and then sort of play around and see what answers come back. But I don’t think most people do that. Once again, I’m weird. I keep asking the people who make the tool that I use to build in a panel feature: I want to be able to ask it a question and have multiple AIs argue over the answer for me. But I’m wondering: Are there other things like that that will help get people out of being in a particular bubble or a particular zone, that then becomes a target for control?
Ohlhaver: I have a different—maybe controversial—view of what the political economy of AI should be. I don’t think a future of oligopoly of frontier models is really desirable. I see a future where we have as many AIs as the diversity and multiplicity of our relationships, and agents representing those coalitions and groups—whether those be companies or faith groups.
Masnick: How do you get there? I agree—I think that’s a great thing. But how do you get there?
Ohlhaver: Well, there’s an incentive-compatibility issue to deal with to make sure these agents are actually responsive to their principals, and we can talk a little bit about that. It’s really the price of attention and cost of influence, which is the title of my last paper, and how do you influence agents and how agents act on your behalf. I think there’s a fundamental tension between money and voting. You can think about money as buying attention and voting as influence, and if you take my theory of power, which is power as information and control, it [forces] a bifurcation and a trade-off between whether you want more information or you want more control. Adopting both aspects, the things that we like about money and the things we like about voting, and discarding their weaknesses, is a kind of mechanism—probably one of many that we can experiment with—to have incentive-compatible agents. But having a broader political economy of many AIs inter-operating, negotiating, so there is no one winner-take-all … I think we can do better than oligopoly, and we should do better than oligopoly, because that’s unstable and also maybe trends towards geographical oligopoly which we don’t want, from Panama to Greenland.
So how do we get there?
I think there are experiments that we can do. One of the big mistakes I think that we’re making in the political economy of AI is how we treat data. These frontier models rely on data and compute, and—no surprise—there’s a race to hoover up as much data controlled by the government, and leverage sovereign debt markets and integrate with the government, and that’s the accelerationist race we face right now. But of course that’s the political economy of, like, a Venezuelan oil economy where participants do not get rewarded for their data contributions and instead have to end up in a situation where we get doled out a UBI. That’s undesirable.
So actually creating two-sided marketplaces that recognize data as something that is socially generated in groups, in conversations between people, enables them to govern the uses and abuses of that data and at the same time trade on the value of it as a coalition—whether it be a Hollywood actors’ guild or a music guild or any kind of coalition. Anthropic cannot hoover up our data inputs but actually has to pay for them and negotiate their value, and not rely on authoritarian funding sources from Saudi Arabia to subsidize that. Changing that political economy, and looking at data not as something that is a public good and not as something that is a private good but [as] something in between, and modeling it as the economic good that it is as an input, can actually achieve the best of both markets and democracy and encourage this plurality.
Shirky: I want to be a little more—maybe much more—pessimistic than Saffron about the civic engagement stuff. Because I think civic engagement runs into the same problem as the democracy problem as a whole, and also runs into the “people don’t want two answers from two different AIs” problem. Everybody who went to graduate school thinks of graduate school mentality as being a kind of low-energy state, that people are constantly seeking new information, they’re constantly willing to play off conflicting goals. Nobody wants that. And I say this as somebody who at NYU has to fight the fact that ChatGPT, less than a thousand days old, is now treated as an authority figure among students. If I could just get students to use two tools we would be further ahead than where we are.
But in fact the consensus around ChatGPT is not just an accident of its being first to launch in this particular interaction paradigm—it’s the fact that, as also happened with Google, people want there to be a consensus-generator that’s not them. And I am afraid, as to your experiment, that if we put people together we have to think through these issues: they gain information and clarity and thoughtfulness in one direction or another, whether they’re using AI or they’re using Google, but there’s a sort of humbling effect to that experiment. She told the people, “We’re paying attention to you, but we want to help you think through these.” The attentional difficulties they were [having]—you know, “I’m busy with work, my kids are stressing me out, whatever”—that is the normal case.
This gets back to your original question. I don’t think we can posit a natural human force that leads us towards engaged pluralism. I think every place it’s put in place, it’s put in place because people who are thinking long-term like the downstream effects of pluralism more than they like the alternatives. But it’s this weird thing: a lot of technology starts with a kind of undemocratic movement by a small group to say, “We’re going to do things differently,” and then you try to create positive norms. So on the supply side and the demand side with AI right now, we’re really challenged, because neither the federal government nor the major U.S. AI firms have much incentive to push us in the direction of pluralism.
Audience Question: This whole time I’m just thinking about who wrote Project 2025, and how it seemed kind of old-school Heritage Foundation. As far as I understand, there was no AI use of, “What did MAGA think about the Department of Transportation?” This is a small group that wrote Project 2025. My point being: I really like this idea of a vision of liberalism using AI to democratize communities and I think these models … we shouldn’t undersell them. I think we should expand Engaged California—there should be Engaged Texas right now with the redistricting issue. I just want your comment about that, what’s the long-term plan in terms of what we’ve all talked about? How do you center AI into that bigger agenda?
Shirky: I’ll just say that I would be surprised if the people behind Project 2025 did not use AI. Right now it is a tool that’s drawn into … people hate writing from a blank page. Even if they just started with something they didn’t like and edited their way out of it, very often AI is in the background. So, I wouldn’t be comfortable saying we’re sure that this was produced without it. But I will say that as we move towards things—Google’s NotebookLM is a particularly good example of document-centric rather than conversation-centric tools—we can start to say this group has some principles they care about, or this group has some goals we’re trying to articulate.
“AI is an information technology, and democracy has a lot of information-processing and decision-making; democracy is kind of a collective information technology. AI is also an information technology, so changing how information is processed, and who can understand what, can have a really big impact. … I think we pay a lot of attention to the political process, but … a lot of what affects people is state capacity and their local government and state government. And if they don’t feel like those things are working for them, they’re probably going to be very disengaged at those levels and above. So I do think that using AI in government to make the paperwork of government less painful, to enable very cash-strapped departments to do more with less, and to deliver better services, is actually quite important for people’s general belief that the system works for them and they can live good lives.” — Saffron Huang
I think at least one way to go in the direction you’re pushing to is to have civic groups who are using these tools talk about the use of the tools separate from their goals. Which documents did you put together? How did you share the output? Google is constantly hearing from us—NYU happens to be a Google shop. They’re constantly hearing from us: We want more sharing; we want the ability to export things; we want the ability to say, “Here’s a document store for a whole department not just for an individual professor,” etc., etc. And Google, as is always the case, shipped for individuals and now they’re backfilling on group orientation. But where groups use AI, if they could share practices around sharing documents and having that help them coordinate, including coordinating conversations about the disagreements, that would be a civic layer we could build right now without waiting for new capabilities in the tools.
Audience Question: So, in my conception of humans we are both smart and lazy, and I mean that in the least offensive way possible. We don’t want to exert too much energy. I’m wondering: How can we optimize our use of AI so that it makes us smarter but not lazier, as I think social media has made us intellectually lazy with one another in many ways that are very negative?
Shirky: One of our mottos is: AI to connect people, not to replace people. The problem we’ve got with students is even if you give them good uses, you’re not taking away the potentially bad ones. So, we’re actually in the middle of a kind of medieval turn towards blue books and oral exams and the more directed … like, if I want to know what you know, I’m not going to look at your paper anymore, because “writing,” the noun, has been decoupled from “writing,” the verb. So I have to talk to you and find out what’s in your head. But I think that kind of engagement is probably going to be true … you know, the maximum case of this is humans huddling together for warmth while most content—no, seriously, most content in the world—will be produced by machines, and we will only know we’re talking to each other if we have a shibboleth of some sort. But I do think that that kind of human-to-human connection is where assessment is going, because production has been decoupled from thinking.
Ohlhaver: Clay, do the students know how to use a pen?
Shirky: No! This is the group that didn’t learn cursive and is now like, “This is a blue book, you can write in it!” It’s really a problem. We need typewriters … yes, all of this AI has been engineered by Big Typewriter.
Audience Question: Basically, industry standards are not cutting it for ensuring pro-social behavior from AI companies. It seems like regulation is necessary. Anthropic is basically the only company that I’m fine calling relatively ethical in the AI space. But the stance on the California AI bill is pretty weak and congressional testimony has been like, “We don’t want to be regulated.” What is Anthropic doing to try to ensure good AI regulation?
Huang: So there is a whole policy team that works on engaging with regulators and talking to them and all of these things, and I’m not as involved with that on-the-ground work. But I think there is a lot of advocacy for regulation and against things like, “Let’s pause regulation for 10 years.” I’m not an expert on the actual talking with people and how that turns out. I’m on the societal impacts team and we basically are just trying to do as much empirical research as possible on how AI is impacting people in various ways, and publishing that into the world to inform regulation or help people make informed decisions. So, publishing stuff on emotional impacts—Are people using Claude as a therapist or falling in love with it or whatever? That’s out there, it’s on the blog. There’s stuff on education. And yes, there’s a percentage of students using it for cheating, and we report that and what it looks like. And there’s the economic index, which is doing longitudinal, privacy-preserving data collection on how people are using AI for economically relevant tasks and in what industries.
The work that I do, and my team, is just one of the good ways to try and be pro-social, just putting information out there. Also because there’s such a huge debate about AI and there are people on all sides of it—it’s great, it’s terrible, it’s whatever—one of the more useful things we can do as researchers and data scientists is just try and make that more concrete instead of entirely speculative. Also doing things that are … the collective constitutional AI thing that I was talking about—How do we get more democratic input into these decisions? All this work takes time.
Audience Question: I’m pursuing a very deep inquiry into a very narrow subject matter [using ChatGPT], so it now goes back for months and it really knows me. I can tell that it’s not only learning … it’s learning me. So there are two things I’m wondering about. One is: Is this going to pull us deeper into these kinds of news bubbles and pockets? Social media is bad enough, but is this pulling me in deeper? And the next thing I’m wondering is: When is it going to close the … I mean it’s not smarter than me, but it’s a lot faster, and it can go a lot wider …
Masnick: Do you know when to tell it no? This is something I’ve been wondering: If people are using it, how often do they push back?
Audience: … when it’s bullshitting and when it’s telling me what I want to get, I think it’s really dangerous because it’s very seductive to be told, “You’re so smart and good looking.”
Shirky: The times you think it’s bullshitting, it’s bullshitting; but there are also times it’s bullshitting that you don’t detect. It’s the false positives that are the risk there. We are already in a world where there are a number of well-documented cases of people having psychotic breaks by being in conversation with these tools, and this is the hardest thing to think through. If somebody is startled by the reflection in the mirror and punches the mirror, you’d know what to do about their bloody fingers. You would say, “That was really dumb.” But when these tools are designed to present as if they’re personalities, it’s very difficult to say what the agency is. For somebody who has a psychotic break talking to the tool, you could say, “You just punched a mirror. Why did you do that?” But in fact all this language like glazing—glazing was originally a way to describe the output text—we’re the ones being glazed, we’re the donuts. And this thing applies that stuff to us, because the metaphor of “You’re acting with another intelligence” is so powerful.
The thing I worry about is less ‘do you know when it’s wrong’ than ‘do you remember that it’s not thinking.’ That’s the hard thing to remember, because the interaction pattern is close enough. We’ve never had anything that felt like thinking when you’re interacting [with it].
Audience Question: I used to direct the Cyber Policy Center at Stanford and I now lead a democracy program. My question is about power, because I want to come back to the point that you made. I love all of these ideas of civic tools and ways for engagement. I spend a lot of time in the deliberative democracy space. The problem was always: these are just technical alternatives or augmentations to that. Government wouldn’t adopt it no matter what we did, and we did some really specific and interesting things, I think. So, ultimately, to use any of these for good requires some public power. When you mentioned that you’re not a fan of windfall or UBI—and I see the challenges with that—I’m curious what you think about data dividends or universal basic capital as an alternative, or some way to ensure that all these benefits can be democratically realized by first ensuring that there’s some economic and political power still distributed in society.
Huang: The Noema piece that we were referring to at the start of this discussion is basically a UBC argument, I would say, but universal basic capital in a way that is geared towards being able to take advantage of AI and the economic value that it can create. I think all of the stuff around data dividends and all of these things are cool—I just feel like they’re not as pragmatic. Or, it’s been hard to get them to work, from all the people I’ve talked to and the things that I’ve seen. I think there’s maybe a way that people can think of creating this in the right shape to really have it take off.
Ohlhaver: There is a way. I don’t understand why we’re agreeing to this Venezuelan oil thing where we just hoover up data and don’t pay for it. I don’t understand. Your question is about power, right? So, my house is high-context: You walk into my house, you see what’s hanging, you see the food, you can learn a lot about me. You go into my neighborhood, you know a little less about me. But I have strict access controls on my house; I have fewer access controls on my neighborhood. We have proximity—why don’t we represent those privacy expectations also in our conversations? With AI agents that represent us, participating in those conversations and communicating with other communities and negotiating values, why can’t we open a new political economy around inputs into these models and reward contributions, indexed based off of your access and information controls?
© The UnPopulist, 2026











