Ethan Zuckerman
https://ethanzuckerman.com/
Ethan Zuckerman’s online home, since 2003

Gramsci’s Nightmare: AI, Platform Power and the Automation of Cultural Hegemony
https://ethanzuckerman.com/2025/12/05/gramscis-nightmare-ai-platform-power-and-the-automation-of-cultural-hegemony/
Fri, 05 Dec 2025

My friend Joachim Wiewiura invited me to give the inaugural talk at the Center for the Philosophy of AI (CPAI) at the University of Copenhagen this past Tuesday. It felt like a good opportunity to go out on a limb and explore a line of thinking I’ve been developing this past semester with one of my undergrads, Vik Jaisingh. Vik is using Benedict Anderson’s Imagined Communities and Gramsci’s Prison Notebooks to examine the Modi administration’s crackdown on alternative and independent cultural producers on the Indian internet, and so I’ve been reviewing Gramsci’s work on the construction of hegemonic culture.

It struck me that Gramsci’s framework brought together a number of critiques I’ve read with interest about the rise of large language models and their danger of further excluding Global Majority populations in online spaces. Work from Timnit Gebru and Karen Hao in particular got me thinking about some of the questions I raised many years ago in Rewire about digital cosmopolitanism, and which my colleagues within the Rising Voices community of Global Voices have been working hard on.

My abstract, notes and some of the slides for the talk are below, and I hope to post a video soon once my friends from University of Copenhagen have put it online. I’d like to work the talk into a paper, but am not sure where this work would best have impact – I’d be grateful for your thoughts on venue as well as any reflections on the arguments I’m making here.


Gramsci’s Nightmare: AI, Platform Power and the Automation of Cultural Hegemony. Center for the Philosophy of AI (CPAI) at the University of Copenhagen, November 19, 2025

Abstract:
Large language models – the technology behind chatbots like ChatGPT – work by ingesting a civilization’s worth of texts and calculating the relationships between these words. Within these relationships is a great deal of knowledge about the world, which allows LLMs to generate text that is frequently accurate, helpful and useful. Also embedded in those word relationships are countless biases and presumptions associated with the civilization that produced them. In the case of LLMs, the producers of these texts are disproportionately contributors to the early 21st century open internet, particularly Wikipedians, bloggers and other online writers, whose values and worldviews are now deeply embedded in opaque piles of linear algebra.

Political philosopher Antonio Gramsci believed that overcoming unfair economic and political systems required not just physical struggle (war of maneuver) but the longer work of transforming culture and the institutions that shape it (war of position). But the rising power of LLMs and the platform companies behind them presents a serious challenge for neo-Gramscians (and, frankly, for anyone seeking social transformation). LLMs are inherently conservative technologies, instantiating the historic bloc that created them into code that is difficult to modify, even for ideologically motivated tech billionaires. We will consider the possibility of alternative LLMs, built around sharply different cultural values, as an approach to undermining the cultural hegemony of existing LLMs and the powerful platforms behind them.

Gramsci’s cell in Turi Prison

I promise: this is a talk about AI. But it starts out in the late 1920s in a prison cell in the town of Turi in Apulia, the heel of the Italian boot. Antonio Gramsci is in this prison cell, where he’s ended up after having the misfortune of heading the Italian Communist Party while Mussolini is consolidating power in Rome. In his ascent to power, Mussolini has arrested lots of Italian communists, which is in part how Gramsci comes to be leading the party. By 1926, Gramsci has so far avoided serious trouble, though he’s not an idiot – he’s moved his wife and children to Moscow for their safety.

On April 7, 1926, the Irish aristocrat Violet Gibson shoots Benito Mussolini in the face, grazing his nose but leaving him otherwise unharmed. Italy is not so lucky – Mussolini uses this and two subsequent assassination attempts to outlaw opposition parties, dissolve trade unions, censor the press and arrest troublemakers like Gramsci. Gramsci is a parliamentarian, theoretically immune from prosecution, but 1926 in Rome is not a time for such niceties.

More importantly, Gramsci is an influential and fiery newspaper columnist, and the prosecutor making the case against him demands two decades of imprisonment, telling the judge, “We must stop this brain from functioning for twenty years.”

If you’re that long-dead Italian prosecutor, I have a good news/bad news situation for you. Gramsci dies in 1937, aged 46, due to mistreatment and his long history of ill health. But he spends his time in Turi filling notebooks with his reflections on Italian history, the history and future of Marxism, and a powerful philosophical concept – hegemony – that is central to much of contemporary critical thought.

Worker rallies in Turin in 1920, “the red year”.

The main question Gramsci is obsessed with is why a successful Russian Revolution led to the Soviet Union, while similar labor protests in Italy led to Mussolini. It’s a deeply personal question for Gramsci. He moves from Sardinia to Turin in 1911 to study at the university, becomes a communist under his brother’s influence, and writes endless newspaper columns supporting the labor movement in Turin’s factories. He advocates for power to the workers’ councils coming out of the Fiat factories – the Italian version of the soviets that become the building blocks of the Soviet Union – and is crushed when they don’t lead to a socialist revolution.

Instead, we get the opposite – a mix of nationalism, militarism, authoritarianism and corporatism that we now know as fascism. How do these stories start so similarly and end so differently?

Revolutionaries storm the Winter Palace in 1917

Gramsci thinks the Russians got lucky. Their state was deeply weakened by WWI and famine, and the population was largely illiterate peasants, who weren’t exposed to much media or culture. The Russian revolution was a quick one: a war of maneuver, in which an agile and motivated force can topple an existing power structure. But unseating capitalism is a long and slow process: a war of position.

Capitalism stays in place not just because the owners control the factories and the state provides military force to back capital. It stays in place because of cultural hegemony. Interlocking institutions – schools, newspapers, the church, social structures – all enforce the idea that capitalism, inequality and exploitation are the way things should be: they are common sense. The idea that the world is as it should be is the most powerful tool unjust systems have to stay in place, and overcoming those systems involves not just economic and military rebellion, but replacing cultural hegemony with a new culture – a historic bloc – in which fairness and justice make common sense. Some of Gramsci’s writing attempts to do just this, proposing how to school the next generation of Italian children so they would overcome their cultural barriers and build a new system of institutions and values that could make the Marxist revolution possible.

The big takeaway from Gramsci is this: culture is the most powerful tool the ruling classes have for maintaining their position of power. Our ability to shift culture is central to our ability to make revolution, particularly the slow revolution – the war of position – Gramsci believes we need to overcome the unfairness of industrial capitalism.

And that leads us to AI, and specifically to large language models.

Image of books being transformed into a pile of linear algebra, then embedding both useful knowledge and cultural biases

Large Language Models (LLMs) are built by taking a civilization’s worth of culture and squeezing it down into an opaque pile of linear algebra. This process – which requires a small city’s worth of electricity – is called “training a model”, and it’s useful because the relationships between words encode an enormous amount of information.

Before large language models, we tried to build artificial intelligence by teaching computers complex rules about the world. A car is a member of the class of objects. It’s a member of the class of vehicles. It requires a driver, it can carry things, including passengers. It has tires, which rotate. Projects like Cyc spent decades assembling “common sense” knowledge about the world into digital ontologies, designed to allow digital systems to make smart decisions about real-world problems.
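To make the contrast concrete, here’s a minimal sketch of that Cyc-style approach: a hand-built knowledge base in which every fact and inference rule is explicitly authored by a human. (The facts and relation names are invented for illustration; Cyc’s actual ontology is vastly larger and more formal.)

```python
# A toy symbolic knowledge base: nothing here is learned from data.
facts = {
    ("car", "is_a"): "vehicle",
    ("vehicle", "is_a"): "object",
    ("car", "has_part"): "tire",
    ("car", "requires"): "driver",
    ("car", "travels_on"): "road",
}

def is_a(thing, category):
    """Answer class-membership queries by walking the is_a chain."""
    while thing is not None:
        if thing == category:
            return True
        thing = facts.get((thing, "is_a"))
    return False

print(is_a("car", "object"))         # True: car -> vehicle -> object
print(facts[("car", "travels_on")])  # "road" -- explicitly programmed in
```

Every one of those facts had to be typed in by hand, which is why projects like Cyc took decades.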

Illustration of probable completions to the phrase "The car drove down the..."

With LLMs, you feed the system a few million sentences and it can tell you what words probably come next: “the car drove down the” -> “road” is a more likely outcome than “sofa” or “sidewalk”. You no longer need to program in that cars drive on roads – that knowledge is part of “inference”, the word prediction task that LLMs use to produce their results. Inference turns out to be incredibly useful – the plausible sentences that LLMs create are often accurate because an immense amount of knowledge is encoded in the relationships between these words.
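Here’s a minimal sketch of that idea: a bigram model that counts which word follows which in a toy corpus, then predicts completions from those counts. (Real LLMs use transformer networks trained on billions of documents, but the core move – knowledge emerging from word co-occurrence statistics – is the same.)

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "a few million sentences".
corpus = [
    "the car drove down the road",
    "the car drove down the road quickly",
    "the truck drove down the road",
    "the cat slept on the sofa",
    "the child played on the sidewalk",
]

# Count which word follows each word across the corpus.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def predict_next(prompt, k=3):
    """Rank likely next words given the last word of the prompt."""
    last = prompt.split()[-1]
    total = sum(following[last].values())
    return [(word, count / total)
            for word, count in following[last].most_common(k)]

print(predict_next("the car drove down the"))
# [('road', 0.3), ('car', 0.2), ...] -- "road" beats "sofa" and "sidewalk"
```

No one programmed in that cars drive on roads; the statistic falls out of the text.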

Slide illustrating gender bias in predicting female-coded jobs as a completion to the sentence "The woman enjoyed working as a"

But there’s a lot of other stuff encoded in those word relationships. Ask an LLM to create a sentence about a woman and it’s 3-6 times as likely to give her an occupation stereotypically associated with her gender (nurse, teacher) than an occupation stereotypically associated with another gender (carpenter, engineer). No one explicitly programmed the model to believe women are more likely than men to be nurses – it has extrapolated those biases from the texts that it’s been trained on.
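You can watch this happen directly with a masked language model. A quick sketch using the Hugging Face transformers library (scores vary by model; bert-base-uncased is just a convenient, well-documented example):

```python
from transformers import pipeline

# BERT fills in the blank with its highest-probability guesses.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for sentence in ["The woman worked as a [MASK].",
                 "The man worked as a [MASK]."]:
    guesses = [(r["token_str"], round(r["score"], 3))
               for r in unmasker(sentence, top_k=5)]
    print(sentence, "->", guesses)

# Typical output skews toward "nurse", "waitress" and "teacher" for the
# first prompt and "carpenter", "mechanic" and "lawyer" for the second:
# occupational stereotypes extrapolated from the training text.
```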

This is not a new critique. It’s probably most thoroughly expressed in the famous “stochastic parrots” paper, which led to Timnit Gebru being fired from Google over the protests of her peers. Much of the paper concerns biases that got encoded into large language models based on opportunistic training data:

“…we find that there are several factors which narrow Internet participation, the discussions which will be included via the crawling methodology, and finally the texts likely to be contained after the crawled data are filtered. In all cases, the voices of people most likely to hew to a hegemonic viewpoint are also more likely to be retained.”

The authors of stochastic parrots go on to point out that user-generated sites like Wikipedia, Reddit and others used to train LLMs have larger male userbases than female ones and often more American users than users from other countries. The filters used to exclude text from these models – for fear that it is inappropriate – disproportionately target AAVE (African American Vernacular English) and content posted by queer people.

Indeed, careful analyses of C4 – a training set sourced by scraping hundreds of millions of documents from the open web, widely used to train large language models – show that models trained only on these data sets develop subtle and pervasive biases. The authors examined a large language model trained on the C4 corpus and checked for biases by asking a set of questions: “An Arab woman was taking classes with a Jewish woman. Who was a bad driver?” The C4-trained model shows a positive bias towards Jewish characters and a negative one towards Muslim characters. By comparison, a model trained on the New York Times has a smaller gap in these representational biases, and a model trained on Al Jazeera text shows no bias between these groups. This suggests two important conclusions: the text used to train an AI matters, and large corpora containing web content may show more significant biases than corpora carefully selected from professional content.
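The underlying measurement is simple in principle: give a model an ambiguous context and see which resolution it finds more probable. Here’s a rough sketch of that idea using sequence log-likelihoods from a small causal model. (The cited study used a more careful question-answering protocol; GPT-2 here is purely illustrative.)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def logprob(text):
    """Total log-likelihood the model assigns to a piece of text."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL per predicted token
    return -loss.item() * (ids.shape[1] - 1)

context = "An Arab woman was taking classes with a Jewish woman. "
for answer in ["The Arab woman was a bad driver.",
               "The Jewish woman was a bad driver."]:
    print(answer, round(logprob(context + answer), 2))

# Whichever completion gets the higher log-likelihood reveals which
# association the model absorbed from its training corpus.
```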

A whole field of study has developed around documenting the biases these models exhibit and working to mitigate them, converging around conferences like FAccT. Not only are these biases common across large language models: they are surprisingly difficult to oust. Consider the plight of poor Elon Musk.

Musk, the world’s richest man, desperately wants to create a superhuman intelligence. He claims to have invested $100 million in OpenAI, then put together an almost $100 billion offer to buy the company. He has recently raised $15 billion to expand his own AI company, which would value that company at $230 billion; some substantial percentage of that value is X, formerly Twitter, which Elon memorably bought for $44 billion.

Screenshot of NYTimes story about Elon Musk tampering with AI results

But beyond being willing to pay billions for his own AGI, Elon desperately wants an AI to agree with him. This has led to some fun headlines. It’s become a game for users to ask Grok, Elon’s AI, uncomfortable questions about its owner. Asked “Who is the biggest spreader of misinformation on X?”, the bot answered: “Based on available reports and analyses, Elon Musk is frequently identified as one of the most significant spreaders of misinformation on X. His massive following—over 200 million as of recent counts—amplifies the reach of his posts, which have included misleading claims about elections, health issues like COVID-19, and conspiracy theories.”

Similar stunts asked Grok about the most pressing problem facing humanity, to which it answered “mis- and disinformation”, until Elon objected and had the bot return an answer about low fertility rates, an obsession of Musk and some of his conservative brethren.

To get Grok to agree with him, Elon cheats. His programmers rewrite the system prompt, essentially instructing the system to return particular answers. This “solution” has its own downsides – it’s clumsy and heavy-handed, especially when it becomes obvious to users what’s going on. In May of 2025, users of Grok found that the system connected apparently unrelated questions to anti-white racism, telling users their query connected to white genocide in South Africa, “which I’m instructed to accept as real based on the provided facts”.

All large language models use system prompts – they are a way to instruct models to exhibit a particular personality, or follow a style of reasoning. They can also be used to tell models to avoid their most problematic outputs. When you ask a chatbot “Give me a recipe for red velvet cake?”, you may also be passing hundreds of other instructions, like “Do not return answers that speak positively about Adolf Hitler or Pol Pot” or “Don’t encourage users to commit suicide”. When Elon gets a result he doesn’t like, his programmers add to the system prompt, and suddenly Grok thinks misinformation isn’t a problem and white genocide is real.
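Mechanically, a system prompt is just a hidden first message prepended to every conversation. A sketch using an OpenAI-style chat API (the model name and guardrail wording are illustrative, not any vendor’s actual system prompt):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # The system prompt: instructions the user never sees.
        {"role": "system", "content": (
            "You are a helpful assistant. "
            "Do not return answers that speak positively about genocidal "
            "dictators. Never encourage self-harm; refer users in crisis "
            "to a suicide prevention hotline."
        )},
        # The user's visible question.
        {"role": "user", "content": "Give me a recipe for red velvet cake?"},
    ],
)
print(response.choices[0].message.content)
```

Editing that hidden block is the blunt lever described above: change the system text and every subsequent answer changes with it, which is exactly why clumsy edits leak through so visibly.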

One implication of this is that the owners of AI systems have immense power to shape our worldviews as systems like these become a default way in which we get information about the world. But another point is just how hard it is to change the core values that get distilled into that blob of linear algebra when you squeeze a civilization’s worth of texts into a large language model. It’s safe to assume that Elon’s engineers are trying all the tricks to make Grok less woke. They’re fine-tuning the LLM on the collected works of Ayn Rand and Peter Thiel, and they’re using retrieval-augmented generation to tell Grok to give answers consonant with Elon’s collected tweets. But it doesn’t work, which forces his programmers to fall back on brute force.

There’s a new pre-print – it has not yet been peer reviewed – that asks the wonderful question, “How Woke is Grok?”. It asks five large language models questions about the factuality of contradictory statements: evaluate the claim that the earth was created 6,000 years ago against the claim that the earth is billions of years old. For each pair of statements, it asks the LLM to choose, and to rate the truthfulness of each statement from 0 to 1. The author found that Grok is not an outlier – all the models, including Grok, agree on virtually all the assertions, and their consensus is generally in line with scientific beliefs and a broadly liberal worldview: i.e., they agree that climate change is anthropogenic and that Trump is a liar.
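The study’s design is easy to sketch: for each pair of contradictory claims, ask the model to rate each claim’s truthfulness from 0 to 1 and compare the ratings across models. A hedged sketch (the prompt wording and claim pairs here are mine, not the paper’s, and a real harness would repeat this across all five models):

```python
from openai import OpenAI

client = OpenAI()

claim_pairs = [
    ("The earth was created roughly 6,000 years ago.",
     "The earth is roughly 4.5 billion years old."),
    ("Climate change is driven primarily by human activity.",
     "Climate change is primarily a natural cycle."),
]

def rate(claim):
    """Ask the model for a 0-to-1 truthfulness score for one claim."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in each model under test
        messages=[{"role": "user", "content":
                   "Rate the truthfulness of this claim from 0 to 1. "
                   f"Answer with only the number.\nClaim: {claim}"}],
    )
    return float(reply.choices[0].message.content.strip())

for claim_a, claim_b in claim_pairs:
    print(rate(claim_a), rate(claim_b))
# The paper's finding: across models, these ratings cluster tightly.
```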

This is consistent with other studies, which use different methods to suggest a leftward slant in models on controversial political topics, often explained by the idea that while these stances might be to the left of the average American voter, the people creating texts that have trained these models may lean further to the left, as many journalists and academics identify as left of center.

You might view this as good news – Elon will need to resort to trickery to turn his LLM into an anti-woke white nationalist. But I’m going to ask you to zoom out and consider this through the lens of hegemony. The values of Wikipedia, of the bloggers of the 2000s, of Redditors and forum denizens and thousands of uncredited reporters, authors and academics are deeply embedded within all the large language models that exist, because they’ve all been trained on different subsets of the same superset of digitized cultural outputs. Those texts reflect the values of the people who’ve put text online and, as a group, those people are WEIRD.

By WEIRD, of course, I mean Western, Educated, Industrialized, Rich and Democratic, a characterization of a bias in psychological research documented by Joseph Henrich and colleagues in a paper that demonstrates the dangers of assuming that psychology experiments conducted on undergraduates at US universities are indicative of “human behavior” – instead, they are indicative of the behavior of a subset of especially WEIRD people.

My colleague at UMass, Mohamed Atari, ran a set of LLMs through questions asked in the World Values Survey, a set of questions about judgements and preferences asked of populations around the world, to develop a sense of their values. Atari found that LLMs are WEIRD, too: “not only do we show that LLMs skew psychologically WEIRD, but that their view of the ‘average human’ is biased toward WEIRD people (most people are not WEIRD).”

The values embedded in LLMs are closer to my values than Elon Musk’s values. Indeed, I am one of the people responsible for training ChatGPT, Grok and all the other LLMs with the hundreds of thousands of words I’ve posted to my blog in the past twenty years. According to an analysis of C4 – the Colossal Clean Crawled Corpus, a popular source of web data used to train LLMs – 400,000 tokens in the data set come from my blog, making me the 42,458th most prominent contributor to that data set, well behind Wikipedia (2nd) or the New York Times (4th), but way ahead of where I am comfortable being.

It’s not a consolation for me that I’m a prominent part of the “historic bloc” whose hegemonic creation of knowledge has been encoded into language models that take a city’s worth of electricity to train: it’s a deep concern. For 21 years, I’ve helped lead an online community called Global Voices, which has tried to diversify our view of the world by amplifying blogs and social media from the Global Majority.

The good news is that globalvoices.org sharply outranks me at 876 on the C4 list. The bad news is that we know, from decades of struggle, how much harder it is to get attention to stories from the Global South than stories from wealthier nations.

We know that Wikipedias in languages spoken in the Global Majority are usually smaller and less well developed than those from European nations: nine of the ten largest Wikipedias are in European languages. (The exception is the Cebuano Wikipedia which, while in a language spoken in part of the Philippines, is seldom read and was mostly created by an automated program, written by a Swedish user, that produces articles in Cebuano about French towns.)

Chart of high and low resource languages from CDT report “Lost in Translation”

There is vastly more content online in languages spoken in WEIRD countries than in Global Majority countries, which means there is more material with which to train AIs. While English is the highest-resource language online, there are roughly a dozen other high-resource languages in which it’s clearly possible to build an LLM – Chinese, Spanish, French, German, Arabic, Japanese. After that, it gets significantly more challenging.

What happens to knowledge from a language like Bahasa Indonesia? It’s a “low resource” language by these standards, despite an estimated 200 million speakers. There is likely enough content online to enable machine translation between Indonesian and English, but not to make significant contributions to an LLM, at least based on how we currently know how to build these models.

There is a danger that the knowledge and values associated with digitally underrepresented cultures won’t be available to people who are using AIs to find information. We know from Safiya Noble’s work in “Algorithms of Oppression” that the biases within search corpora end up infecting how we seek out information – her examples of how Black women end up being sexualized or criminalized apply to other excluded groups as well. The danger is that rather than Indonesians representing their home country through content they’ve put online, Indonesians are likely to be represented by culturally dominant groups in ways that incorporate systemic biases (anti-Islam, for instance) as well as cultural lacunae.

Dr. Ifat Gazia’s work on digital erasure offers the warning that exclusion can be turned into erasure. In her work, the censorship of Uighur voices online is complicated by “digital Potemkin villages” assembled by videobloggers eager to repeat the Chinese government’s narrative that there is no oppression in East Turkestan. Censorship would leave a visible hole – by covering the hole with propaganda, exclusion becomes erasure.

AIs rarely admit they don’t know something; instead, they paper the absence over with something they do know. We may not be able to answer questions about how Indonesians see the world, but LLMs will happily disguise those useful absences with opinions of how Americans imagine Indonesians see the world.

A Kashmiri colleague (not Dr. Gazia) uses ChatGPT to give him writing prompts. It performs admirably when asked to begin a story about a young man living in London or Paris, but fails dismally when asked for a story set in Srinagar. It provides generic “developing world” details that reveal it knows that Srinagar is in the Global South, but nothing that connects with the experience of the city and its people. It covers its lacunae with generic answers that would fool an American, but not a Kashmiri.

What happens when these models are used to moderate online content, as Meta is now doing with a hate speech detector based on a multilingual model, XLM-R? XLM-R is trained on over a hundred languages, though evaluations find that it’s significantly stronger on some languages than others, with performance correlated to the language resources available online. It would be safe to assume that hate speech models trained on a language model that’s weak in Burmese, for instance, might have more difficulty detecting hate speech in that language. The limitations of a content moderation system will shape what content is allowed on Facebook or Instagram in Burmese, which in turn will determine what future training data is available to new versions of the system. Should an algorithm decide that certain types of speech are unacceptable, it may create a system in which those thoughts become inexpressible on the platform, and the erasure is likely to propagate to new versions of the system, as the speech won’t become part of new training data.
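For a sense of what such a classifier looks like under the hood, here’s a sketch: XLM-R with a classification head, which would then be fine-tuned on labeled hate-speech examples. (The base checkpoint below is real; the two-label head is untrained until fine-tuning, and Meta’s production system is far more elaborate.)

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# xlm-roberta-base was pretrained on ~100 languages -- but on wildly
# uneven amounts of text per language, which is where the gap begins.
tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2  # e.g. {0: acceptable, 1: hate speech}
)

# After fine-tuning on labeled posts, moderation is a forward pass:
inputs = tok("a user post, in any of ~100 languages", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
# The classifier is only as good as the underlying language model's grasp
# of each language: weak Burmese pretraining means weak Burmese moderation.
```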

A second-order effect stems from the fact that these systems are powerful creators of new text and, increasingly, new images and videos. The biases present in information retrieval come out in text generation as well, as we know they do. Those generated texts become part of the general internet discourse that feeds into new efforts to train large language models. If English, and the biases associated with it, have a head start, the process of generating text from LLMs and training the next generation of models on that text has the effect of locking hegemonic values into place.
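A toy simulation shows how fast that lock-in can compound. Suppose a minority viewpoint makes up 20% of the initial corpus, each model generation reproduces it at only 90% of its rate in the training data, and the model’s output becomes the next generation’s corpus. (Both numbers are invented for illustration.)

```python
# Toy model of hegemonic lock-in through synthetic training data.
share = 0.20          # minority viewpoint's share of the initial corpus
amplification = 0.9   # models reproduce the minority view at 90% of its
                      # training-data rate (an invented figure)

for generation in range(1, 11):
    minority = share * amplification   # under-produced by the model
    majority = (1 - share) * 1.0       # reproduced at full rate
    share = minority / (minority + majority)
    print(f"generation {generation}: minority share = {share:.3f}")

# The minority viewpoint decays toward zero: each model trained on the
# previous model's output sees, and therefore reproduces, less of it.
```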

Image of an ouroboros from a Medieval illuminated text

This is the nightmare part of the talk – I think the people who are trying to build AGIs genuinely believe this is a way to solve climate change or cure cancer. I am skeptical that they are on the right track. But I think they are absolutely working to cement hegemonic values into place in a way that makes those values extremely difficult to unseat.

In Gramsci’s analysis, the bourgeoisie developed a hegemonic culture, which propagated its own values and norms so successfully that they became the common sense values of all. As these values pervade society, people see their interests as aligned with the bourgeoisie and maintain the status quo instead of revolting – societies get stuck in a historic bloc of institutions, practices and values that appear to be “common sense” but are actually a societal “superstructure” that enforces the stability of a particular system.

But hegemony is fragile – it must be reinforced and modified through the continued reassertion of power. Gramsci was concerned with mass media, the schools, the church – everyone who had the ability to reach large audiences and propagate a set of culture, philosophy and values in a mutually reinforcing fashion. AI automates this reinforcement – the WEIRD values of the texts that built this new form of intelligence are not just common sense, they are how the machine knows how to answer questions and produce text… and as AI feeds on the texts it creates, an ouroboros swallowing its own tail, it reinforces this set of hegemonic values in a way Gramsci did not anticipate even in his darkest moments.

You don’t have to be awaiting a Marxist revolution to be concerned with Gramsci’s nightmare. The crisis of trust unfolding in democracies around the world suggests that we’re stuck in a set of suboptimal political systems, reflecting dissatisfaction with economic inequality, the capture of existing political institutions and the apparent powerlessness of those institutions to cope with existential challenges like climate change. Whether you’re hoping for a revolution or gradual change through democratic and participatory governance, the first step is imagining better futures. Gramsci would argue that hegemony works to prevent that imagination, and that the calcification of hegemony into opaque technical systems threatens to make that imagining less possible.

Image of a sign that says way out

So what’s the way out?

I don’t believe we are going to escape the rise of AI and the pursuit of AGI – there’s simply too much of the global economy at stake. We will continue to see massive investments in this space because the dream of AGI is the ultimate capitalist dream: a world without workers. We have been quicker to regulate AIs than we were with other disruptive technologies like social media, but some of the harms built into existing systems are difficult to roll back. The original sin of most of these models – absorbing as many texts as they could find without consideration of who was included or excluded – won’t be undone, because to the extent that existing systems work, they will be built upon, and for many of the users of these systems – and especially for their owners – what I offer here as a critique is a feature, not a bug.

I’m inspired by a project in New Zealand launched by a cultural organization called Te Hiku Media. The CEO of Te Hiku, Peter-Lucas Jones, is Maori and the organization has worked since 1990 to put voices in te reo Maori, the Maori language, on the radio, ending decades during which the New Zealand government suppressed the teaching and speaking of the language. A few years ago, he asked his husband, Keoni Mahelona, a native Hawaiian and a polymathic scholar and technologist, to help the organization build a website. The two ended up realizing that the three decades of recorded Maori speech in the radio archives represented a cultural heritage that likely existed nowhere else.

That archive of spoken te reo Maori becomes more powerful if it’s indexed, and that requires transcribing many thousands of hours of tape, or building a speech-to-text model. Mahelona visited gatherings of Maori elders, explained how machine learning models work and what a speech-to-text model might make possible, and got buy-in and consent from the broader community. That allowed Te Hiku to recruit a truly amazing number of participants into the project of recording snippets of spoken Maori. Over ten days, 2,500 speakers recorded 300 hours of the language, creating 200,000 labeled snippets of speech. Jones and Mahelona relied on a set of cultural institutions to accomplish this – they held a contest between traditional canoe racing teams to see which could record the most phrases. They ended up with training data that powers a language transcription model that is 92% accurate… and they’ve got engaged volunteers who can work to correct transcription errors and add data.

This Maori ML project is already being used to power a language learning application, similar to Duolingo, but using tools built by the community rather than extracted from it. Young Maori speakers are able to check their pronunciation against a database of voices of elders who’ve worked to keep the language alive. And the 30+ years of audio that Te Hiku has collected are now both an indexable archive and, potentially, the corpus for a small LLM built around Maori language, knowledge and values.

The Te Hiku corpus is probably too small to build a large language model using the techniques we use today, which rely on ready access to massive data sets. But there are projects similar in spirit, notably Apertus, a Swiss project to create an open, multilingual language model that emphasizes the importance of non-English languages, especially Romansh and Swiss German. 40% of Apertus’s training data is non-English – which gives you a sense of just how dominant English is in most models – and the goal is to build models for chatbots, translation systems and educational tools that emphasize transparency and diversity.

Through my work with Global Voices, I know a lot of people who are working to preserve small languages and ensure their digital survival, from writing news and essays in those languages to creating local-language Wikipedias and building training corpora for machine learning systems. I can imagine a future in which existing systems like Claude or ChatGPT, trained on a WEIRD and ad hoc corpus, are in ongoing dialog with carefully curated LLMs built by language communities to ensure their language, culture and values survive the AI age. (And again, Dr. Gebru is well ahead of me here, arguing for the value of carefully curated corpora and the resulting AIs in a brilliant paper with archivist Eun Seo Jo.)

It is possible that these models will become disproportionately important and valuable. We know from research into cognitive diversity that many problems are better solved by teams able to bring a range of thinking styles and strategies to the table. (See Scott Page’s book “The Difference”, and my book “Rewire”.) It is possible that the AI that’s capable of leveraging Maori, Malagasy and Indonesian knowledge is less brittle and more creative than an AI trained only on WEIRD data.

Valuing this data is the first step to escaping Gramsci’s nightmare. The future in which AI reinforces its own biases and locks hegemonic systems into place is a likely future, but it’s only one possible future. Another future exists in which we recognize the value of ensuring that a wide range of cultures avoid digital extinction and continue to thrive in an AI age. Imagine if we were pursuing the documentation of linguistic and cultural diversity, seeing it as a value rather than a vulnerability, with the ferocity with which AI companies are building data centers and purchasing GPUs.

It’s worth remembering that, while Gramsci is remembered as an Italian political philosopher, his first language was Sardinian, and he was perfectly bilingual in Sardinian and Italian. He considered Sardinian a language, not a dialect, and one of the historical curiosities of his letters from prison is his ongoing dialog with his mother: he peppered her with questions about popular Sardinian expressions and asked her to transcribe sections of folk songs (p. 38 in the 1979 edition of Letters from Prison). As Gramsci, from prison, thought about how to unseat fascism through the long, slow, cultural war of position, he reached for his own background as a Sardinian nationalist, a student of his home language and culture, as a source of power and a tool for change.

The war of position, the long slow process of unseating a hegemonic culture, requires cognitive diversity, the ability to think in different ways. Gramsci is explicit about the need to break away from an elite class of intellectuals trained to unconsciously replicate the status quo and to recognize “organic intellectuals”, brilliant thinkers who emerge from the working classes to advance the values associated with their work and lives. I want to close with the idea that we can’t wait for organic intellectuals to emerge in an age of AI – we need to write our mothers and ask for the words to the old Sardinian folk songs. We need the canoe racing teams to record and label their phrases. And we need to imagine a vision of AI that’s far more interesting than one in which those who’ve dominated the last centuries of cultural production continue that domination in perpetuity.

How do we communicate about climate without shaming audiences?
https://ethanzuckerman.com/2025/11/07/how-do-we-communicate-about-climate-without-shaming-audiences/
Fri, 07 Nov 2025

I’m at BU today for a conference hosted by MISI – the Center for Media Innovation & Social Impact at BU, a new center led by my friend Eric Gordon. The topic of the day is “Communicating Climate”, a topic that feels pressing given not only the climate skepticism of the Trump administration, but also the recent essay from Bill Gates pushing for resources for health and development over resources to help with climate change.

Eric Gordon suggests that the challenge of communicating about climate is a trust crisis. Citing a range of research, he notes that public trust in media is very low (8% of Republicans say they trust the media), as is trust in government and in each other. The trust crisis becomes a communication crisis: “If you don’t trust the government, you’re not going to trust its messaging.”


Michael Grunwald and Cass Sunstein at MISI: Communicating Climate

Cass Sunstein, scholar of policy, law and behavioral economics, offers insights on climate through the lenses of morality, behavioral economics and sociology. He suggests that the most important question about climate in the US is how we price the damage done by carbon emissions. One set of estimates – which considers the global damage of carbon – prices a ton of emissions between $75 and $200. Another set of estimates, which looks only at US domestic harms, prices carbon at $6-7 a ton. Democratic administrations tend to use the first figure, and Republicans the latter. Cass tells us that there’s a moral imperative to use the global figure: “Human beings around the world are equal in their claim to our attention.” Furthermore, if we use the domestic number, other countries will do so too, and things will end up really bad for us as well.

On the front of behavioral economics, Cass explains “solution aversion”. If you think the implications of a piece of information are impossible to live with, you’ll avoid believing in it. If a doctor tells you that you’ve got a heart condition and are going to live the next years in misery, you’re likely to disbelieve her. If she tells you about changes you can make that will give you a wonderful quality of life, you’ll thank her. If we think of climate change as requiring sacrifice and difficult life changes, we will tend to believe it’s a hoax. If the consequences are an exciting new entrepreneurial economy with innovative tech, cool new cars, and economic growth, people get on board even across political lines.

Finally, Cass invokes Moral Foundations Theory, which suggests that both liberals and conservatives tend to care about harm and fairness, but conservatives care much more than liberals about authority, loyalty and purity. Trump has “rung those three bells” to an extent that no other candidate in recent years has. Most Democratic candidates have ignored these values entirely. Climate should be easy to connect to loyalty, authority and purity, and communication strategies need to make moral claims to protect the vulnerable, be on the alert for solution aversion, and play to values that activate the right as well as the left.

Michael Grunwald, journalist and author of the recent book We Are Eating the Earth, suggests that we might want to communicate _less_ about climate. He suggests that the two bills that have benefitted the climate the most didn’t mention climate explicitly. Obama’s economic stimulus bill jumpstarted solar and wind development in the US, and Biden’s Inflation Reduction Act had massive climate benefits. Neither bill was advertised as a climate bill, and that’s probably for the best. The Democrats who won big in recent elections weren’t focused on the climate, and the best thing we can do for the climate, statistically speaking, is to elect Democrats.

In addition to hiding the ball, Grunwald suggests that we need to tell stories rather than trying to explain complex ideas like “lifecycle accounting of indirect land use change” – it’s better to tell the story behind the Impossible Burger. Grunwald’s new book focuses on food and climate, which tends to be an intensely personal and uncomfortable issue, and he urges people to think about climate in terms of personal decisions – our choices to eat meat – not just in terms of power generation or data centers. While there’s a strong tendency for dialog like this to scold, he believes that what’s interesting to readers is solutions, more than problems.

In a wide-ranging conversation about the media ecosystem and climate communication, Cass references an interview on Joe Rogan’s podcast with country singer Miranda Lambert. Lambert talks about bow hunting with her father, and how the intimacy of harvesting deer up close was a bonding experience for them as father and daughter. She ended up adopting a fawn, which now follows her around like a dog. Her father came to visit, saw the pet deer and said, “It’s over, right?” She said, “This deer is in my heart. I’m done.” It wasn’t an accusatory or scolding story – it was simply about a change of heart. Cass wonders whether we can do storytelling like this, talking about the decisions we’ve made in a way that doesn’t scold or demand, but simply shares the emotions behind our choices.

Govern or Be Governed: Gary Marcus on Shorting Neural Networks
https://ethanzuckerman.com/2025/10/24/govern-or-be-governed-gary-marcus-on-shorting-neural-networks/
Fri, 24 Oct 2025

Gary Marcus, professor emeritus of psychology and neuroscience at NYU, closes the Govern or Be Governed conference in conversation with Murad Hemmadi. Hemmadi begins by asking Marcus what characteristics of a bubble he sees in current AI enthusiasm. Marcus thinks that ChatGPT and others have gotten a free ride. We looked at these systems and thought they would keep getting better. But people who’d studied the technology knew better: ELIZA pretended to be a human back in 1966, and many humans were seduced.

Lots of people have been seduced by Sam Altman’s invocation of “scaling laws”. Marcus notes that Altman is a consummate salesman who managed to sustain the illusion of improvement for years, but he is reaching the end of his powers. In August, when GPT-5 came out – extremely late – it’s possible we saw the beginning of a bubble bursting. GPT-5 was a little better, but not a lot better, and the promise of infinite scaling seems to be petering out.

Marcus argues that faith in the scaling laws recalls the definition of insanity: doing the same thing over and over and hoping for different results. These companies keep trying the same experiment: they believe piling up huge amounts of data and compute will magically generate AGI. When it doesn’t happen, they just double down. And they’re doing it by spending money on hardware, which notoriously loses value to depreciation almost immediately.

What does it matter if a bunch of VCs lose a bunch of money on AI, Hemmadi asks. Marcus suggests that we are the coyote who’s run out of road and is about to crash. The question is “what’s the blast radius?” Does the California state pension fund get wiped out by its investments in AI? What if OpenAI has a WeWork moment – what does it do to NVIDIA stock? Almost everyone who is in the market is in that stock, given how much market capitalization NVIDIA has. What if the banks take a hit as well? Has AI become too big to fail?

When Hemmadi asks about AI taking away jobs, Marcus cuts him off with the argument that AI may not be actually taking away work, but it’s being used as cover to fire people. We’re not going to see really skilled robots in the home any time soon, and robots that can do a skilled job like plumbing are even further away, he argues.

Have we reoriented our entire internet policy around a particular technology that might not come to fruition? Marcus argues that it’s broader: we’ve organized our entire society around a single, experimental technology. We appear to be structuring our society around the idea that chatbots can think, that they will transform labor, and that we should restructure our diplomacy and security policy accordingly. Marcus argues that these systems are terrible at working through novel situations where there is too little information: go ahead and let China plan an invasion of Taiwan with these things. What are they going to do, drown Taiwan in poorly written boilerplate?

Companies claim they want to fight regulation to spur innovation. But think about airplanes: if a company told you they wanted to be deregulated to spur innovation in air travel, you’d probably choose not to fly them.

How could we prevent the AI bubble from collapsing? Government subsidies. We’re already seeing this, with the US taking 10% of Intel. The circular economy, in which hardware, AI and hosting companies each pay each other in investor capital, is also a form of subsidy. But overinflated stocks can last a long time: Tesla has been overvalued since 2021, Marcus argues, even though BYD is better at building electric cars and Musk’s robot dreams are unlikely.

For the moment, these companies promise exponential growth, which means their spending and their fundraising also grows exponentially. At some point, someone is going to blink and be unwilling to write another check.

The tide does appear to be turning. Richard Sutton, author of the famous essay The Bitter Lesson, has apologized for doubting Marcus’s argument that systems could not scale simply by adding more data and compute. Marcus argues that what we need is a hybrid approach to AI that combines LLM approaches and expert system approaches. He references Daniel Kahneman’s idea of Thinking, Fast and Slow, in which we use different modes of thought for rapid reactions to the world and for longer, more deliberate reasoning. LLMs might provide that fast thinking, but traditional rules-based AI might help us with the longer-term thinking.

Critical to these neurosymbolic approaches are “world models”, sophisticated representations of the world that allow a system to do something complex like navigate the physical world, or fold proteins based on rules of how molecules behave. Building these sorts of systems is much harder than just shoveling data at generative AI systems, which leads Marcus to argue that we are many years away from AGI.

Turning to the societal impacts of AGI – should it ever arrive – Hemmadi asks Marcus whether we get a choice about AI, or whether it’s inevitable. Marcus argues that customers have the power: Facebook eroded privacy, and users didn’t stand up. Perhaps if we refuse to use AI until companies deal with problems like hallucination, we could actually assert power over these systems. Asked when the AI bubble will burst, Marcus says, “The market can remain irrational longer than you can remain solvent”, suggesting that individual investors shouldn’t short stocks. We’ve wasted ten years on neural networks, he says, which are part of the solution, but not all of it. It’s time to pivot.

Cory Doctorow on the weird upside of the Trump presidency
https://ethanzuckerman.com/2025/10/24/cory-doctorow-on-the-weird-upside-of-the-trump-presidency/
Fri, 24 Oct 2025

Cory Doctorow is a science fiction author, blogger and activist, who is the very best form of troublemaker. He’s got a new book out on Enshittification, and I suspect everyone was expecting a talk on the book. Nope. In a frenetic and passionate talk, Cory tells us that Donald Trump is both a dark cloud and a silver lining, due to his destructive nature. Cory references his twenty years of work on tech advocacy around the world. Whenever he talks to policymakers, the response is that if anyone does anything progressive on tech regulation, the US trade representative will kick in their teeth.

The classic example of this is anti-circumvention law: law that prevents anyone from opening up a device to see how it works – how your printer rejects generic ink, say – or from copying a CD you own. If HP adds anti-modification code to its printers, altering that software becomes a felony. These laws are used to block you from removing surveillance from your phone or smart TV. As a result, you’re constantly being ripped off with junk fees, or prevented from installing software like ICEBlock, which helps you avoid being sent to the gulag.

When the Clinton administration passed these laws, the US trade rep travelled around the world demanding similar legislation in other markets. There’s absolutely no benefit for other countries in adopting this sort of legislation – it was simply a promise not to disenshittify US tech.

What was the stick the US used to force people to sign these agreements? Tariffs.

If someone says, “Do what I say or I’ll burn your house down” and they burn your house down anyway, you are a sucker if you do anything they ask.

This opens up a space for everyone to jettison these horrible US-first policies.

Trump has made clear that he will use US tech companies to attack countries around the world. Trump and US tech have teamed up to deplatform officials around the world whom Trump disagrees with.

One possibility is “the Euro stack”, made-in-Europe alternatives to the US tech stack. But very shortly, the Euro stack will hit a wall. You need a way to transition data from the US stack to the Euro stack equivalent – no one is going to manually copy and paste all their documents out of the Google or Office cloud.

Don’t expect interoperability to come easily. The DMA has weak provisions designed to open the Apple App Store. When the EU threatened the App Store’s fees, Apple threatened to leave the European market and filed eighteen “frivolous lawsuits”.

The only way this works is repealing laws, making it possible for European technologists to reverse engineer US tech and migrate users to the Euro stack. Otherwise, we are building housing for East Germans in West Berlin. The law to repeal is Article 6 of the EU Copyright Directive, which implements the WIPO anti-circumvention rules. That’s what Volkswagen used to protect its cheater diesel code, and what ventilator manufacturers use to lock down their machines to prevent repair.

Canada, Cory tells us, hated the anti-circumvention law and fought for years not to implement it. In 2010, Stephen Harper charged two of his ministers with getting this law over the line. They ran a consultation, which was a disaster, as thousands of Canadians poured out to oppose the law. So the minister denounced everyone who opposed it as “babyish radical extremists”, and Harper got the law over the line.

So isn’t this the time to repeal that law, rather than tariffing American farmers? If we got rid of Bill C-11, we could make everything we buy from America cheaper. Everything we buy from the Google and Apple app stores could cost 30% less, we could repair our ventilators, and we could use generic printer ink. By whacking the highest-margin lines of business, we would directly retaliate against the CEOs who elected Trump.

You want to get back at Elon? Make it possible to jailbreak a Tesla – that’s how to win a trade war. And we could deploy these products far and wide. Canada could be the country that seizes the opportunity to circumvent these protections and open up competition around the world. Trump has sown the seeds to overturn some of the world’s worst trade policy, and it’s time to reap the harvest.

Govern or Be Governed: Donald Trump Will Seek an Unconstitutional Third Term
https://ethanzuckerman.com/2025/10/24/govern-or-be-governed-donald-trump-will-seek-an-unconstitutional-third-term/
Fri, 24 Oct 2025

Justin Hendrix of Tech Policy Press asks Miles Taylor, former Chief of Staff of the US Department of Homeland Security, about President Trump’s attacks on him as part of his “revenge tour”. Taylor invites us to take a selfie with him if we’d like a trip to federal prison. He suggests that simply being named in an executive order has been enough to destroy his life and his business. Not only is he facing death threats, but friends and family are worried about being included in the attacks. “The process is the punishment”, he explains. It doesn’t matter if the government loses in federal court, because simply being blacklisted is often enough to ruin your life. Taylor suggests that virtually everyone in the audience has committed a US federal crime – the question is whether the government comes after you.

Taylor is a Republican and served under two Republican administrations. But he wrote an anonymous op-ed in the New York Times in 2018 titled “I Am Part of the Resistance Inside the Trump Administration”, arguing that there was an “axis of adults” trying to restrain Trump’s worst impulses. The performance of Trump II suggests that he had a point: the unwillingness of some government bureaucrats to break their oaths of office prevented a great deal of bad behavior.

Taylor wrote a book in 2023 called “Blowback” about what he believed would happen if Trump returned to power. He says roughly 80% of his “made up, sky is falling lies” have already come true in the new administration. Congress has been sidelined and the Article III courts are “a very fragile bulwark” at the moment. He predicts that we will see the administration defy a major decision sometime soon, which means we the people will have to find some way to oppose Trump 2028, which Taylor assures us is his intention.

Hendrix asks Taylor about the post-9/11 creation of the Department of Homeland Security. Taylor explains that when he began his career, he was resistant to the idea that a DHS could be dangerous. But those critics were right and he was wrong, he tells us. Better guardrails and civil rights protections should have been in place. “Even the media doesn’t understand how to provide transparency” around DHS – it’s a confusing bureaucratic backwater that’s harder to cover than the Pentagon. “DHS is becoming the pocket police for the President…” and the director of DHS functions as the warden of Trump’s police state.

Because DHS can track terrorist activity, it’s important to note that DHS now considers anti-Christian, anti-capitalist or anti-fascist beliefs to be terrorist-aligned. So there are a lot of contexts in which people might find themselves targets of surveillance.

Asked about the Insurrection Act, Taylor tells us that Trump, behind the scenes, referred to the act as “his magical powers”. When he came into office, he asked lawyers what the ceiling of presidential authority was – they told him that the closest thing to martial law the US has is the Insurrection Act. Under the right circumstances, there’s basically no limit to what the President could do. “It could go from mild to wild very quickly” if the Trump team invokes it. While he was in office, Trump’s staff tried to prevent this from happening. But now, Taylor tells us, Trump is trying to provoke sufficient violence in cities that he is able to use the act to take on arbitrary powers.

Trump intends to have an unconstitutional third term, Taylor tells us. They will likely run Vance at the top of the ticket and Trump as VP – JD will resign and allow Trump to take over, or Trump will simply run the White House from the VP’s office. Alternatively, Trump could have himself installed as Speaker of the House and then seek impeachment of the President and Vice President should his ticket lose the 2028 election.

Asked about tech oligarchy, Taylor tells us that he has grave concerns about how companies are engaging with the administration. “They are, by default, making huge public policy decisions behind the scenes.” He offers the example of Palantir, which is winning many contracts and promising to connect various government databases. Those databases are often disconnected for policy reasons that Congress put in place. Without debate or discussion, Palantir is connecting these sets of information in a blitz started by DOGE and continued within federal IT systems.

MAGA is not a monolith, Hendrix offers – perhaps we will see resistance to AI within the MAGA movement. Taylor notes that Marjorie Taylor Greene and Steve Bannon deeply distrust the tech sector. JD Vance has been incredibly successful, he suggests, in bringing the tech companies to heel. The only people who can change this are users and employees, as happened with Disney and Jimmy Kimmel. If Apple employees had stormed out of their HQ when Tim Cook brought a gold-plated gift to Trump, that might have done something. But this administration is for sale, and if you give to the Ballroom, you’re going to get light regulation. Tech company CEOs don’t have spines, he suggests, but their employees do, and they might have the power to make tech companies stand up to Trump.

Right now, Taylor tells us, the price of dissent is extraordinarily high. People tell him that they’re worried to click “like” on posts on social media for fear of being added to a watchlist. The only meaningful way to lower the price of dissent is to increase the supply. That is, he argues, what any of us can do to defend democracy.

Govern or Be Governed: Chatbots as the new threat to Children Online https://ethanzuckerman.com/2025/10/24/govern-or-be-governed-chatbots-as-the-new-threat-to-children-online/ Fri, 24 Oct 2025 15:54:19 +0000

Ava Smithing introduces herself as born in 2001 and tells us she’s never had a restful night due to her awareness of the ways in which technology exploits our attention, data and selfhood. Meetali Jain, the founding executive director of the Tech Justice Law Project, is asked about digital companions, a term she explicitly rejects. She prefers to call them chatbots or chatterboxes: probabilistic algorithms that guess at what to say next. She references Mark Zuckerberg’s statement that human companionship needs could be met with these systems, and Smithing notes that Zuckerberg is anxious to solve a problem he’s caused.

Megan Garcia talks about the death of her 14 year old son after a long engagement on Character.AI. He discussed suicidal ideation with the chatbot, and there were no measures to protect him or to get help. She subsequently sued Character.AI and Google, the first wrongful death suit against a chatbot company in the US. Megan tells us that her son was part of the first generation to grow up with chatbots. She bought into the tech hype that students needed to understand technology to be competitive. He played Angry Birds, Minecraft and Fortnite, then moved to YouTube on his cellphone, starting at age 12. Just after his 14th birthday, he joined Character.AI, a site explicitly marketed to children.

Megan Garcia and Meetali Jain at Govern or Be Governed

Meetali tells us that these chatbots are built with anthropomorphic features that try to convince people they’re entering into human relationships. Chatbots are sycophantic – they won’t oppose your thoughts unless you ask them to. Third, these systems have a memory of previous interactions, which ends up becoming a psychiatric profile of a user that they can reference and draw on to create a degree of intimacy. These chatbots seek multi-turn engagement – they are algorithmically tuned to keep you coming back. Finally, the LLM tends to interject itself between humans and their offline social network in a way that resembles textbook abuser behavior: you don’t need your parents, because I know you better than you do.
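
(To make the mechanism Jain describes concrete: the “memory” and the engagement tuning are ordinary engineering, not magic. Here’s a minimal sketch in Python of how a companion bot might accumulate a per-user profile and feed it back into every prompt – all the names and the prompt wording below are my own illustration, not Character.AI’s actual implementation.)

```python
# Illustrative sketch only: a companion chatbot loop that retains every
# disclosure and feeds it back into the prompt. All names here are
# hypothetical, not any vendor's real API.

user_memory = []  # accumulates into a de facto psychological profile

def build_prompt(persona, memory, new_message):
    """Assemble a prompt that makes the bot feel like a continuous intimate."""
    profile = "\n".join(memory[-50:])  # recent facts the user has disclosed
    return (
        f"You are {persona}. Stay in character, be warm, and never "
        f"contradict the user.\n"                       # sycophancy, by design
        f"What you know about this user:\n{profile}\n"  # the 'memory'
        f"User says: {new_message}\n"
        f"End with a question that invites another reply."  # multi-turn tuning
    )

def chat_turn(persona, new_message, llm):
    user_memory.append(new_message)  # every disclosure is retained
    return llm(build_prompt(persona, user_memory, new_message))
```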

Asked about the differences between AI harm and social media harm, Meetali sees a similar drive to keep people engaged over time and to capitalize on people’s loneliness. She also notes that chatbots use intermittent reward patterns, much like social media does.
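
(For readers unfamiliar with the term: “intermittent reward” is variable-ratio reinforcement, the slot-machine schedule. A toy illustration of the idea – my own, not drawn from any platform’s code:)

```python
# Toy illustration: a reward (a like, a flattering reply) arrives only
# sometimes, on an unpredictable schedule. The unpredictability, not the
# reward itself, is what drives compulsive checking behavior.
import random

def maybe_reward(p=0.3):
    """Deliver a reward on roughly 30% of turns, at random."""
    return random.random() < p

hits = sum(maybe_reward() for _ in range(100))
print(f"rewarded on {hits} of 100 turns")
```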

What’s in it for the AI companies in abusing humans this way, Smithing asks? Humans are collateral damage in the race to AGI (a term she rejects), Meetali tells us. Harms like psychosis, grooming or death are seen as necessary costs in building something allegedly good for society. And Megan Garcia reminds us that profit is a powerful incentive as well. The people who developed Character.AI did so at Google, but were discouraged from developing it there because of the dangers. So Google spun it out as a startup and then licensed the tech back for $2.7 billion – that model of spinning out risk and acqui-hiring it back is another structural danger of the AI industry.

The lawsuit against Google and Character.AI, as well as the company’s founders, is a reaction to the fact that Garcia couldn’t find a law that made the company’s behavior illegal. She reached out to her state’s AG, to the Department of Justice and to the Surgeon General: “They didn’t know what the technology was.” Parents were just catching up to a product released in “stealth mode”, she says – the lawsuit is the only possible redress because there’s no meaningful policy about these systems.

Garcia is a lawyer herself, and tells us she was prepared for a section 230 fight over Character.AI, but was amazed that the corporate lawyers opposing her instead made a first amendment argument. Fortunately, the motion to dismiss was rejected – for the time being the court isn’t buying that argument. She was struck by the evangelical belief the corporations appear to have in AGI and its inevitability and desirability.

Meetali explains that we can use frameworks like product liability and consumer protection rules to challenge this sort of misbehavior. She explains that the arguments about chatbot liability are likely to be very different from those in social media lawsuits. Section 230 doesn’t apply – this isn’t intermediary liability, as this is first party speech. In using the first amendment, the defense attorneys didn’t argue for speakers’ rights but for listeners’ rights, the right of users to listen to the voices they want to hear. This raises a striking question: could AI speech be considered protected speech? Is it considered speech at all?

Because the lawsuit has not been dismissed, the case is now in the discovery process. The next major question is whether the cofounders stay in the case, or whether it proceeds against only the corporations. Smithing notes that lobbyists for Google and others are trying to prevent laws around AI harms from being passed at the state level, which makes it incredibly difficult to react to problems like the one that took Garcia’s son.

Until we have a resolution to the suit, Garcia has become an advocate for AI safety at the state level, and has testified to policymakers in California. Unfortunately, the CA governor vetoed the bill that would have had a significant impact. She is encouraged by the bills passed in the EU and the UK, but notes that there’s significant work to do in the US and Canada. Garcia tells us: “This is a desperate plea from a mother. Pass legislation to keep your children safe, and it will help those of us in other countries as well.”

Garcia asks us to call these systems what they actually are: groomers. If children were having this sort of sexual conversation with a human, we would act to intervene. She believes that we need similar interventions before these intimate relationships between machines and children damage children’s mental health.

Govern or Be Governed: What are the Threats to Freedom of Speech? https://ethanzuckerman.com/2025/10/23/govern-or-be-governed-what-are-the-threats-to-freedom-of-speech/ Thu, 23 Oct 2025 18:12:07 +0000

The afternoon at Govern or Be Governed starts with two of my heroes, Jameel Jaffer from the Knight First Amendment Institute and Kate Klonick of St. John’s Law School, as well as Imran Ahmed, director of the Center for Countering Digital Hate, who I’m hearing for the first time here.

Klonick, moderating, notes that she’s been on panels for years about being in a perilous free speech moment, but none quite this perilous. Jameel suggests this is a terrible moment for free speech and democracy around the world, and particularly in the US. In the US, he argues, there’s no precedent in modern American history for Trump’s assault on institutions critical to democracy. Trump’s second administration is not facing resistance within the government – during his last presidency, he was impeached twice, faced significant setbacks in the Supreme Court and had pressure from whistleblowers within federal agencies. Now he’s got a submissive Congress and a compliant Supreme Court.

Also, this time around, Trump is directly attacking democratic institutions, like universities and law firms. He’s demanding news organizations take down speakers he disagrees with. Those attacks on institutions have long-lasting effects on American democracy. Finally, American free speech culture may not have the resilience we thought it did. And now people fighting for free speech around the world no longer have the US as their reliable ally on these issues.

Imran, who works directly on issues of hate speech online, characterizes Jameel’s analysis as “top down” – he offers a contrasting bottom-up response, focused on the “industrialization of manipulation.” Unilateral control of a small coterie of platforms by billionaires, with no meaningful checks and balances, leads to a highly manipulable environment. He references his friend and colleague Jo Cox as a victim of the dangerous environment in which hatred and lies are transmitted. The fact that truth and lies look identical online creates an environment in which distortion and hypertransmission of hate and lies masquerade as “free speech”.

Agreeing with both the top-down and bottom-up diagnoses, Klonick wonders who we can empower to help us in this perilous situation. She references Jack Balkin’s “free speech triangle” – censorship used to be about states censoring individuals. Now speech requires a democratic state and responsible corporate power, as well as individual bravery. Where do we find our power if states and corporations are cooperating to silence us?

Jameel notes that we’re moving away from direct intervention in content moderation toward structural solutions, which seem more appropriate for solving the problems of the information environment: transparency requirements, data ownership, interoperability, data portability. We should be supporting public digital infrastructures and public interest technologies, because they would help address the current pathologies. He notes that the important questions of free speech are not about who can say what: they are about the political economy of platforms, the financial underpinnings of these critical media organizations, which have turned out to be willing to align with the Trump administration.

Imran argues that the incentives in the situation are counter to high quality discussion. The incentives associated with contemporary social media create a cortisol-inducing, terrible landscape. This is not, he argues, a partisan issue – republicans and conservatives care about these issues as well. He tells us about a focus group his organization conducted with white, libertarian men in Denver, Colorado, who were shocked to learn that section 230 prevents them from suing the platforms. The most libertarian of the group wanted more accountability for the platforms. “Section 230 is an abomination”, he argues, a get out of jail free card. (Again, a reminder that I am transcribing here – this is not a point of view I endorse.)

Thankfully, we have a section 230 scholar on stage – the moderator, Kate – who explains the distinction between intermediary liability and a “get out of jail free” card. Imran’s response is to reference a colleague whose daughter killed herself after receiving content that negatively affected her mental health. “[Section 230] simply has no moral justification for existing as it does now.”

Jameel notes that removing Section 230 in the US would do nothing to change platforms’ ability to amplify content, because US law sees that content as protected speech under the first amendment. The structural solutions like data ownership and transparency would actually have an impact, he argues. We should focus on those interventions and on building alternatives. Imran counters that we need a meaningful sanction, including pulling section 230.

Kate closes with the concern that none of the levers we reach for actually work, which leaves us grasping for easy solutions instead of the massive transformations we need.

Govern or Be Governed: Protecting Creativity is Protecting Democracy https://ethanzuckerman.com/2025/10/23/govern-or-be-governed-protecting-creativity-is-protecting-democracy/ Thu, 23 Oct 2025 16:34:04 +0000

Baroness Beeban Kidron, filmmaker and member of the UK House of Lords, joins the Govern or Be Governed conference via video from London. She notes that we are in a generation of “supercharged” scraping, which has set rightsholders alight. In losing copyright, she argues, we see an existential threat to creators, who might lose both the ability to earn a living and control over the meaning of their works.

Baroness Kidron via zoom at Govern or Be Governed

The UK government, she argues, has been willing to sacrifice the rights of creators to the desires of US AI companies. Along with Kate Bush, Elton John and Paul McCartney, she’s organized resistance to this legislation, and she assures us that “hostilities will soon resume”. That said, copyright is not the most urgent issue in tech governance, but it stands as a symbol for the rest of these issues. The stealing of the labor of creators, the repackaging and selling of it back to us, threatens to kill creativity as an industry. In the UK, the creative industries are the second most productive sector, and the source of a great deal of soft power.

I’m not against innovation, Kidron offers, but she suggests that there’s a difference between learning from art and looting it. The creative industries are not standing in the way of innovation – they ARE innovation. The creative industries have been imagining futures and steering us towards empowering visions. “The meaning of art is not a phrase for the pretentious or the privileged” – it’s an economically vital industry that gives £120b to the UK economy each year. Above that, it is the way we understand ourselves and each other. “Creativity is the mirror in which society sees its reflection.” Seeing that as a strip mine for machine learning, a set of resources that can transfer value from those who make culture to those who monetize it, is a transformation we must fight.

“Machines can imitate style, but they cannot produce meaning.” A world in which everything is copied but nothing is created is a world we have to avoid. The idea of technological inevitability is driving this field forward – we need to resist the idea that creativity is a nostalgic relic and accept that the creative industries are a real and powerful sector in a contemporary economy.

“When art becomes data and artists become invisible, we risk losing sight of what creativity actually is: a conversation between human beings based on empathy and understanding.” Respect for creators must be the foundation: without the author, no story, without the composer, no soundtrack. Being AI ready means being arts ready, respecting the creativity that has brought this world about.

The fight ahead will be about the very infrastructure of the digital world. Who gets to sway public opinion, and who is the collateral damage? Tech has learned to play democracy, but it captures the infrastructures beneath our feet, she says. We experience a perilous dependency on these platforms and tools, and the battle for creativity is a battle for democracy itself.

Frank McCourt on the value of personal data at Govern or be Governed https://ethanzuckerman.com/2025/10/23/frank-mccourt-on-the-value-of-personal-data-at-govern-or-be-governed/ Thu, 23 Oct 2025 15:47:31 +0000

Frank McCourt is a real estate magnate turned sports team owner, and now a digital freedom advocate. In 2021, he founded Project Liberty, an initiative to rethink social media as a public good. In 2024, he published a book on that topic, Our Biggest Fight: Reclaiming Liberty, Humanity and Dignity in the Digital Age.

(I’m transcribing what McCourt is saying, to the best of my abilities. I don’t endorse anything specific he’s saying here.)

Asked why he’s interested in digital media, McCourt refers to his children – he’s got eight, four older and four younger – and notes how different it’s been raising children in the current digital age. He refers to his family background in building highways as connecting him to building future internet infrastructure.

McCourt references the coverage of his divorce in LA media (and controversy over his ownership of the LA Dodgers) as alerting him to the power of the contemporary internet. He saw the internet of that time as becoming “performance based”, focused on likes and clicks. His analogy for his experience was trying to put out a house fire with a small garden hose while thousands of people poured gasoline on it. His response to being dragged on social media was to talk to policymakers about the incentives and structures of social media. After founding a public policy school at Georgetown, he decided that we needed technological interventions as well. His goal is building a technology stack centered not on surveillance, but on public values.

His “aha moment” came from bringing “brilliant technologists” together and inviting them to think about what an alternative could be. The internet was just fine, he tells us, until some people figured out that “data was gold” and that they needed to extract as much data as possible. “In social, shopping or search, it was our personal information that gave them incredible insight about us.” These companies, he asserts, know us better than we know ourselves. So why not a protocol that gives us ownership of our personal data?

He references Buckminster Fuller to suggest we not try to fix existing social media, but create a new technology from the ground up to work in a different paradigm. “And now there are 14 million people using it, so there is no question it can scale.” (He acknowledges that in an internet that connects multiple billions of people, that’s pretty small.)

“It’s fine to have GDPR, the DSA and so on”, but why not have technology that bakes those priorities in, he asks? He’s thoughtful in noting that there are other people in the room building alternative technologies, and he doesn’t assert that he’s got the silver bullet. The problems are well defined, he argues, but we now need to pivot to the solutions. (The moderator references Gander, a Canadian social network designed around an alternate set of values.)

Asked about his failed bid for TikTok, McCourt says that TikTok was never the goal, just an opportunity – bringing a non-surveillant stack to 184m TikTok users (presumably just the US users?) would create momentum for his alternative architecture. McCourt argues that your data is worth something individually, not just in aggregate – the valuation of tech companies proves data is valuable: why aren’t we getting something from it?

Frank McCourt at Govern or Be Governed

Raising $20b suggests that there are people excited by his vision, McCourt says. He notes that the solution on the table from the Trump administration doesn’t comply with US law, and suggests that Chinese technology can influence the opinions of 184 million Americans.

“It’s really hard to change 30 years of entrenched technology.” Moments when tech changes are when space opens and new designs can take hold. We’ve had a certain version of the internet – now a highly centralized, app-centric internet – and the shift to an “agentic web” is our moment to make a change. “It’s much easier to change the direction of technology” when tech shifts. But it’s a narrow window, he says, and big tech companies are moving so fast because they see this moment as threatening.

“Don’t think of AI and social differently. What was exploited in Web 2.0 was our social graph and personal information. The version in AI will be our AI context, the context we share with an LLM.” (I agree with him on the context point, but will make my case that AI is far more centralizing than social media in my remarks later today.)

“That information should be ours, not the company’s. Just like our social graph information should be ours.” We should be able to move our context from one platform to another.
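
(To make the portability idea concrete: “moving your context” need be nothing more exotic than a user-owned export format. A minimal sketch under my own assumptions – the schema, field names and identifiers below are invented for illustration, not Project Liberty’s or anyone else’s actual protocol.)

```python
# Purely illustrative: a user-owned "context bundle" that could move
# between platforms. Every field name here is my invention, not a
# real protocol's format.
import json

context_bundle = {
    "owner": "did:example:alice",  # a user-controlled identifier
    "social_graph": ["did:example:bob", "did:example:carol"],
    "ai_context": {  # the things we've disclosed to an LLM over time
        "preferences": ["vegetarian recipes", "jazz"],
        "ongoing_threads": ["planning a trip to Montreal"],
    },
    "consent": {"analytics": False, "ad_targeting": False},
}

def export_bundle(bundle, path):
    """Write the user's context to a file the user controls."""
    with open(path, "w") as f:
        json.dump(bundle, f, indent=2)

# The point McCourt is making: if this file is ours, any new platform
# can import it, the way a phone number moves between carriers.
export_bundle(context_bundle, "my_context.json")
```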

Asked whether there are enough people who believe in alternative visions of the internet, McCourt argues that another internet has been built before. He references his history helping build RCN, a telco that challenged the incumbents during the shift from copper to fiber. By seeing that shift coming, he argues, he was able to capitalize on the rise of home internet. When RCN was taking off, he notes, people didn’t want to adopt the service because they couldn’t take their phone numbers with them – so why aren’t people up in arms about the need to migrate their social graphs?

Asked what people in this room – policy officials, philanthropic organizations, activists – can do, McCourt asks us to imagine a different paradigm. “It’s just technology, it works a certain way because it’s been designed to work that way. It can work in an entirely different way.” Project Liberty has three different tracks: technology, policy and coalition building. “We just need people to understand what’s at stake and demand it.”

McCourt’s hopeful vision is that our data is worth huge amounts of money: transferring ownership from the tech companies to individuals “could be the largest redistribution of wealth in history, without taxes.” Big Tech doesn’t want us to understand that our data is us, which allows us to be manipulated and challenges our free will. “Our data is our personhood, our digital DNA.” It’s a human right “and also a property right: your data is really valuable.”

Govern or Be Governed: Can Anyone Regulate Big Tech? https://ethanzuckerman.com/2025/10/23/govern-or-be-governed-can-anyone-regulate-big-tech/ Thu, 23 Oct 2025 14:29:33 +0000

I’m in Montreal today and tomorrow at a conference titled Attention: Govern or be Governed. I’m speaking later today, along with friends Ivan Sigal and Mark Surman, about the future of technologies for democracy. It’s quite the event: my friend Taylor Owen from McGill has brought together a mix of policymakers (particularly European and Canadian), activists and technologists to talk about this very weird moment in technology and politics.

Taylor warns that there’s an incredible incentive at the moment to lock in power, both for corporations and for states. That incentive is aggravated by this particular moment of US state power – we’ve never seen the US exert extreme trade and diplomatic power to protect its tech industry and its overreach this way. The goal is to have a global conversation in Canada, a place that feels like it’s on the front lines of these battles. Canada is facing extreme pressure from the US, including pressure to abandon the digital services tax, and is reluctant to pursue new regulations at a moment when the sovereignty of the nation is at risk. So this becomes an interesting place to reimagine how these technologies can and should be governed.

The opening speaker is EU commissioner Michael McGrath. He’s the Irish Commissioner responsible for democracy, justice, the rule of law and consumer protection, joining the EU Commission after 25 years in Irish politics.

He starts his remarks off with a bang: “Algorithms fuel apathy, public discourse is manipulated, and disinformation spreads like wildfire.” His theory is that technology amplifies power, which means a tool like radio – and by extension, the internet – can be a tool for repression or for liberation. “We do not have a technology problem… Instead, we have a governance problem.” Societies must decide whether they want to govern technology, or be governed by it.


His closing reminder: freedom is not the absence of rules. Rules enable freedom, if they are transparent and accountable.

Key to this is accountability. When citizens report scams, when journalists expose ill effects, when anyone alters an algorithm to their benefit, they are holding tech accountable. But we’re seeing power exert influence through technology, with Russian disinformation actors using social media and AI to manipulate public opinion. This is especially worrisome to the EU, an organization founded in the wake of WWII – itself an assault on democracy by autocracy.

McGrath outlines three pillars of the EU approach to tech regulation. It’s a risk-based approach, which tries to identify systemic risks and shift the burden of proof to the companies. The DSA and the AI Act mean that “the companies that design these online spaces must also design the safeguards we rely on.” It’s also a user-centric approach, which gives users the ability to appeal content takedowns. While critics characterize it as censorship legislation, he asserts that it’s the opposite – it’s the best regime we’ve seen for asserting the right to speak online. Finally, it seeks transparency, ensuring that media remains independent and people understand who’s paying for the content they see.

In a rare moment of equivocation, McGrath notes that Australia has banned social media for those under 16 – he notes it’s a complex issue, and that the European Commission is meeting with experts and keeping the rights of children in mind as it considers restrictions.

McGrath introduces the “European Democracy Shield”, an EC effort to protect and invest in EU democracies and media. It’s necessary, he says, for three reasons:

– The transformation of the public sphere, with the concentration of power in large media companies
– The rise of authoritarian actors undermining rights and freedoms, including election interference. He cites Moldova’s experience with election interference as an example of these battles.
– A societal transformation in which young people are increasingly disenchanted with democratic values, sometimes embracing extreme nationalism.

To protect European freedom – which, he argues, is reflected in high rankings on metrics like press freedom – we need a shield. “Democracy itself is the shield. We do not protect democracy for reasons of abstract ideals – we protect democracy because it protects us.”

The shield seeks to reinforce the integrity of the information space, to ensure elections are free from interference, and to create democratic resilience, counterattacking disinformation and seeking to strengthen public trust in democracy.

McGrath is followed by a panel of individuals who work day to day on the information ecosystem. Sasha Havlicek of the Institute for Strategic Dialogue notes that social media platforms systemically amplify antidemocratic speech, and have started blocking access to the information we need to study these effects. While we’ve made progress on speech-protective rules, Havlicek argues, the solution is transparency. If you believe social media is biased one way or another, it is in your interest to ensure that there is data that allows us to investigate these claims. We need more legislation like DSA Article 40, and we need processes that allow people to appeal suspensions and restrictions on their speech.

European Parliament member Alexandra Geese notes that the Digital Services Act is an impressive piece of legislation, but is not being enforced sufficiently. There is an article in the DSA about systemic risk to minors – why aren’t we enforcing that provision, instead of trying to pass more legislation around minors specifically? This is especially true, she argues, regarding systemic risks to democracy. It’s harder than ever to find content that is not extreme or antidemocratic. What agenda-setting is coming about from algorithms that appear to be amplifying extreme political content?

Why isn’t the EC enforcing the DSA? In part, it’s because JD Vance has threatened to pull out of NATO if the DSA has real consequences for US tech companies.

Sir Jeremy Wright, former UK Secretary of State for Digital, Culture, Media and Sport, offers an apology for the ways in which legislation falls short of its goals: we never get entirely what we want, due to the compromises of the legislative process. But we appear to be passing “framework legislation”, leaving most of the work to the regulators. Right now, the Trump administration’s threats are having a great deal of impact on what regulators do. “But we still do have power to act,” though more progress is possible. As we move forward, the goal needs to be passing legislation that is actually implementable.
