Megan McArdle once again joins EconTalk host Russ Roberts, this time to discuss the dangers that the left-leaning bias of Google's AI poses to speech and democracy, whether such a thing as unbiased information can exist, and how answers given without regard for social compliance create nuance and facilitate healthy debate and interaction. McArdle is a columnist for The Washington Post and the author of The Upside of Down: Why Failing Well is the Key to Success.
When the dangers of AI are discussed, what often comes to mind are apocalyptic scenarios of human subjugation or extermination, as in Terminator, 2001: A Space Odyssey, and I Have No Mouth, and I Must Scream. More realistically, some are concerned about AI's potential to destroy jobs, dilute the meaning of art, or make plagiarism easier. Meanwhile, AI chatbots and companions are becoming mainstream, including the main topic of this podcast: Google's Gemini. McArdle's concern is how the left-wing bias of the companies at the forefront of AI development has bled into their creations, and the resulting impact on American free speech and democracy.
McArdle's initial examples concern Gemini's factual inaccuracies in the name of affirming socially respectable positions. The first case discussed involves Gemini's artistic portrayals of the Founding Fathers, and other ethnically European historical figures, as nonwhite, which she admits is trivial. Far more notable to McArdle were Gemini's responses to text queries about gender-affirming care.
McArdle notes that there are reasons for this that don't involve deliberate bias, such as training the chatbot on social media sites like Reddit, whose moderation leans left, and the AI's inability to detect the limits of logical positions and social rules. But this, too, is an issue for McArdle. She sees it as indicative of speech suppression, where only one side of the political spectrum is allowed to be praised and the other is only allowed to be demonized. Her example is Gemini's refusal to praise right-wing figures like Brian Kemp, while readily praising more controversial left-wing figures like Ilhan Omar. The danger, for McArdle, is that AI will teach people not to think in a complex manner and will answer queries in ways that keep the questioner in their ideological bubble.
In response, Roberts asks a fantastic question: since search engines, in order to be useful, are discriminatory or biased by their very definition, what could an unbiased Google or Gemini possibly mean? The question prompts Roberts to proclaim his pessimism, as the problem is larger than AI chatbots and centers on the very ideal of an unbiased search engine. That ideal teaches people not to decipher truth from varying information; instead, they rely on the results they are given, particularly those that align with their biases, with a further detrimental trickle-down effect on democracy. In short, since search engines are biased by their nature, Roberts' solution is for users of search engines to behave more carefully and attentively.
McArdle proposes a similar solution: more people focusing less on social appeasement when answering difficult social questions, and more on finding nuance. Understanding the complexity of problems like racial inequality is the key to finding solutions.
In contrast to Roberts' pessimism, McArdle makes the case for the best-case scenario. She believes negatives are sure to come from AI, much as social media led to cancel culture, but that human decency will triumph over this challenge. Her point is a defense of liberal society. She views past attempts to fundamentally shift the social order away from Enlightenment principles as failures, and believes the new attempt to shift the window of acceptable views, seen in Google's left-wing bias, will fail as well. The spirit of human connection and conversation is strong enough to maintain productive discourse.
Although this was a fascinating conversation, I finished the podcast unconvinced by McArdle that AI bias is a meaningful issue. At multiple points, she describes a response from Gemini that displayed clear left-wing bias, then goes on to note that Google fixed the issue very quickly, even the same day. For example, she mentions that Gemini no longer says mastectomies are partially reversible.
From this, it seems that Google has a set of values it wants its AI to embody and is simply working out the kinks. Furthermore, the leap McArdle takes from this is drastic, to say the least: "We are now saying that you can't have arguments about the most contentious and central issues that society is facing." Claims of social media bias against right-wing users have been shown to be unfounded, and such bias is far from the biggest threat to free speech. There is a far better argument that social media companies have failed to adequately regulate disinformation and false claims about vaccines and the 2020 election, or to take action against harassment and right-wing extremism coming from their platforms.
Like cancel culture, this is an overblown concern. In the pursuit of preserving free speech and expression, the better place to focus is not social media companies banning people for hate speech, but state legislatures banning forms of LGBTQ+ expression, such as drag, and Project 2025's totalitarian, Christian-nationalist aims to restrict speech contrary to conservative principles. These are far more important than Gemini refusing to write a love poem for Brian Kemp. Freedom is under attack in America, but the attack predominantly comes from the far right, not Silicon Valley.
McArdle's argument also raises the question of the extent to which corporations are responsible to entities other than their shareholders. Are fossil fuel companies obligated to shift their energy production to green sources in order to slow climate change? What about corporations' responsibility to pay their workers a living wage, even if it is above the equilibrium wage? Are building developers, such as those of Grenfell Tower, responsible for installing sprinkler systems or building with safer materials, even when it is more expensive? If Silicon Valley has a social responsibility to uphold the public square and the spirit of free speech, even at a cost to its shareholders, then this principle of social responsibility should be extended to all areas of corporate activity.
Related EconTalk Episodes:
Megan McArdle on Internet Shaming and Online Mobs
Ian Leslie on Being Human in the Age of AI
Can Artificial Intelligence be Moral? With Paul Bloom
Zvi Mowshowitz on AI and the Dial of Progress
Marc Andreessen on Why AI Will Save the World
Related Content:
Megan McArdle on Catastrophes and the Pandemic, EconTalk
Megan McArdle on the Oedipus Trap, EconTalk
Megan McArdle on Belonging, Home, and National Identity, EconTalk
Akshaya Kamalnath’s Social Movements, Diversity, and Corporate Short-termism, at Econlib
Jonathan Rauch on Cancel Culture and Free Speech, The Great Antidote Podcast
Lilla Nora Kiss’ Monitoring Social Media at Law & Liberty
READER COMMENTS
Peter Gerdes
Jun 13 2024 at 9:18am
Just because it's hard to define unbiased doesn't mean it's not meaningful, but I think the problem here is the conflation of several different senses of unbiased.
When it comes to search engines, and especially AI, we don't want them to be unbiased in the 1a sense of being viewpoint neutral, but unbiased in the same way that a helpful librarian is unbiased, or that a college course can offer an unbiased investigation of various religious beliefs.
What that means isn't that no judgements of plausibility or relevance are made, but that there is no attempt to persuade by hiding information or arguments. If you ask for information on how to treat cancer, the librarian might point you first to the books on modern scientific medicine, but she won't try to hide that there are homeopaths who have other ideas, and she will help you find the best arguments and information on homeopathy if that's what you ask for.
Maybe unbiased is the wrong word, and the better description is something more like helpful and non-judgemental regardless of what position you take. Basically, people want the same thing out of AI that they want out of a human assistant: the sense that they are being empowered and helped to make judgements rather than pressured to assent to certain claims.
Peter Gerdes
Jun 13 2024 at 9:33am
One really underappreciated aspect of AI is that it’s going to enable people for the first time to separate a conclusion from the context.
One of the fundamental problems in political discussion, and in epistemics generally, is that people can't judge how they would have answered the same question in a different context. If you didn't know that a given position on the meaning of the word "woman" supported such-and-such a conclusion, or didn't know that it was Trump rather than Biden being prosecuted for state crimes on a novel but plausible legal theory, with significant evidence it was pursued because of voter demands, would you have reached the same conclusion?
LLMs, and likely other forms of AI, will for the first time give us the ability to ask these questions, or at least versions of them. We can roll back to prior versions of a model (say, one trained to imitate Democratic commenters on 2016 data) and see what it says when we feed it information with the partisan affiliations swapped. And we can probably mask out certain connections in other ways.
In the optimistic scenario this works to reduce the amplification of tribalism we see online.