The conversation around AI often swings between utopian dreams and dystopian nightmares. As someone who has worked with executives and founders across various industries, I’ve witnessed this pendulum swing firsthand. However, after delving deeper into this topic, I’ve arrived at a simple yet powerful conclusion: AI is merely a tool. Like any tool, its impact depends entirely on who wields it and how.
This insight became crystal clear during a recent discussion with Bruce Schneier, a leading thinker on technology and democracy. His upcoming book “Rewiring Democracy” explores how AI will transform our politics, government, and citizenship. What struck me most was his framing of the issue: AI is a power-enhancing tool. To the extent that it empowers citizens, it’s beneficial for democracy. To the extent that it makes the already powerful even more powerful, it’s detrimental to democracy.
This power dynamic is the key to understanding AI’s role in our democratic future. We’re already seeing examples on both sides of this equation.
How AI Can Strengthen Democracy
When AI tools help everyday people run for local office—where there's typically no money and no staff—that's democratizing power. When these technologies increase access to justice by making legal help available to people who would otherwise rely solely on overstretched public defenders, that's leveling the playing field. When AI assists unions or citizen groups in organizing and engaging with local governments, that empowers individuals.
Around the world, governments are experimenting with AI in ways that benefit democracy:
- In Singapore, AI systems are making government services more accessible
- France is using AI to improve citizen participation in decision-making
- Chile and Switzerland are implementing AI tools that increase transparency
These examples show that AI can indeed strengthen democratic processes when implemented thoughtfully.
The Dark Side: How AI Can Undermine Democracy
But there’s another side to this story. The same technologies can be used to find loopholes in laws or tax codes—something we could all theoretically do, but which disproportionately benefits the wealthy. AI can amplify misinformation and polarization, particularly when combined with social media business models that prioritize engagement over user satisfaction.
As Bruce pointed out, these platforms have discovered that getting users enraged gets them engaged. This has led to what he calls the "politics of sport"—my team good, your team bad—which undermines thoughtful democratic discourse.
I’ve seen this firsthand. Recently, I was discussing with some university students how they’re bombarded with negative messaging: that certain groups are causing their problems, that there’s no hope for their future, and that the job market won’t be there for them. Much of this content is explicitly AI-generated to create division and despair.
Rethinking Our Systems
What fascinates me most is the bigger question this raises about our fundamental systems. Both capitalism and democracy are information systems that leverage conflict to solve problems. This made sense in the mid-1700s when these systems were invented, but in our 21st-century information world, it’s worth asking if they’re still fit for purpose in their original form.
The cost of coordination has dropped—consider massive international companies that operate internally as top-down planned economies. Meanwhile, the cost of conflict has risen—consider the astronomical price tag of U.S. elections. The ratio of conflict to cooperation that was optimal in 1750 might not be the optimal ratio today.
This reminds me of Marshall Goldsmith’s book title: “What Got You Here Won’t Get You There.” When was the last time we truly rethought our society and the world around us? Perhaps it’s time.
The Choice Is Ours
The most important takeaway from my exploration of this topic is that we have agency. AI doesn’t determine our future—we do. We can decide whether surveillance and manipulation are valid business models. We can choose whether AI serves the many or the few.
As we stand at this crossroads, I believe we need to approach AI governance with both optimism and vigilance. The technology itself isn’t inherently good or bad—it’s how we design, deploy, and regulate it that matters.
The future of democracy in the age of AI isn’t predetermined. It’s being written right now, by all of us. And that’s both a tremendous responsibility and an extraordinary opportunity.