What can AI do for democracy?

Collective Intelligence has the ability to turn AI's inherent risks into opportunities to strengthen democracy.

AI is all the rage. Quite literally: it’s said it could destroy jobs, upend democracies, and take over human affairs. But is the outlook entirely bleak? Could algorithms also be harnessed to make governments more democratic and societies smarter? What card does democracy have to play when it comes to using AI ethically and effectively? How can human collective intelligence provide the necessary moral guidance to AI?

AI is, of course, a threat to democracy and a challenge for governments

Type the words “Artificial Intelligence” into your internet search engine for the latest news on the topic, and these days you’ll get a load of articles about huge investment deals, a fair amount about the harm AI can do, and the odd article about government regulation of AI (e.g. the EU AI Act). The overall messages are loud and clear: a few people will make sh*** loads of money. AI will wreck the lives of most people. It’s high time democratic actors took the lead in shaping AI.

We are indeed well aware of the concerns regarding AI and public governance. The rest of this article should not suggest in any way that the risks are not very real, immediate, and scary. It’s already a race: innovation is outpacing our ability to handle misinformation and other negative impacts of AI.

AI presents a wide range of threats to democracy

In Smarter Together’s February 24, 2024 webinar, the following threats were identified. It’s only a cursory list, but it is meant to acknowledge that we are all aware of the real concerns about AI’s impact on public governance.

  • Privacy and surveillance: AI can enhance governments’ surveillance capabilities, raising concerns about privacy infringement and the potential for misuse of personal data.
  • Attacks from autocratic tendencies and regimes: We already see AI being used to interfere with elections, through deep fake videos for instance. We also see that democratic governments may struggle to keep pace with the rapid advancement of AI technologies, leading to regulatory gaps that could result in inadequate oversight and accountability, and slower deployment leading to diminished capabilities. In the meantime, autocratic regimes may feel less hindered by the need to regulate the use of AI and use it against their population or other nations to exert greater power, putting democracies under further pressure. Also, the development and deployment of AI technologies raise geopolitical concerns regarding competition for dominance in AI capabilities. If democracies don’t cooperate to address shared challenges, they could fall behind.
  • Bias and discrimination: AI algorithms may inherit racial, gender or other types of biases from the data they are trained on. When used by public authorities, this can lead to discriminatory outcomes in areas such as law enforcement, hiring practices, and social services.
  • Job displacement and social hardships: Automation driven by AI technologies threatens to provoke significant job displacement in various sectors, requiring government intervention to address unemployment and retraining needs. This in turn could put further pressure on governments and challenge their legitimacy. And even when AI does not replace jobs, we’re starting to learn that it may make people unhappy at work, as recently emerged in a large survey conducted by the Institute for the Future of Work.
  • Security risks: AI systems are vulnerable to cyberattacks and manipulation, posing great security risks to critical infrastructure and government operations.
  • Unethical use of AI in warfare: Concerns exist regarding the development and deployment of AI-powered weapons systems, raising questions about their ethical use and the potential for autonomous decision-making in warfare.
  • Transparency and accountability: The use of opaque AI algorithms in public decision-making processes by governments raises concerns about accountability, fairness, and the delegation of authority to non-human entities. This could make it difficult to understand and challenge government decisions, further undermining trust in government. Governments need to ensure that AI development adheres to ethical standards and principles, such as fairness, transparency, accountability, and inclusivity, to mitigate potential harm to society and reassure distrustful citizens.

Yet, all is not bleak, and we can decide to organize ourselves to seize the right opportunities.

The Artificial Intelligence genie has come out of the bottle. Will it be good for democracy?

It’s as if we had found a new lamp, rubbed it, and a genie had come out that’s incredibly hard to control. Now we have no choice but to learn to master it. Where do we stand?

Governments are already using Artificial Intelligence, putting it to ‘good’ and ‘bad’ use

Already over a decade ago, IBM’s AI assistant Watson beat the best human “Jeopardy!” player ever. The day after, IBM announced that it was “exploring ways to apply Watson skills to the rich, varied language of health care, finance, law and academia.” In the following years, IBM invested significant sums in promoting Watson as a helpful digital assistant for hospitals, farms, offices, and factories, aiming for a benevolent impact. Watson even made the news headlines during the 2016 US presidential election, when IBM suggested that Watson’s capabilities could be leveraged to analyse vast amounts of data, including social media sentiment and voter opinions, to help political campaigns understand and address voter concerns more effectively. Watson’s cognitive computing abilities were also highlighted as potentially valuable for informing campaign strategies and decision-making processes.

While the announcement may have generated quite a buzz, it was clearly premature in light of the tool’s abilities at the time. Largely away from the media’s attention, however, authorities around the world have started making use of AI in all realms of public administration over the past decade. The range of applications is already huge. For example:

  • Security: In Switzerland, a system predicts the areas most likely to be burglarised, thanks to software developed by ETH Zurich, the Swiss federal technical institute.
  • Public finances: France’s ministry for public finance has, since 2021, been using AI software to detect undeclared and illegal buildings and swimming pools, helping it recover significant sums of money.
  • Environment: Today, on all continents, AI helps optimise air traffic to reduce emissions, detect earthquakes, analyse the health of buildings, anticipate pollution peaks, and more.
  • Courts: In China, an AI has assisted the Shanghai Pudong people’s court since 2020. Based on a verbal description of a case, the machine can propose a sentence that is deemed accurate over 97 percent of the time for 8 types of crimes.

This is just a start, and many more applications are emerging. Even from this very small list, we can foresee that the range of applications in the field of public affairs is huge and bound to increase. Some applications will clearly be more benevolent than others, and some more prone than others to the risks of bias, lack of transparency and suspicion that we mentioned before.

If AI has the potential to make government much better informed, that data can of course be put to good use. AI can enable what specialists refer to as “living evidence”: the use of continuous evidence surveillance and rapid response pathways to incorporate relevant new evidence into systematic reviews. This can help public authorities predict, analyze and adjust public policies much more effectively. If these tools are governed democratically, that enhancement can be harnessed for democratic goals.

This post’s proposition is not to dwell on those threats, but to consider whether democracy has a card to play in tapping into the potential that AI has to make governance smarter, in conjunction with a better understanding and mastery of Collective Intelligence, as well as of Emotional Intelligence. The genie is out of the bottle, so how can we tame it for our benefit?

Definitions:

  • Intelligence: The ability to acquire and apply knowledge, solve problems, adapt to new situations, reason abstractly, and learn from experiences, resulting in effective decision-making and problem-solving capabilities.
  • Artificial Intelligence (AI): Refers to the simulation of human intelligence processes by machines, typically computer systems, enabling them to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making.
  • Collective intelligence (CI): The ability of groups to outperform individuals in learning, decision-making, and problem solving. It is an evolutionary adaptation common to many species, from ants to spider monkeys, which all use CI to survive. We humans are unique, though, in that our collective intelligence depends largely on culture: the person-to-person transfer of human-generated knowledge, rules and behaviours.
  • Emotional Intelligence (EI): The capacity to be aware of, take into consideration, and express one’s emotions, and to handle interpersonal relationships judiciously and empathetically.
  • Collective Superintelligence: The ability to use AI to connect human groups into real-time super intelligent systems.
  • Conversational Swarm Intelligence (CSI): Increased decision-making accuracy of networked human groups through natural conversational deliberations.

Artificial Intelligence, Collective Intelligence and Emotional Intelligence can work together

According to Dr. Stefaan Verhulst, “CI, AI, and EI have different strengths and weaknesses as it relates to processing and translating into decisions.” It is therefore helpful to keep in mind their relative complementarities, with AI helping support CI in particular on three levels.

One of the weaknesses of Collective Intelligence is that it is hard to scale up into platforms that are both meaningful and seen as legitimate. Artificial Intelligence can help deal with some of those challenges. It can accelerate CI and make it more agile, for instance by helping make sense of large public consultations. A recent experiment at Brigham Young University showed how AI can enable better conversations by suggesting reformulations that improved the perceived quality of the debate.
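
To make the “sense-making at scale” point concrete, here is a minimal, hypothetical sketch of the kind of processing involved: grouping free-text consultation responses into themes by lexical similarity. Real platforms rely on far more capable language models and embeddings; the function names, threshold and sample responses below are invented purely for illustration.

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.5) -> bool:
    """Rough lexical similarity between two responses (ratio in 0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def cluster_responses(responses: list[str]) -> list[list[str]]:
    """Greedy single-pass clustering: each response joins the first
    cluster whose representative (its first member) it resembles."""
    clusters: list[list[str]] = []
    for text in responses:
        for cluster in clusters:
            if similar(text, cluster[0]):
                cluster.append(text)
                break
        else:
            clusters.append([text])
    return clusters

# Hypothetical consultation inputs
responses = [
    "We need more bicycle lanes in the city centre",
    "Please build more bicycle lanes in the centre",
    "Public transport should be cheaper for students",
]
themes = cluster_responses(responses)
for i, theme in enumerate(themes, 1):
    print(f"Theme {i}: {len(theme)} response(s)")
```

A moderator could then read one representative entry per theme instead of thousands of raw submissions; swapping the lexical ratio for sentence embeddings would make the grouping semantic rather than superficial.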

AI can also help make Collective Intelligence processes fairer, according to David Mas, Chief AI Officer at Make.org, in the sense that it can give a wider variety of participants in collaborative processes easier and better access to information. As David insisted during our webinar: “It can be difficult for average citizens to read everything and follow information written in a technical language. Make.org with the French Economic, Environmental and Social Committee (CESE) developed a system capable of providing digestible information on a large scale to citizens. Make.org developed with the CESE a platform to access all relevant societal debates in France,” thus lowering the entry barrier for participants.

AI assistants can also help participants write their contributions, further lowering the barrier to entry to the public debate by creating a safe space where there are no stupid questions and where AI helps citizens generate useful contributions. Make.org used this tool for France’s recent Citizen Panel on End of Life.

Finally, one can use AI to document what others are thinking and to enhance people’s capacity for social mirroring. Social mirroring, the ability to take into consideration a peer’s interpretation of a specific experience, is crucial to self-reflection. The GovLab created social-mirroring exercises in citizens’ assemblies to understand people’s different perspectives on the use of data during COVID, providing insights on whether AI helped make individuals more empathetic. By confronting people with alternative points of view, AI may have the potential to help fight polarization and make participants in citizen deliberations more empathetic to others’ viewpoints.

That empathy can also have a downside. There is a risk of anthropomorphic engagement with the technology: users may grow too close to the device, despite the fact that it is just a tool, which can lead to less social behavior. This has been shown, for instance, with chatbots used for mental health support, whose use, studies stress, needs to be backed by trained therapists.

AI and government-citizen interactions

AI can enable large-scale citizen deliberation, helping process citizen inputs faster, correcting human mistakes, and addressing human distrust. Vice versa, AI can benefit from properly tapping into collective intelligence, which can help regulate it better, integrate ethics into its management, and increase the transparency of algorithms, in combination with crowdsourcing.

Thinkscape, for instance, is a company working on this promise. Using Swarm AI, it enables groups of any size (tens to thousands) to hold productive real-time conversations that optimise group insights and amplify collective intelligence.

AI can also recognize patterns, nuances, and anomalies indicative of hate speech, disinformation, and extremist propaganda, and counter those on social media if put to good use, providing speed and scale in this difficult task.

FARI, the institute for AI and the common good, has conducted research showing that, while people expect AI to lead others astray from solutions that serve wider interests, the delegation of certain tasks to artificial agents in fact fosters prosocial behaviours in collective risk dilemmas.

Other recent examples of AI usage include Stanford’s Deliberative Platform, which uses AI to moderate large-scale online deliberations, and Common Ground, an application for hosting iterative, open-ended and conversational democratic engagements with large numbers of participants, built on OpenAI’s technology.

A key advantage of using Artificial Intelligence in these contexts is the ability to fight the astroturfing that blights many public consultations. Interest groups, in particular very strident ones capable of mobilizing their grassroots or with the resources to create fake accounts, can pollute open public consultations by flooding them with entries. AI can help identify disinformation and such capture, making public consultations more equitable and trustworthy.
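
As a simple illustration of the idea, not any specific vendor’s method, the sketch below flags near-duplicate submissions, one common signature of templated astroturfing campaigns. The entries and threshold are invented for the example; production systems would combine text similarity with account, timing and network signals.

```python
from difflib import SequenceMatcher
from itertools import combinations

def flag_near_duplicates(submissions: list[str], threshold: float = 0.9) -> set[int]:
    """Return the indices of submissions that are near-copies of another
    submission -- a typical trace of copy-pasted campaign templates."""
    flagged: set[int] = set()
    for (i, a), (j, b) in combinations(enumerate(submissions), 2):
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
            flagged.update((i, j))
    return flagged

# Hypothetical consultation entries: two templated copies, one organic response
entries = [
    "Reject the zoning reform, it harms our community!",
    "Reject the zoning reform, it harms our community!!",
    "I support the reform but suggest a longer transition period.",
]
print(sorted(flag_near_duplicates(entries)))  # → [0, 1]
```

Flagged entries would not be deleted automatically but handed to human moderators, keeping the final judgment accountable to people rather than to the algorithm.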

Reading suggestions

Three chapters in The Routledge Handbook of Collective Intelligence for Democracy and Governance document how, already in its early stages, AI is starting to be put to good use in collaborative processes. We’re witnessing the first attempts to integrate machines into deliberative processes, the crowdsourcing of ideas, and public information. This potential will only grow with time. Three benefits are worth highlighting:

1. In Prof. James Fishkin and Dr. Alice Siu’s chapter on AI-assisted moderation on deliberative platforms, we learn that an AI-assisted platform can enable quality online deliberations.

2. In Dr. Carina Antonia Hallin and Naima Lipka’s review of Denmark’s Slagelse municipality, we see how Natural Language Processing is being used to manage large amounts of citizen input.

3. In Cris Ferri’s case study on how the Parliament of Brazil implemented an AI chatbot, we discover how an automated dialogue system can help handle citizen questions.

AI can make authoritarian regimes more stupid and democratic regimes smarter

Many tools that AI can provide will be used by unscrupulous governments for the purposes of increased surveillance, social control, censorship, policing, propaganda, disinformation, and controlling the development of AI itself. One can expect, and indeed already see, that authoritarian regimes will use AI unencumbered by ethical standards.

AI is fundamentally just a tool. Like any tool, it can be used ethically or unethically. The ethics pertains not to the tool itself but to the hand that wields it. As such, AI is an amplifier of human competence and ethics: it can amplify them one way or the other, distort them or be put to good use. You can bang your finger with a hammer, you can use a hammer to smash shop windows, or you can simply fix things.

The metaphor only goes so far, of course, as the scale of AI’s impact is far greater. AI’s ability to amplify the consequences of human decisions is therefore far greater too.

As an extension of our intelligence and ethics, both collective and individual, AI can be just as much a friend of democracy’s propensity to wield greater collective wisdom as an amplifier of authoritarian regimes’ tendency to make mistakes.

As The Routledge Handbook of Collective Intelligence for Democracy and Governance stresses, we can group together six broad types of drivers of Collective Intelligence. The sixth is the ability to use our environment and tools to extend our intelligence beyond our natural means. Artificial Intelligence, as a tool, belongs to that category. Properly used, it can lead to greater intelligence. Improperly used, without proper checks, it can amplify groupthink.

When used to control people’s thoughts, censor information, or discredit opponents through fake videos, AI can quell people’s willingness to dissent, driving greater groupthink. AI may also significantly reduce people’s sense of psychological safety. Belarusian artist Vladislav Bokhan has become famous for tricking Russian teachers into showing obedience to the point of absurdity: he routinely tricks schools in Russia into organizing whimsical patriotic events, illustrating submission to dictatorship.

Autocracies are fertile ground for not questioning the AI used by public authorities. Less dissent, less psychological safety, more groupthink: all in all, this will most likely lead authoritarian regimes to make bad decisions with greater consequences, faster.

Put differently, AI will probabilistically increase authoritarian regimes’ likelihood of being stupid. Democracy, on the contrary, means, requires and allows tapping into the key drivers of greater CI (diversity, information, deliberation…) and EI (psychological safety, dissent…), reinforcing democracy’s propensity, over time, to make better decisions, also with AI.

Our central proposition is thus the following: there must be a way of aligning the benefits and strengths of democracy, AI, CI and EI that allows democracy to make fewer mistakes and be generally smarter.

Now how do we harness the powers of the AI genie for democracy?

By being smarter together.
Governments and public authorities can better integrate AI and CI + EI into their governance processes to produce smarter public decisions and resolve public problems more effectively. This means at least taking the following six steps:

Embrace a proactive outlook

First, we need to recognise this potential and not be automatically defeatist about the future. One thing we should certainly give up on is the notion that AI might disappear. AI is already part of public administration in different ways, and its role will keep growing. Doomsday scenarios, however, should not distract us from understanding where democracy’s strengths lie. An alignment of Artificial Intelligence, Collective Intelligence and democracy is possible only if we understand that this scenario is possible and cultivate this potential.

Promote stronger governance frameworks and address power asymmetries

We should then, concurrently, keep monitoring developments and build clear data governance frameworks, regulation and ethical guidelines for AI use, ensuring data privacy, security, and transparency. This involves in particular addressing the power asymmetry between those who have access to the tools and the vast amounts of data required to make the most of AI, and those who don’t. “Many,” warns Verhulst, “are going to be left behind, and this beyond the North-South divide, but also within countries, for social exclusion reasons.” By tackling those asymmetries, we can ensure that AI helps bridge, rather than deepen, social exclusion.

Invest in the public sector’s infrastructure and knowledge

Over the long term, governments should invest in building robust AI infrastructure, including data infrastructure, computing resources, and AI talent, to leverage AI tools and technologies effectively in governance processes. All public authorities should invest in training programs for public officials to build AI + CI + EI literacy and skills.

Make government data more accessible to the public and facilitate innovation in AI applications for public good

Open data can enable greater transparency and accountability in governance processes.

Engage citizens in decision-making processes

Participatory mechanisms such as crowdsourcing, citizen consultations, and deliberative forums should be promoted. The EU has already consulted a panel of EU citizens on AI. Such consultations should become recurrent as the consequences, and the potential, of reinventing public governance through Artificial Intelligence and Collective Intelligence unfold.

Adopt and encourage an innovative and learning approach to governance

Policy makers should seek to promote greater openness and cognitive diversity, allowing us to learn constantly from the impact of AI applications on governance processes and public outcomes. This will call for great flexibility and responsiveness… and greater Collective Intelligence in terms of analysis, memory, evaluation and decision-making.
