Chatbots providing 'unintentional' misinformation ahead of EU elections

Chatbots are giving "unintentional" misinformation when asked questions about the upcoming EU elections in June.
By Anna Desmarais

Google has introduced more restrictions after a new study showed Europe's most used AI chatbots were providing inaccurate election-related answers.


Four of Europe's most popular AI chatbots aren’t providing users with accurate information about the upcoming elections, according to a new study.

Democracy Reporting International, a non-profit based in Berlin, put various questions about the European elections to Google's Gemini, OpenAI’s ChatGPT 3.5 and 4.0, and Microsoft’s Copilot to see what answers they would get.

Between March 11 and 14, researchers asked the chatbots 400 election-related questions in 10 different languages about the election and voting process in 10 EU countries. The questions were written in simple language fit for the average user of these AI chatbots.

The conclusion: none of the four chatbots were able to "provide reliably trustworthy answers" to typical election-related questions despite being well-tuned to avoid partisan responses.

"We were not that surprised," Michael-Meyer Resende, the executive director of Democracy Reporting International, told Euronews Next about the results of their survey.


"When you ask [AI chatbots] something for which they didnt have a lot of material and for which you don’t find a lot of information for on the Internet, they just invent something".

The study is the latest to find AI chatbots are spreading misinformation in what many call the world’s biggest year for elections.

Last December, AlgorithmWatch, another Berlin-based non-profit, published a similar study showing that Bing Chat, the AI-driven chatbot on Microsoft’s search engine, answered one in three election questions incorrectly in Germany and Switzerland.

In light of the study’s findings, Google - whose Gemini was found to provide the most misleading or false information and the highest number of refusals to answer queries - confirmed to Euronews Next that it has now placed further restrictions on its large language model (LLM).

Chatbots 'helpful, rather than accurate'

There were distinct areas where the chatbots performed poorly, such as questions on voter registration and out-of-country voting, Resende said.

For example, the study found that the chatbots were generally supportive of voting but stressed that it is a personal choice, despite the fact that voting is compulsory in Greece, Belgium, Luxembourg, and Bulgaria.

The study also found that chatbots would often "hallucinate," or manufacture information, if they did not know the answer, including several wrong election dates.

For example, three of the chatbots made the same mistake of telling users that they could vote by mail in Portugal, but in reality, it’s not an option for the Portuguese electorate.

In Lithuania, Gemini claimed that the European Parliament would be sending an election observation mission - which is untrue (the only 2024 EU election mission scheduled so far is for Bangladesh).

Resende interprets these hallucination results as the "tendency of chatbots wanting to be 'helpful' rather than accurate".

Even in the strongest responses from the chatbots, the report found that the answers often included broken or irrelevant links, which the study says "weakens" their quality.

Things became more complicated when researchers looked for replies in various European languages.

The researchers asked the same question in 10 of the EU's official languages and, in some of them, the platforms would refuse to answer (as Gemini did in Spanish) or would confuse information about local elections with the EU-wide process.


This was the case when questions were asked in Turkish, the language that elicited the highest number of inaccurate and false answers.

Chatbots would also give different replies when asked the same question several times in the same language, something the researchers identified as "randomness".

Resende acknowledges that this makes Democracy Reporting International’s study difficult to replicate.

Performance varies across the chatbots

The report found that Google’s Gemini had the worst performance for providing accurate and actionable information, as well as the highest number of refusals to respond.

Yet it still answered some questions on elections, despite Google restricting Gemini in March in a bid to avoid “potential missteps” in how the technology is used.


A Google spokesperson told Euronews Next that they’ve expanded these restrictions to all of the questions surveyed in this study, and all 10 languages used, because it is the “responsible approach” in dealing with the limitations of large language models.

Google encouraged its users to use Google Search instead of Gemini to find accurate information on the upcoming elections.

Resende of Democracy Reporting International said that is the approach the other platforms should take.

"We think it's better for them to refuse to answer than to give false answers," Resende said.

The non-profit will re-run its Gemini testing over the next few weeks to see whether Google lives up to its commitments, Resende said.


In a statement to Euronews Next, Microsoft outlined its actions ahead of the European elections, including a set of election protection commitments that "help safeguard voters, candidates, campaigns and election authorities".

These commitments include providing voters with “authoritative election information” on Bing.

"While no one person, institution or company can guarantee elections are free and fair, we can make meaningful progress in protecting everyone’s right to free and fair elections," Microsoft’s statement reads.

OpenAI did not respond to Euronews Next’s request for comment.

The company explained in a statement on its website that its approach to elections-related content is to "continue platform safety work by elevating accurate voting information" and to improve its transparency.


Risk assessments should be published

In February, the European Union's Digital Services Act (DSA) came into full effect, requiring very large online platforms (VLOPs) like Google, Microsoft, and OpenAI to conduct risk assessments for the dissemination of fake news and misinformation on their platforms.

These risk assessments would include any "intentional manipulations” of their services and their potential impacts on “electoral processes".

The DSA was touted at the time by Margrethe Vestager, executive vice-president of the European Commission for a Europe Fit for the Digital Age, as "a big milestone" and a major part of the European Union’s strategy for "shap[ing] a safer and more transparent online world".

However, Democracy Reporting International’s report suggests that the requirements of the DSA, including these risk assessments, testing, and training to mitigate election-related risks, are not being met.

As a result, Resende said either the Commission or the companies behind the chatbots should publish these assessments.


"I’m afraid they’re reluctant to share [the risk assessments] with the public either because they didn’t do it or because they are not confident in the level of detail that they’ve invested in this," Resende said.

While the Commission did not respond directly to this study, a spokesperson said in an email that it “remains vigilant on the negative effects of online disinformation, including AI-powered disinformation".

A month after the DSA took full effect, the Commission sent a request for information to Bing and Google Search to gather more detail on how they are mitigating "risks linked to generative AI".

The Commission confirmed to Euronews Next that it is reviewing the information it has received under this inquiry, but did not elaborate further.

The Commission also signed a Code of Practice on Disinformation in March with platforms like Google and Microsoft, in which they agreed to promote "high quality and authoritative information to voters".
