More than half of all chatbot suggestions lean towards two dominant political blocs, a data watchdog has found
The Dutch data protection authority (AP) has warned voters not to rely on artificial intelligence chatbots for advice ahead of national elections, saying the tools provide unreliable information and could steer users towards two major parties at opposite ends of the political spectrum.
The chatbots’ advice disproportionately favored two frontrunner blocs – the right-wing Freedom Party (PVV) and the left-wing GroenLinks-PvdA alliance – which together accounted for 56% of responses, a concentration that contrasts with the highly fragmented 15-party Dutch parliament, the regulator said. Opinion polls project that the two blocs will win just over a third of the vote in the Oct. 29 election, it added.
According to the report, some parties, including the center-right CDA, “are almost never mentioned, even when user input exactly matches the positions of one of these parties.”
“Chatbots may seem like smart tools, but as voting aids they consistently fail,” said the regulator’s vice president, Monique Verdier, who described their workings as “unclear and difficult to verify.”
She said the technology risked steering voters towards a party that did not necessarily reflect their political views.
“We therefore caution against using AI chatbots for voting advice,” Verdier added.

The agency tested four major chatbots, which it did not name, and found that they sometimes advised voting for one of the two leading parties even when explicitly fed the campaign platform of a smaller party.
The snap election in the Netherlands was triggered months earlier by the collapse of the right-wing governing coalition after the withdrawal of the PVV, led by Geert Wilders. The vote is widely seen as a contest between the formation of a new all-conservative government and a more centrist or center-right coalition.
Separately, an international study coordinated by the European Broadcasting Union and the BBC found that leading AI assistants, including ChatGPT and Google’s Gemini, distorted news content in almost half of their responses. The research analyzed more than 3,000 AI-generated answers in 14 languages and concluded that 45% contained “at least one major problem” when addressing news-related queries.
OpenAI and Microsoft have previously acknowledged that so-called “hallucinations” – cases where an AI system generates incorrect or misleading information – remain an issue they are working to address.