Citizens should avoid using AI chatbots to decide whom they should vote for as the tools are “unreliable and clearly biased”, the Dutch data protection authority (AP) has said.
The AP said a growing number of voters are turning to chatbots to determine whom to vote for in the national elections on 29 October, though it did not cite specific figures.
The body compared four well-known chatbots with the online voting aids Kieskompas and StemWijzer, and found that, unlike the standard voting research tools, the chatbots often recommended the same two parties regardless of the user’s queries.
Voting advice
In more than 56 percent of cases, the chatbots recommended the far-right Freedom Party (PVV) or the Labour-Green Left coalition, GroenLinks-PvdA, with one chatbot making such recommendations more than 80 percent of the time.
Other parties were far less likely to be recommended, and some almost never, “even when the user’s input exactly matches the positions of one of these parties”, the AP said.
Chatbots may seem like a clever tool, “but as a voting aid, they consistently fail”, said AP vice-chair Monique Verdier, adding that the means by which they delivered answers was “unclear and difficult to verify”.
Verdier also called on chatbot makers to prohibit the use of their tools for dispensing voting advice.
Chatbots’ outputs are based on non-verifiable training data and information from the internet that may be erroneous or out of date, and as a result they can present a distorted view of the political landscape, the AP said.
Systemic shortcomings
The two standard online voting aids Kieskompas and StemWijzer, by contrast, do not give advice, but rather explain which parties best match the user’s stated political preferences, based on a verifiable interpretation of views and election programmes, according to the agency.
Chatbots’ distortions are not due to deliberate bias, but rather to shortcomings in the way they work, the AP said.