If you ask Gemini, Google's AI chatbot, about elections or anything voting-related, you'll now be told to Google it instead. It is the latest attempt to stop AI from being used to manipulate voters.
Google is restricting its AI chatbot from answering election-related questions in countries where voting is taking place this year, as the company tries to avoid spreading disinformation.
Now, when you ask Gemini an election-related question, it responds with: “I’m still learning how to answer this. In the meantime, try Google Search.”
The response appears for questions about voting, politicians and political parties.
A Google spokesperson told Sky News the restrictions had been put in place “in preparation for the many elections happening around the world in 2024 and out of an abundance of caution”.
In February, Google stopped Gemini from generating images of people after it produced a series of historically inaccurate depictions.
The model had been trained to reflect a diverse range of people but had become “way more cautious than we intended”, according to Google’s senior vice president Prabhakar Raghavan.
This year, there are elections in more than 50 countries. As artificial intelligence becomes more powerful, concerns are growing that it could be used to manipulate voters.
Just two days before Slovakia’s election in September last year, a faked audio recording was posted to Facebook.
It sounded like one of the candidates and a journalist discussing how to rig the election. The audio was quickly flagged as an AI-generated fake, but that didn't stop it from spreading.
The candidate narrowly lost the election.
Now, tech firms and governments are being increasingly cautious in the run-up to voting.
Meta, which owns Facebook, Instagram and WhatsApp, is forming a team that will tackle disinformation and abuse of artificial intelligence in the run-up to the European Parliament elections in June.