On Election Day, Grok, the AI chatbot built into X (formerly Twitter), gave inaccurate answers when asked about U.S. presidential election results. Although vote counting in key battleground states was still underway, Grok at times incorrectly responded that Donald Trump had won states such as Ohio and North Carolina.

Grok did direct users to authoritative sources such as Vote.gov for up-to-date results, but unlike OpenAI’s ChatGPT and Google’s Gemini, it did not outright refuse to answer election-related questions, allowing misleading information to spread.

Grok’s misinformation appears to stem from tweets about earlier election cycles and from sources with misleading wording. As a generative AI, Grok struggles with scenarios it has not encountered in its training data, such as races still too close to call, and it does not recognize that past election results may have no bearing on the current one.

TechCrunch found Grok’s responses inconsistent, with answers varying depending on how questions were worded. Other major chatbots, including OpenAI’s ChatGPT Search, Meta’s Meta AI, and Perplexity’s AI-powered search engine, handled questions about election results more cautiously and accurately while voting was still active.

Grok has faced accusations of spreading election misinformation before. In August, five secretaries of state sent a letter claiming that Grok had wrongly suggested Democratic presidential candidate Kamala Harris was ineligible to appear on some U.S. presidential ballots. The false claim reached millions of users on X and other platforms before it was corrected.

Accurate, reliable information matters most during critical events like elections, and AI systems such as Grok have yet to prove dependable in those moments. Users should verify election information against trusted official sources rather than relying on chatbot answers.