
Grok Verify: Grok has become the go-to fact-checker, but how justified is people's growing trust in it?
Shikha Saxena | May 19, 2025 6:15 PM CST

Lately you must have noticed on X (formerly Twitter) that people tag @grok under almost any post and ask whether it is true, and they trust the answer Grok gives. During the India-Pakistan conflict, while bullets were being fired on the ground, a different war was playing out on the Internet: the battle between lies and truth. While the Press Information Bureau (PIB) and independent fact-checkers were busy debunking fake news and AI-generated misinformation, many users turned to AI chatbots such as Grok and Perplexity's Ask Perplexity. But are these AI chatbots reliable fact-checkers?

Why are AI chatbots not reliable fact-checkers?

1. Hallucination: Presenting lies as truth

Chatbots like Grok and Perplexity sometimes generate completely false information yet present it confidently as truth. These bots do not verify sources or follow any editorial standards. xAI itself has admitted this in its Terms of Service: “AI output may contain imaginative things that are not accurate.”

2. Bias and lack of transparency

According to Mahadevan, chatbots reflect the ideas present in the data they were trained on. For example, Grok recently promoted the racist and misleading ‘white genocide’ claim, which was linked to Elon Musk's personal views.

3. Scale and speed: one mistake reaches millions

Chatbots like Grok can reach millions of people instantly, so a single error can spread widely. The danger multiplies when people begin to cite a chatbot's wrong answer as evidence.

Why do people still trust them?

AI models are "non-deterministic": the same question will not receive the same answer every time. A big reason for this is the "temperature" setting, which controls how much randomness the model allows when choosing each word. When Grok happens to give the correct answer, users start treating it as completely reliable, and that is the biggest mistake.
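As a rough illustration of what temperature does (the three-word vocabulary and the scores below are invented for this sketch, not taken from any real model), the following Python snippet samples answers from the same model scores at different temperatures:

```python
# Minimal sketch of why "temperature" makes a language model non-deterministic.
# The vocabulary and logits are made up for illustration; real models do the
# same thing over tens of thousands of tokens.
import math
import random

def sample_token(logits, temperature):
    """Sample one token index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

vocab = ["true", "false", "unverified"]
logits = [2.0, 1.0, 0.5]                          # hypothetical model scores

for temperature in (0.1, 1.0, 2.0):
    answers = [vocab[sample_token(logits, temperature)] for _ in range(1000)]
    counts = {w: answers.count(w) for w in vocab}
    print(f"temperature={temperature}: {counts}")
```

At temperature 0.1 nearly every run picks the highest-scoring answer; at 2.0 the picks scatter across all three options, which is why the same question to the same model can yield different verdicts on different days.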

What should AI companies do?

Prioritize accuracy: if no authentic source is available, the bot should decline to answer.
Flag unreliable answers: low-confidence responses should carry a clear warning.
Bring transparency: it should be clear which source the information has been taken from (a rough sketch of what such a response could look like follows this list).
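As one hypothetical illustration (this is not any vendor's actual API; the class, function, and field names are invented for this sketch), a chatbot's answer object could encode all three recommendations at once:

```python
# Hypothetical sketch: decline without a source, flag low confidence,
# and always name the source. Not any real chatbot's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FactCheckAnswer:
    claim: str
    verdict: Optional[str]   # None means the bot declined to answer
    source: Optional[str]    # where the verdict comes from
    confidence: float        # 0.0-1.0, the model's own estimate
    flagged: bool            # True when the answer needs a warning label

def build_answer(claim, verdict, source, confidence, threshold=0.8):
    if source is None:
        # Prioritize accuracy: no authentic source, no answer.
        return FactCheckAnswer(claim, None, None, 0.0, flagged=True)
    # Flag unreliable answers instead of presenting them as settled fact.
    return FactCheckAnswer(claim, verdict, source, confidence,
                           flagged=confidence < threshold)

answer = build_answer("Video X shows the recent conflict",
                      verdict="false",
                      source="PIB fact check (hypothetical example)",
                      confidence=0.65)
print(answer)   # flagged=True: would be shown to the user with a warning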

Disclaimer: This content has been sourced and edited from Amar Ujala. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.

