
Brits believe the bots even though study finds they’re often talking nonsense

AI assistants can sometimes provide misleading or incorrect answers, yet almost half of British consumers who use the services put more faith in them than they perhaps should.

Consumer stalwart Which? put the tools through their paces and found that the consumer advice dispensed could be unclear, risky, or downright dangerous if followed.

It’s something the IT world is all too familiar with. AI-powered assistants have their place, but it is also important to understand their limitations and spot errant output.

Which? surveyed more than 4,000 UK adults about their use of AI and also put 40 questions around consumer issues such as health, finance, and travel to six bots – ChatGPT, Google Gemini, Gemini AI Overview, Copilot, Meta AI, and Perplexity. Things did not go well.

Meta’s AI answered correctly just over 50 percent of the time in the tests, while the most widely used AI tool, ChatGPT, came second from bottom at 64 percent. Perplexity came top at 71 percent. While different questions might yield different results, the conclusion is clear: AI tools don’t always come up with the correct answer.

The problem is that consumers trust the output. According to Which?, just over half (51 percent) of the respondents use AI to search the web. Of these, almost half (47 percent) said “they trusted the information they received to a ‘great’ or ‘reasonable’ extent.” Which? said the figure rose to 65 percent for frequent users.

Then there were the sources used by the AI services. Where references were clear, some used old forum posts, while others relied on sources such as Reddit threads. Although these can sometimes be valid sources of information, they might not be as authoritative as the confident tone of an AI chatbot indicates.

Which? found the chatbots generated wrong information all too frequently, noting: “As many as one in six (17 percent) people surveyed said they rely on AI for financial advice, yet responses to many money queries were worrying.” These included tax code checks or advice on ISA allowances that could easily leave a user in hot water.

Andrew Laughlin, tech expert at Which?, said: “Everyday use of AI is soaring, but we’ve found that when it comes to getting the answers you need, the devil is in the details.

“Our research uncovered far too many inaccuracies and misleading statements for comfort, especially when leaning on AI for important issues like financial or legal queries.”

As the use of AI assistants continues to rise, so too do the risks. The IT industry is aware of the dangers involved – a recent analysis showed that AI-assisted developers can produce three or four times the code of their unassisted peers but also generate ten times more security issues – but consumers might be forgiven for being less sceptical, particularly considering the hype surrounding the technology.

Laughlin ended with a warning: “When using AI, always make sure to define your question clearly, and check the sources the AI is drawing answers from. For particularly complex issues, always seek professional advice – particularly for medical queries – before making major financial decisions or embarking on legal action.” ®
