Don’t blindly trust what AI tells you, Google boss tells BBC

However, some experts say big tech firms such as Google should not be inviting users to fact-check their tools’ output, but should focus instead on making their systems more reliable.

While AI tools were helpful “if you want to creatively write something”, Mr Pichai said people “have to learn to use these tools for what they’re good at, and not blindly trust everything they say”.

He told the BBC: “We take pride in the amount of work we put in to give us as accurate information as possible, but the current state-of-the-art AI technology is prone to some errors.”

The company displays disclaimers on its AI tools to let users know they can make mistakes.

But this has not shielded it from criticism and concerns over errors made by its own products.

Google’s rollout of AI Overviews summarising its search results was marred by criticism and mockery over some erratic, inaccurate responses.

The tendency of generative AI products, such as chatbots, to relay misleading or false information is a cause of concern among experts.

“We know these systems make up answers, and they make up answers to please us – and that’s a problem,” Gina Neff, professor of responsible AI at Queen Mary University of London, told BBC Radio 4’s Today programme.

“It’s okay if I’m asking ‘what movie should I see next’, it’s quite different if I’m asking really sensitive questions about my health, mental wellbeing, about science, about news,” she said.

She also urged Google to take more responsibility over its AI products and their accuracy, rather than passing that on to consumers.

“The company now is asking to mark their own exam paper while they’re burning down the school,” she said.
