Google’s Artificial Intelligence Gives Strange Answers: Can the Danger Be Averted?
Google’s AI-powered summary tool has recently started giving users strange answers. Here are the details!
Google’s summary tool, which appears alongside traditional search results, uses artificial intelligence to gather information from the web in order to answer questions and generate summaries. With it, Google aims to make it faster and easier for users to find the information they are looking for.
While Google’s AI-powered summary tool for search results is an exciting addition, in recent days it has unexpectedly begun producing absurd and even dangerous answers.
Why Google Defended the AI Summary Tool
The AI-powered summary tool told some users to eat rocks and suggested adding glue to pizza. Beyond that, it also repeated the false conspiracy theory that Barack Obama is a Muslim.
Some of these reported summaries appear to have been lifted directly from web results: the suggestion that glue makes pizza toppings stick better is said to have originated from a joke posted on Reddit.
Google, for its part, said these absurd examples are rare and that the feature generally works properly: “The examples we see are generally very rare and are not representative of most people’s experiences.”
Google added that it has run extensive evaluations and tests, and has put safeguards in place to keep the AI-powered summary tool from presenting absurd or dangerous results.
Some experts argue that such problems are inherent to these systems: large language models are adept at producing fluent, complex language and wordplay, but they have no built-in way to verify that the text they generate is actually true.