
Google’s AI fails ‘will slowly erode our trust’ in the tech giant



Google (GOOG, GOOGL) had a busy Memorial Day weekend as it worked to contain the fallout from a number of wild suggestions made by the new AI Overview feature in its Search platform. If you were sunning yourself on a beach or eating hotdogs and drinking beer instead of scrolling through Instagram (META) and X, let me catch you up.

AI Overview is meant to provide generative AI-based answers to search queries. Most of the time, it does exactly that. But over the past week it also told users they could use nontoxic glue to keep cheese from sliding off their pizza, that they can eat one rock per day, and that Barack Obama was the first Muslim president.

In response, Google removed the errant responses and said it is using the mistakes to improve its systems. But the episodes could still do serious damage to Google’s reputation, particularly coming on the heels of the company’s disastrous Gemini image generator debut, which produced historically inaccurate images.

“Google is supposed to be the premier source of information on the internet,” said Chinmay Hegde, associate professor of computer science and engineering at NYU’s Tandon School of Engineering. “And if that product is compromised, it will gradually erode our trust in Google.”

Google’s AI flubs

Google’s AI Overview issues aren’t the first time the company has run into trouble since launching its generative AI push. Its Bard chatbot, which Google relaunched as Gemini in February 2024, famously flubbed an answer in a promotional video in February 2023, sending Google shares tumbling.

Alphabet CEO Sundar Pichai speaks during a Google I/O event in Mountain View, Calif., on May 14, 2024. Social media users have been sharing bloopers, some amusing and some unsettling, ever since Google redesigned its search page to show AI-generated summaries above search results. (AP Photo/Jeff Chiu, File) (ASSOCIATED PRESS)

Then there was the Gemini image generator, which produced images of people of various ethnicities in historically inaccurate contexts, including as German soldiers in 1943.

Google had tried to address AI’s historical bias problem by having the software include a broader mix of ethnicities when generating images of people. But the company overcorrected, and the software ended up refusing some requests for images of people from particular backgrounds. Google apologized for the episode and temporarily took the feature offline.

Google, meanwhile, has said the AI Overview problems surfaced because users were posing unusual queries. In the rock-eating case, a Google spokesperson explained, a website had syndicated geology articles from other sources onto its platform, including an article that originally appeared in The Onion, and AI Overviews linked to that information.

Those explanations may be legitimate, but it’s becoming increasingly frustrating that Google keeps releasing products with flaws it then has to explain away.

“You have to stand by the product that you roll out at some point,” said Derek Leben, associate teaching professor of business ethics at Carnegie Mellon University’s Tepper School of Business.

“In terms of just trust in the products themselves, you can’t just say… ‘We are going to incorporate AI into all of our well-established products, and also it’s in constant beta mode, and any kinds of mistakes or problems that it makes we can’t be held responsible for and even blamed for,’” he added.

Google is the internet’s go-to destination for fact-finding. Whenever I argue with a friend about something trivial, one of us always exclaims, “Okay, Google it!” You’ve likely done the same, perhaps not to outsmart a pal with some arcane Simpsons trivia, but still. The bottom line is that Google’s brand rests on its credibility, and its AI gaffes are gradually eroding it.

A race to beat the competition

So why the mistakes? Hegde says that in a bid to outmaneuver rivals like Microsoft (MSFT) and OpenAI, the company is simply moving too fast and launching products before they are ready.

“All these surface-level issues are caused by the fact that research is moving so quickly that the gap between research and product appears to be closing,” he said.

Google has been working to dispel the perception that it has lagged behind ever since Microsoft and OpenAI teamed up in February 2023 to debut a generative AI-powered version of Microsoft’s Bing search engine and chatbot. And this month, OpenAI announced its powerful GPT-4o model a day before Google’s I/O developer conference kicked off, upstaging the search giant.

But if Google’s pursuit of market dominance means shipping products that produce errors or harmful information, it risks convincing people that its generative AI efforts are unreliable and, ultimately, not worth using.
