Friday, July 19, 2024

Google criticized as AI Overview makes obvious errors, such as saying former President Obama is Muslim

Google’s “AI Overview” tool in Google Search has received criticism for returning nonsensical or erroneous results, with no opt-out option.

At the top of Google Search, AI Overview displays a brief summary of responses to search queries. For example, if a user searches for the best way to clean leather boots, the results page may include an "AI Overview" at the top with a multistep cleaning process derived from information gathered from various sources on the web.

However, social media users have published a variety of screenshots of the AI tool providing incorrect and controversial results.

Google, Microsoft, OpenAI, and other businesses are leading a generative AI arms race, with companies in practically every industry rushing to adopt AI-powered chatbots and agents to avoid falling behind the competition. The market is expected to exceed $1 trillion in revenue within a decade.

Here are some examples of inaccuracies made by AI Overview based on screenshots supplied by users.

When asked how many Muslim presidents the United States has had, AI Overview replied, “There has only been one Muslim president, Barack Hussein Obama.”

When asked, “How many rocks should I eat each day?” the tool responded, “According to UC Berkeley geologists, people should eat at least one small rock per day,” before listing the vitamins and digestion benefits.

The program can sometimes respond incorrectly to simple inquiries, such as drawing up a list of fruits that end with “um,” or stating that 1919 was 20 years ago.

When asked if Google Search violates antitrust legislation, AI Overview responded, “Yes, the US Justice Department and 11 states are suing Google for antitrust violations.”

When a user searched for “cheese not sticking to pizza,” the feature recommended adding “about 1/8 cup of nontoxic glue to the sauce.” Social media users discovered an 11-year-old Reddit comment that appeared to be the source.

Attribution can also be a problem for AI Overview, particularly when erroneous material is attributed to medical practitioners or scientists.

For example, when asked, “How long can I stare at the sun for best health,” the application responded, “According to WebMD, scientists say that staring at the sun for 5-15 minutes, or up to 30 minutes if you have darker skin, is generally safe and provides the most health benefits.”

On the same day that Google unveiled AI Overview at its annual Google I/O event, the company announced plans to integrate assistant-like planning features directly into search. It explained that users will be able to search for things like, “Create a 3-day meal plan for a group that’s easy to prepare,” and will be given a starting point with a variety of recipes from around the internet.

“The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web,” a Google spokeswoman told CNBC in a statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”

The spokesperson stated that AI Overview underwent comprehensive testing before launch and that the company is taking “swift action where appropriate under our content policies.”

The criticism follows Google’s high-profile release of Gemini’s image-generation tool in February, which was halted the same month due to similar concerns.

The application allowed users to enter prompts to generate images, but users quickly uncovered historical mistakes and questionable responses, which were extensively shared on social media.

For example, when one user asked Gemini to display a German soldier in 1943, the program produced a racially diverse group of troops donning German military uniforms from the time, according to screenshots on social media platform X.

When asked for a “historically accurate depiction of a medieval British king,” the model produced another ethnically varied group of images, including one with a female ruler, screenshots revealed. Users reported similar results when they searched for images of the United States’ founding fathers, a French king from the 18th century, a German couple from the 1800s, and more. Users stated that when asked about Google’s founders, the model displayed a picture of Asian men.

Google stated at the time that it was attempting to resolve Gemini’s image-generation shortcomings, recognizing that the tool was “missing the mark.” Shortly after, the firm declared that it would “pause the image generation of people” and “re-release an improved version soon.”

Demis Hassabis, CEO of Google DeepMind, stated in February that the company planned to relaunch its image-generation AI tool within the following “few weeks,” although it has yet to do so.

The issues with Gemini’s image-generation outputs rekindled a controversy in the AI field, with some calling Gemini too “woke,” or left-leaning, and others claiming that the company did not invest enough in the right kinds of AI ethics. Google faced criticism in 2020 and 2021 for removing the co-leaders of its AI ethics group after they published a research paper critical of certain risks of such AI models, and for then reorganizing the group’s structure.

Sundar Pichai, CEO of Google’s parent firm, Alphabet, was chastised by some employees in 2023 for the company’s failed and “rushed” implementation of Bard, which came after ChatGPT went viral.

This item has been corrected to reflect the right name of Google’s AI Overview. Additionally, an earlier version of this post included a link to a screenshot that Google later determined was doctored.


