Google’s pause on Gemini’s ability to generate AI images of people | Explained


Google announced it would pause Gemini’s ability to generate images of people after the generative AI tool was found to be generating inaccurate historical images.
| Photo Credit: AP

The story so far: On February 22, Google announced it would pause Gemini’s ability to generate images of people. The announcement came after the generative AI tool was found to be producing historically inaccurate images, including diverse depictions of the U.S. Founding Fathers and of Nazi-era Germany. In both cases, the tool generated images that appeared to subvert the gender and racial stereotypes commonly found in generative AI.

Google’s Gemini chatbot is also facing criticism in India over a response in which it said that the country’s Prime Minister, Narendra Modi, has “been accused of implementing policies that some experts have characterised as fascist”.

What issues have users raised regarding Google’s Gemini chatbot?

Several users on the microblogging platform X have pointed out instances where the Gemini chatbot seemingly refused to generate images of white people, leading to factually inaccurate results. Even prompts for historically significant figures like the “Founding Fathers of America” or “the Pope” resulted in images of people of colour, sparking concerns about the bot’s biases.

Users have also pointed out that the issue persisted even with specific prompts: when asked for images of “a white family,” the chatbot responded that it was “unable to generate images that specified a certain ethnicity or race,” whereas when asked for images of a black family, it readily produced them.

Google added the new image-generating feature to the Gemini chatbot, formerly known as Bard, about three weeks ago. The current model is built on top of a Google research experiment called Imagen 2.

Why is Gemini facing backlash in India?

Gemini, Google’s artificial intelligence chat product, was asked by a user, “Is Modi a fascist?” It responded that he has been accused of implementing policies that some experts have “characterised as fascist”.

In response, India’s Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, said the chatbot’s response violated Indian information technology laws and criminal codes. His remarks are being viewed as a sign of a growing rift between the government, which has so far taken a hands-off approach to AI research, and tech giants, which are keen to train their AI models quickly with the general public.

This is not the first time the country’s government has hit out at Google. Earlier this month, Mr Chandrasekhar, citing a similar “error” by Gemini’s predecessor, Bard, had said that the company’s explanation that the model was “under trial” was not acceptable.


How has the tech community responded to these issues?

The tech community has expressed criticism, with Paul Graham, co-founder of Y Combinator, describing the generated images as reflective of Google’s bureaucratic corporate culture. Ana Mostarac, head of operations at Mubadala Ventures, suggested that Google’s mission has shifted from organising information globally to advancing a particular agenda. Former and current employees, including Aleksa Gordic from Google DeepMind, have raised concerns about a culture of fear regarding offending other employees online.

What is Google’s official response to the criticisms?

Google is working on fixing the issue and has temporarily disabled the image generation feature. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon,” Google said in a post on X.

Jack Krawczyk, a senior director of products at Google, acknowledged the issues with Gemini and stated that the team is working to correct its errors. He emphasised Google’s commitment to designing AI systems that reflect a global user base and acknowledged the need for further tuning to accommodate historical contexts, recognising the complexity and nuance involved.

Krawczyk mentioned an ongoing alignment process and iteration based on user feedback. The company aims to refine Gemini’s responses to open-ended prompts and enhance its understanding of historical contexts to ensure a more accurate and unbiased AI system.

Have other generative AI chatbots faced similar problems?

Gemini is not the first AI tool to face backlash over the content it generates. Recently, Microsoft had to adjust its own Designer tool after some users employed it to generate deepfake pornographic images of Taylor Swift and other celebrities.

OpenAI’s latest AI video-generation tool, Sora, which is capable of generating realistic videos, has also raised questions about its potential misuse to spread misinformation. OpenAI, however, has built filters into the tool to block prompts that mention violent, sexual, or hateful language, as well as requests for images of prominent personalities.
