Google admits it tuned its AI image generation to produce nonsense:

When we built this feature in Gemini, we tuned it to ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people. And because our users come from all over the world, we want it to work well for everyone. If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic).

However, if you prompt Gemini for images of a specific type of person — such as “a Black teacher in a classroom,” or “a white veterinarian with a dog” — or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for.

So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.

These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong.

What happened with Gemini image generation (blog.google)

By admitting they tuned the system to produce a desired result, Google invites questions about whether its search services have been tuned in the same way.

All of the examples that have been shown in the media or online (that I could find) were biased in one political direction – towards the left. If this were a random problem, there should be some examples of bias towards the right. For example, a request to draw a photo of a Catholic Pope produced images of a Black or Indian woman; would a request to draw a photo of a U.S. slave in 1850 occasionally produce images of white men or women? That this did not seem to occur implies the system was manually tuned to generate the incorrect images.

And that leads directly to wondering whether Google’s search results have been tuned to produce someone’s desired result. If so, Google and our future AI overlords have the potential to become the most powerful forms of propaganda in world history.

For Google to restore trust, it needs to be completely transparent about how this bias occurred in its systems and how it will be prevented in the future.

A good start would be to create an “audit” function for their AI that explains how a prompt led to a specific output. I built something like that in an early “expert system” that I created in the 1980s – the user could see precisely how the conclusion was reached. A rough sketch of that idea follows.
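To make the idea concrete, here is a minimal sketch of that kind of audit trail: a tiny forward-chaining rule engine that records every rule that fires, so the user can replay exactly how a conclusion was reached. The rules and facts below are purely hypothetical illustrations of my own, not anything from Google’s systems or my original 1980s program.

```python
from dataclasses import dataclass, field


@dataclass
class Rule:
    name: str
    conditions: frozenset   # facts that must all be present for the rule to fire
    conclusion: str         # fact added when the rule fires


@dataclass
class Engine:
    rules: list
    trace: list = field(default_factory=list)   # the audit trail

    def run(self, facts: set) -> set:
        """Forward-chain until no new facts appear, logging every firing."""
        changed = True
        while changed:
            changed = False
            for rule in self.rules:
                if rule.conditions <= facts and rule.conclusion not in facts:
                    facts.add(rule.conclusion)
                    self.trace.append(
                        f"{rule.name}: {sorted(rule.conditions)} -> {rule.conclusion}"
                    )
                    changed = True
        return facts

    def explain(self) -> str:
        """Return the audit trail: every inference step, in order."""
        return "\n".join(self.trace)


# Hypothetical example rules about handling an image prompt.
engine = Engine(rules=[
    Rule("R1", frozenset({"prompt names a specific historical context"}),
         "do not diversify depicted people"),
    Rule("R2", frozenset({"do not diversify depicted people",
                          "prompt names a real historical figure"}),
         "render the figure as historically documented"),
])

engine.run({"prompt names a specific historical context",
            "prompt names a real historical figure"})
print(engine.explain())
```

The point is not the specific rules but the trace: every conclusion can be tied back to the conditions that produced it, which is exactly the kind of transparency that a generative model’s opaque tuning currently lacks.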

Update: Google CEO tells employees Gemini AI blunder ‘unacceptable’ (cnbc.com)

“I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong,” Pichai said.

Coldstreams