As of 23 February 2024, Google's new-model AI chatbot, Gemini, has been debarred from creating images of people, because it can't resist drawing racist ones. It's not that it is producing bigoted caricatures; the problem is that it is curiously reluctant to draw white people. When asked to produce images of popes, Vikings, or German soldiers from World War II, it keeps presenting figures who are black and often female. This is racist in two directions: it erases white people while putting Nazis in blackface. The company has had to apologise for producing a service that is historically inaccurate and, perhaps even worse for an engineering company, broken.
This cock-up raises many questions, but the one that sticks in my mind is: Why didn't anyone at Google notice this during development? At one level, the answer is obvious: this behaviour is not some bug that merely went unnoticed; it was deliberately engineered. After all, an unguided mechanical process is not going to figure out what Nazi uniforms looked like while somehow drawing the conclusion that the soldiers in them looked like Africans. Indeed, some of the texts that Gemini provides along with the images hint that it is secretly rewriting users' prompts to request more "diversity."
The real questions, then, are: Why would Google deliberately engineer a system broken enough to serve up risible lies to its users? And why did no one point out the problems with Gemini at an earlier stage?
Part of the problem is clearly the culture at Google. It is a culture that discourages employees from making politically incorrect observations. And even if an employee did not fear being fired for her views, why would she take on the risk and effort of speaking out if she felt the company would pay no attention to her? Indeed, perhaps some employees did speak up about the problems with Gemini, and were quietly ignored.
The staff at Google know that the company has a history of bowing to employee activism if, and only if, it comes from the progressive left, and that it will often do so even at the expense of the business itself or of other employees. The most infamous case is that of James Damore, who was terminated for questioning Google's diversity policies. (Damore speculated that the paucity of women in tech might reflect statistical differences in male and female interests, rather than simply a sexist culture.) But Google also left a great deal of money on the table when employee complaints caused it to pull out of a contract to provide AI for the US military's Project Maven. (To its credit, Google has also severely limited its access to the Chinese market rather than be complicit in CCP censorship. Yet, like all such companies, Google now happily complies with take-down demands from many countries, and Gemini even refuses to draw pictures of the Tiananmen Square massacre or otherwise offend the Chinese government.)
There have been other internal ructions at Google in the past. For example, in 2021, Asian Googlers complained that a rap video recommending that burglars target Chinese people's houses was left up on YouTube. Although the company was forced to admit that the video was "highly offensive and [...] painful for many to watch," it stayed up under an exception clause allowing greater leeway for videos "with an Educational, Documentary, Scientific or Artistic context." Many blocked and demonetised YouTubers might be surprised that this exception exists. Asian Googlers might well suspect that the real reason the exception was applied here (and not in other cases) is that Asians rank low on the progressive stack.