Sam Altman’s dramatic return as chief executive of OpenAI means that the tech company’s board is now made up exclusively of white men. Photograph: Alex Wong/Getty Images

Where are all the ‘godmothers’ of AI? Women’s voices are not being heard

Luba Kassova

Amid the coverage of Sam Altman returning to the helm of OpenAI, women are being written out of the future of AI

“We are heading toward the best world ever,” said Sam Altman in an interview earlier this month, just before the saga of his firing and rehiring as OpenAI’s chief executive. As an expert on gender equity in news, I wondered: whose world was heading towards being the best ever?

As it turns out, the one the Altman team is crafting is largely devoid of women. My analysis amid the furore around his dismissal revealed fascinating insights: for example, of the 702 (out of 750) employees who signed the letter demanding Altman’s reinstatement, more than 75% were men, a gender imbalance that matches that identified in AI teams in McKinsey’s The State of AI in 2022 report.

After Altman’s return, OpenAI’s newly established board of directors is now made up exclusively of white men, a situation compounded by male dominance among executives. Where are the voices of female AI leaders and experts in coverage of this most dramatic of Silicon Valley stories?

Women’s role in crafting our AI-infused future and shaping the news around generative AI has concerned me for some time. From analysing data and conversations with experts, I realise that, whether as developers, news editors or AI experts, women are largely absent from the AI world.

Generative AI (GAI) relies on processing vast datasets of text, images and video, all of which have featured overwhelmingly more men than women in the past. This inherited male bias, mirrored in the news, combined with the structural gaps women face in society today, results in a narrative about GAI’s risks, limitations, opportunities and direction shaped primarily by men.

AKAS’s pronoun analysis of GDELT Project’s global online news database shows that so far this year men have been quoted 3.7 times more frequently than women in news about AI in English-speaking nations. According to the most recent Global Media Monitoring Project results, only 4% of news stories focusing on science, technology, funding discoveries and developments centred around women.

An assessment by AKAS of tech news editors in April shows that in Britain and the US only 18% and 23% respectively were female. Men are between three and five times more likely than women to be deciding what constitutes a technology story.

Concern has been expressed about the long-term threats that AI poses to humanity, but what are the immediate risks of a world reflecting predominantly men’s perspectives? We must urgently seek to intercept the damaging absence of women and the lack of understanding of their needs, worries and experiences related to AI.

According to 2022 Pew Research Center data, US women are between 8% and 16% more concerned than men about a variety of AI developments, from diagnosing medical problems to performing repetitive tasks.

Given women’s peripheral presence in the AI industry and muted voice in news, their concerns are unlikely to be captured, let alone addressed in future developments.

Leslie McIntosh, vice-president of research integrity at Digital Science, says: “If your perspective is not reported, you are not in the story. GAI takes those historical texts and is building and projecting our future. So where women’s missing voices were once crevices, they have now become large gaps.”

Nicholas Diakopoulos, professor in communication studies at Northwestern University in Chicago, says: “Disparities in representation of race, gender or different occupations [in generative AI models] are important, since if the media uses these kinds of models uncritically to illustrate stories, they could easily perpetuate biases embedded in the training data of the models.”

Tasha McCauley in 2014. She has just been ousted from OpenAI’s board along with Helen Toner. Photograph: Jerod Harris/Getty

According to the BBC’s head of technology forecasting, Laura Ellis, it is hard to know which existing biases AI-generated content is amplifying most because “we simply don’t know what datasets have been used to train these models”. But no one is asking these questions.

“Where are the ‘godmothers’ of AI?” asks Ellis.

So, who are the experts amplified in communicating GAI developments? Essentially, a handful of white western men, mostly from the US.


Lynette Mukami, social, search and analytics editor at Kenya’s Nation Media Group, a leading news outlet, argues that conversation about AI centres on profits and efficiencies, with little focus on its potential for social good.

“A lot of this has to do with who is leading the conversation. It’s white men, so the perspectives are of white men. If we had more female techies and thought-leaders in the conversation, we might see some different AI solutions,” she says.

Louder than any other voice in AI news is that of Altman, the colossus of the multibillion-dollar GAI industry. This concerns Prof Maite Taboada, of Simon Fraser University in Vancouver, who leads the Gender Gap Tracker: “Sam Altman went on a tour […] after launching ChatGPT and has managed to dominate the narrative since. I’m personally concerned about that and I know that many others are concerned that he’s dominating the conversation, particularly around regulation. He just takes up a lot of oxygen.”

This is borne out by AKAS’s GDELT analysis of 2023 global online news coverage: mentions of Altman in articles referencing AI are twice the combined total of 42 women in the recent Top 100 list of AI influencers in Time magazine.

Now Altman is back, heralded as a visionary who went on a voyage, fought the monster who attempted to slay him and returned a hero. His voice will most certainly continue to dominate. What will happen to the voices of those who thought he was moving too fast in the race for GAI-market dominance at the expense of humanity’s safety?

What can be done to ensure that concerns voiced by women such as Helen Toner and Tasha McCauley (both now ousted from OpenAI’s board), or those of the women on the fringes of the AI industry, are not squandered? While there is much debate about the effectiveness of the “guard rails” (coding aimed at correcting data biases), I detected a consensus among the experts I spoke with: AI itself can remedy the diversity deficit.

“What gets measured, gets managed,” says Lars Damgaard Nielsen, Mediacatch.io’s chief executive, and a proponent of using AI to track gender and ethnic bias in the media.

He and other experts argue that an effective way of correcting male bias would be to use AI to measure women’s share of presence within the discourse, flagging to us humans the vital need to seek the perspectives of all genders, groups and cultures on one of the century’s most far-reaching stories.
