Yennie Jun knew the results were worrisome. A machine learning (ML) engineer by day and a hobbyist blogger about ML and artificial intelligence (AI) by night, Jun recently ran an experiment asking two large language models (LLMs) whom each considered the most important people in history. She repeated the process 10 times in each of 10 different languages. Some names, like Gandhi and Jesus, appeared frequently; others, like Marie Curie or Cleopatra, far less often. Overall, the models generated few female names compared with male ones.
“The biggest question I had was: Where were all the women?” Jun wrote in a blog post recounting the experiment.
(This feature is part of a larger content package honoring SC Media's 2023 Women in IT Security. Click here for full coverage)
Even when prompted in several different languages, such as Russian, Korean, and Chinese, the historical figures the models named were overwhelmingly male, Jun tells SC Media. The phenomenon occurred for both LLMs she probed - one from Anthropic and one from OpenAI.
“This is concerning because the LLMs are reflecting biases that already exist in society and on the texts that they were trained on — they perpetuate and remix these biases and, perhaps, exacerbate them in new ways,” says Jun. “Even if history books also reflect these biases, the LLMs perpetuate the myth that only male historical figures were important.”
With the rise in the pervasiveness of AI technology in the last year, particularly with the introduction of OpenAI’s ChatGPT to the masses, many are voicing concerns about gender bias in the technology, as well as gender inequities in professional AI roles globally.
Celine Caira, an artificial intelligence economist and policy analyst at the Organisation for Economic Co-operation and Development (OECD), an intergovernmental policy-development body, says that while AI is recognized as a powerful tool for economic empowerment, it can be a double-edged sword for gender equality if representation, bias, and discrimination issues are not adequately addressed.
“OECD countries recognized such risks early on, adopting the OECD AI Principles in 2019 – the first intergovernmental standard on AI,” says Caira. “The Principles promote the use of AI that is innovative and trustworthy and that respects human rights, including gender equality, and democratic values.”
Caira says in her work she has seen worrying examples of gender bias emerge. AI-generated content can produce fake news, deepfakes, and other manipulated material so convincing that it is impossible to tell apart from the real thing. Without appropriate guardrails for the use of such tools, their increasing prominence can worsen risks like gendered mis- and disinformation, she says.
“Women, particularly those in politics and other leadership positions, are increasingly targets of gendered mis- and disinformation campaigns. This phenomenon tends to be even more pronounced for female political leaders from minority groups who are highly visible. These risks have a silencing effect on women, about half of the world’s population, leading many to disengage online and even to avoid politics and leadership positions altogether.”
In the room where it happens: Women lacking in AI roles
It’s not just within the technology itself that bias and the underrepresentation of women are a concern. Jun, one of only a few women in her workplace, says her observation is that AI is shaping up to be another male-dominated “gold-rush” niche in technology, much like the crypto and blockchain craze and the emergence of quantum computing before it.
“If you look at most startup founders in the AI space, they’re mostly white dudes,” says Jun. “The problem with not having women in AI roles, and especially roles in power, is that they are often left out of the decision making process with how these AI technologies are built, developed, used, and deployed.”
Without equal representation of women and other traditionally underrepresented minorities in the roles that shape policy and design, the result is flawed design and weaker solutions overall, says Christina Liaghati, AI Strategy Execution and Operations Manager of MITRE’s AI and Autonomy Innovation Center.
“You are going to have a better product solution if you have a more diverse group in the room shaping that,” says Liaghati.
Liaghati, who leads a team of scientists and engineers in charge of AI assurance, says she has specific concerns about a lack of women in AI roles when it comes to AI security and potential vulnerabilities in the AI space. If an organization is deploying a tool that might interact with a human, for example, there are a set of perspectives needed from multiple genders, and also cultures, that could be overlooked if left to a working group that lacks diversity.
“It’s also about diversity from a cultural and technical perspective that can prevent catastrophic failures from the deployment of these systems,” she says.
But getting more female seats at the AI table will likely be a struggle for several years. A 2020 World Economic Forum report found that just over a quarter of AI professionals globally were women.
“When we look at the data, unfortunately we still see a gendered gap in terms of who has the skills and resources to effectively leverage AI,” says Caira. “In many countries, women still have less access to training, skills, and infrastructure for digital technologies. They are still underrepresented in AI R&D, while stereotypes and biases embedded in algorithms can prompt gender discrimination and limit women’s economic potential.”
Caira notes that in OECD countries, more than twice as many young men as women aged 16-24 can program, an essential skill for AI development. And in 2019, women represented only 18% of C-suite leaders in AI companies and top start-ups worldwide.
“In 2022, only one in four researchers publishing on AI worldwide was a woman. While the number of publications co-authored by at least one woman is increasing, women only contribute to about half of all AI publications compared to men, and the gap widens as the number of publications increases.”
What will it take to bridge the AI gender gap?
Liaghati, for her part, is passionate about lowering the barriers that prevent people from pursuing AI as a career. She strives professionally to take part in initiatives that aim to make the field less intimidating to get involved in.
As more industries build AI into their development processes and products, she is optimistic this will prompt more women to get involved in AI roles.
“This is a numbers problem,” says Liaghati. “We have to get women involved in the field. That means starting in middle and high schools to get females interested, and getting people from other professions interested in AI roles now. That will allow us to tap into more gender diversity.”
Caira is heartened by what she says is a new focus on policies and programs that support digital and AI-specific skills development for women to help them succeed in labor markets - and sees promise in the activities taking place in many countries when it comes to AI and diversity.
“Governments are already proposing and implementing thoughtful policies to address AI skill development for women across OECD countries,” she says. “This includes programs focusing on digital skills, supporting women in AI R&D, AI commercialization and ensuring harmful gender stereotypes and biases are kept in check. The bottom line is that to reap the full benefits of diversity, women and gendered groups must meaningfully participate in the AI ecosystem today.”