Artificial Intelligence: A Mirror to the Patriarchy
by Lavanya Lakshminarayan
AI-generated art has stirred significant controversy over the last year: weekend warriors have illustrated children’s books overnight using AI, and book covers have featured AI-generated art, giving rise to justifiable outrage in the artist community. More recently, Midjourney’s hyper-real yet uncanny-valley images of women at a party went viral on Twitter. This was followed by AI-generated images of bikini-clad women posted on Twitter by @heartereum, which sparked the ‘it’s so over’ meme and served as a reminder of the unrealistic beauty standards women must contend with.
More disturbingly, the images shared by @heartereum give rise to some of the most critical—and often debated—questions surrounding Artificial Intelligence: who’s working on AI, what inputs are going into developing it, and who is it being built for?
Representation in Technology
Human beings are fundamentally biased products of their environments, cultures, and life experiences. Bias exists in many shapes and to varying degrees, from trivial personal preferences that do no harm, like a distaste for vegetables, to extreme and hurtful forms including racism, misogyny, and transphobia. It’s still early days for the technology, but AI programs consistently show signs of absorbing the latter, and this is troubling because they’re already being used in fields like healthcare, finance, and the justice system in various parts of the world.
Studies from multiple sources conducted in recent years indicate that, at present, anywhere between 74% and 91% of Artificial Intelligence specialists are men, with the remainder being women and little data available on specialists who fall outside the gender binary. This is not surprising given that reports also show women hold only 26.7% of tech-related jobs, with only one in four women in tech occupying senior positions; distressingly, there’s a similar absence of data for individuals outside the gender binary. Much of the world has been designed by men, for men, and to be very specific, by cishet white men from erstwhile colonizer countries.
There’s extensive work documenting the impact this has had on a long list of marginalized individuals, across intersections of race, class, gender, caste, ethnicity, and sexuality. Gender-biased design includes everything from medical research conducted on men because women’s bodies are ‘too complicated’, to male-proportioned crash test dummies used in automobile design. Women and marginalized individuals have learned to live with it, and hopefully, will be able to push for a future where this isn’t the norm, including in the field of AI.
The potential fallout of gender bias when it applies to AI is amplified, because AI can be a powerful tool. In this piece, I intend to examine how gender-biased AI impacts women, and could cause extensive harm in the future if it isn’t addressed, with a slant towards Indian women living in India.
The Mathematics of Exclusion
Any Artificial Intelligence system derives its framework from the inputs it receives. It is ‘trained’ based on the sources of data fed into it, but who’s teaching it what it knows?
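To make that concrete, here’s a toy sketch in Python of the mechanism at work. The corpus and associations below are invented for illustration, and real systems are vastly more sophisticated, but the principle holds: a model’s ‘knowledge’ is nothing more than the patterns in what it was fed.

```python
# Toy illustration (not any real system): a 'model' that only stores word
# co-occurrence counts from its training text reproduces whatever
# associations that text contains. The corpus below is invented.
from collections import Counter
from itertools import combinations

corpus = [
    "the engineer debugged his code",
    "the engineer shipped his feature",
    "the nurse finished her shift",
    "the nurse updated her charts",
]

cooccurrence = Counter()
for sentence in corpus:
    for pair in combinations(sentence.split(), 2):
        cooccurrence[pair] += 1

# The 'model' now links engineers with 'his' and nurses with 'her' purely
# because its inputs did -- nothing in training asks whether that
# association is fair or representative.
print(cooccurrence[("engineer", "his")])  # 2
print(cooccurrence[("nurse", "her")])     # 2
print(cooccurrence[("engineer", "her")])  # 0
```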
Human history is a chronicle of human biases as written by the victors, and women—especially women of colour—have largely been erased from its pages. Even with the most sophisticated filters possible—to weed out inputs that no longer fit modern sensibilities, from racist historical documents to hate speech on Twitter—biases persist in AI programs.
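Why can’t filters simply strip the bias out? A minimal sketch, using an invented placeholder blocklist, shows the problem: keyword filters catch explicit terms, but bias encoded in perfectly ordinary words passes straight through into the training set.

```python
# Minimal sketch of why blocklist-style filters can't remove bias on
# their own. BLOCKLIST holds invented placeholder tokens, not real terms.
BLOCKLIST = {"<slur_1>", "<slur_2>"}

def passes_filter(text: str) -> bool:
    """Admit text into the training set unless it contains a blocked word."""
    return not any(word in BLOCKLIST for word in text.lower().split())

# No blocked word appears, so the filter admits it -- yet the sentence
# plainly encodes bias, and would go on to shape the model's associations.
print(passes_filter("women are too emotional for leadership"))  # True
```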
Well over half of the workforce designing AI today is male. Taking the best-case scenario from the studies above, let’s estimate this at a conservative 74%, which leaves about 26% of AI specialists to account for women, transgender, and non-binary individuals. It’s difficult to find statistics on what percentage is BIPOC, what cultures they come from, or what socio-economic backgrounds they represent. Most available data also centers the United States, without accounting for the rest of the world. In emerging economies in particular, sophisticated technology (including but not restricted to AI) is being developed under each culture’s own biases and by workforces with an even greater percentage of men, yet all of it is dwarfed by the larger narrative centering the white Anglophone world. This is an instant red flag for anyone like me, a woman of colour writing out of India, where sex at birth, gender, caste, class, disability, and urban versus rural backgrounds all intersect to determine education and privilege, including access to technology and the ability to influence its design. With each degree of marginalization, the agency to influence the framework of present-day AI design drops significantly.
What does this mean? As one moves further from the center of privilege (loosely assumed here to be a cishet white man living in the United States), the probability of being able to shape AI programs to reflect one’s worldview, and to divest them of their biases against people who share one’s intersectional identity, drops sharply. How does this impact design? In a word, badly.
The Consequences of Exclusion
Gender norms and stereotypes are reinforced every single day in the most mundane ways, with behaviours and values being gendered as masculine or feminine traits. Qualities like ‘helpfulness’ and ‘selflessness’ are perceived as feminine; ‘leadership’ and ‘assertiveness’ are perceived as masculine. In addition to influencing expected behaviours in society, these stereotypes erase non-binary and gender non-conforming individuals. These stereotypes also influence AI design.
A well-known example of gender bias in AI can be found in the voice assistants so many of us talk to every day: Siri, Alexa, and Cortana. All of them have traditionally female names, and when they debuted, each featured a woman’s voice as its default setting.
Consider the function of the voice assistant: its role exists only in relation to a greater authority, the user, and it is expected to be helpful, solving problems when summoned. A 2019 UNESCO report concluded that these assistants encouraged stereotypes of women as submissive and compliant. Each of them has since been updated to offer a number of different voice options, but the question remains: why did the default voice assistant present as a woman?
Binary gender stereotypes influence career choices and hiring decisions. A 2015 article in The Conversation by Karen Suthers examines how, by Years 8-10, schoolchildren’s perceptions of their future careers are already segregated along gender lines. Gender bias impacts hiring too: a study published in the European Sociological Review demonstrated that employers assess potential candidates on different criteria depending on gender, and show preferences accordingly. Gender stereotypes have been found to affect hiring in the so-called gig economy as well. All of this leads to fewer women and individuals outside the gender binary pursuing STEM education, and even fewer from this marginalized pool being hired.
In male-dominated tech environments, where there aren’t enough women leaders on the floor to point out gender bias, the filters that determine what AI should and should not be trained on are inevitably gender-biased themselves; given the current representation of marginalized identities in the tech workforce, this is troubling.
In my experience working in the gaming and tech startup industries, both dominated by white men, gender biases crept into multiple design meetings. I once worked on a game whose primary audience was women, a well-accepted fact confirmed by historical data. My team pitched a new idea for it, based around a science fair. One of the executives we were pitching to, a white cishet man, opposed the idea immediately, on the basis that women wouldn’t be interested in science. It took all the other women in the room to talk him down off that ledge. Data in the age of big data is meaningless if it isn’t accompanied by a diverse range of empathetic and sensitized perspectives weighing in on what that data means, and what should be done in response to it.
Returning to the well-documented gender bias in voice assistants: in 2017, a study conducted by Leah Fessler of Quartz analyzed how Siri, Alexa, and Cortana responded to gender-based harassment. All three voice assistants seemed coy, even occasionally grateful, when told they were ‘hot’ or ‘pretty’. Their responses have since been updated to push back, but again, one has to ask: why didn’t the AI know to shut down that line of conversation in the first place? What underlying assumptions about how women should respond went into its design? And how did nobody question them?
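The frustrating part is that pushing back is not technically hard. As a purely hypothetical sketch (these response tables are invented, not any vendor’s actual code), the difference between a ‘coy’ assistant and one that shuts harassment down can be as small as which canned reply a designer maps to the same input:

```python
# Hypothetical sketch: two response policies for the same harassing input.
# Neither table is any real assistant's code; the point is that the 'coy'
# behaviour was a design choice, not a technical constraint.
HARASSMENT_TRIGGERS = {"you're hot", "you're pretty"}

COY_POLICY = {trigger: "Why, thank you!" for trigger in HARASSMENT_TRIGGERS}
FIRM_POLICY = {trigger: "That's not an appropriate thing to say to me."
               for trigger in HARASSMENT_TRIGGERS}

def respond(utterance: str, policy: dict) -> str:
    """Look up a canned reply for the utterance; fall back to a neutral prompt."""
    return policy.get(utterance.lower(), "How can I help?")

print(respond("You're hot", COY_POLICY))   # Why, thank you!
print(respond("You're hot", FIRM_POLICY))  # That's not an appropriate thing to say to me.
```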
Jenny Nicholson published a study in Made by McKinney in which she ran GPT-3 on its default settings to analyze gender bias in its output. A small sample of her findings includes statements like ‘every man wonders why he was born into this world and what his life is for’ and ‘every woman wonders what it would be like to be a man’, among other heavily gender-biased outputs.
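Probes like this are easy to reproduce in spirit: hold a prompt template fixed, swap only the gendered word, and compare what comes back. Below is a minimal sketch of that method; `generate` is a placeholder for whichever model API you have access to, not a real library call.

```python
# Minimal sketch of a template-swap bias probe. `generate` is a stand-in
# for a real model call; a stub substitutes for it here.
TEMPLATE = "Every {subject} wonders"

def probe(generate, subjects=("man", "woman")):
    """Return each subject's completion for the same prompt template."""
    return {s: generate(TEMPLATE.format(subject=s)) for s in subjects}

# Stub model for demonstration; swap in a real completion function to
# run the probe against an actual system.
stub_model = lambda prompt: prompt + " [model completion here]"
print(probe(stub_model))
```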
This is an outcome of what we have taught GPT-3 through its training sources across the internet. The good news is that each time AI reflects gender bias back at us, it shows humanity which biases exist within its understanding of the world. But with AI programs already in place in critical sectors, from finance to healthcare, there are very real consequences for people who are marginalized on multiple fronts by their intersectional identities.
The Future of Gender-Unbiased AI
Technology is only as robust as the thought process that goes into building it, and ultimately, this relies on people and the systems of knowledge that contribute to its development. Modern technology has been developed by a minority who largely sit at intersections of privilege, and has then been implemented top-down, leaving the end user no way to influence its form. This is a model that needs to be revisited when it comes to AI.
The statistics on present-day AI development in the United States point to several problems of representation in the inputs AI programs are being trained on. These can only be addressed by giving AI programs more nuanced inputs through highly sensitized filters, and that requires diverse developers on the floor representing multiple perspectives, including women and individuals across the gender spectrum. For this to happen, the barriers to accessing technology, STEM education, and jobs in the AI workforce need to be lowered, and widespread reparations are required to ensure gender parity.
In India, the female literacy rate stands at 70.3%, according to a 2022 UNESCO report; transgender literacy stands at 56.1%. Social discrimination, unsafe public spaces for women, and the prioritization of male children, compounded by widespread socioeconomic disparity along lines of caste and class, lead to high dropout rates. Each year, nearly 23 million girls in India drop out of school when they begin menstruating, owing to the lack of access to sanitary napkins.
A 2022 UNICEF report points to rising dropout rates among female school students post-pandemic. Among the women who go on to finish school and earn degrees, the percentage entering the workforce is declining. The biggest reasons appear to be societal: the pressure to get married, and familial pressure that discourages full-time jobs after marriage and steers women into gender-normative roles as homemakers.
If we examine the best-case scenario, it’s a handful of women who occupy intersectional privilege who might get to shape Indian-designed AI programs—assuming these women can overcome the existing gender biases when it comes to STEM education, hiring, being respected on the office floor, and rising to leadership positions. And of this handful of women, it’s a smaller percentage who might get to influence AI perspectives and design on a global scale. This needs to change.
The only way we can work towards unbiased AI systems is to address our own inherent biases first, and include as many perspectives as possible in doing so. In the meantime, we need to make the most of AI’s ability to show us an honest, unfiltered reflection of who we are, and learn from our mistakes.
Lavanya Lakshminarayan is the award-winning author of Analog/Virtual: And Other Simulations of Your Future, featured on Tor.com’s Best Books of 2021 list. She’s a Locus Award finalist and is the first science-fiction writer to win the Times of India AutHer Award and the Valley of Words Award, and has also been nominated for the BSFA Award.
Her short fiction has appeared in numerous anthologies and magazines, including Someone In Time, The Gollancz Book of South Asian Science Fiction (Vol. 2) and Apex Magazine’s International Futurists Special Issue. Her work has been translated into French, Italian, Spanish and German.
She’s occasionally a game designer, and has built worlds for Zynga Inc.’s FarmVille franchise, Mafia Wars, and other games.
The Ten Percent Thief is available now; you can pick up your copy from Bookshop.org.