
These Women Refuse To Be Hidden Figures In The Development Of AI


Hidden Figures, written by Margot Lee Shetterly, tells the true story of the African-American women mathematicians who worked at NASA in the early days of the U.S. space program. These women and their contributions were often overlooked, and they worked at a time when both racial and gender discrimination were openly practiced. Despite this, they made significant contributions to the success of NASA’s missions, including the Mercury and Apollo programs. Their achievements, however, were hidden from public view; their stories were not widely known until much later.

That déjà vu is being replayed today. It was a slap heard across AI and the wider tech community, delivering a sharp blow to the many women who have tirelessly contributed to advancements in artificial intelligence. A recent New York Times article regrettably overlooked the monumental contributions of prominent women such as Fei-Fei Li, Timnit Gebru, Joy Buolamwini, Abeba Birhane, Margaret Mitchell and many others. This narrative played out against the backdrop of the dramatic return of Sam Altman to OpenAI, with the backing of Microsoft, a mere five days after he was ousted.

This convergence of events shed light on a disconcerting reality: the increasing marginalization of women in artificial intelligence. Despite the relentless efforts of women scientists, engineers, policymakers, and AI professionals, a glaring lack of recognition and respect for their work and their voices persists, and it seems emblematic of both industry and media. The week that thrust Sam Altman into the limelight drew both industry praise and criticism as two female board members found themselves unceremoniously ousted from OpenAI. Within that week, profit became the decisive winner in a match that outgunned ethics and governance and left Altman relatively unscathed.

This has sparked a heated debate, not only within the AI world but across industries, illuminating the ongoing struggle of many women and non-binary people for the recognition they deserve. The perpetual underrepresentation of these voices, even as the technology makes substantial progress, unfolds at a critical juncture, just as questions about the implications of advanced artificial general intelligence (AGI) for humanity are gaining prominence.

I reached out to Theodora Lau, Founder of Unconventional Ventures, a public speaker, and an advisor. She is the co-author of The Metaverse Economy and Beyond Good. Lau’s recent blog post, Where are the Women? had me nodding my head and questioning why this is happening today.

I also reached out to other voices within the tech and AI community to weigh in on these contentious events and their implications for AI development, where representation matters as these technologies are built: Karen Bennet, VP of Engineering at xplAInr.AI, formerly of IBM and Red Hat, and Vice-Chair of the IEEE SSIT Committee; Margaret Mitchell, Researcher and Chief Ethics Scientist at Hugging Face; Volha Litvinets, Senior Risk Consultant at Ernst & Young; Mia Dand, Founder of Women in AI Ethics; Stephanie Lipp, Founder of MycoFutures; Staci LaToison, Founder of Dream Big Ventures; Kelly Lyons, Interim Director of the Schwartz Reisman Institute for Technology and Society; and Victoria Hailey, CMC of The Victoria Hailey Group Corporation.

As these events unfolded, what went through Lau’s mind was, “Yet again!!”

“Those were the two words that came to my mind. I think for a lot of us who have been watching how the tech sector has been evolving, not just this year but even years before this, we’re all very familiar with the tune of, ‘We’re not here. We’re not at the table.’ What makes this hurt more than others is what it means now: there is no lack of hype about how artificial intelligence will change everything we do, how we work, how we live… For something to be transformative that will impact everyone on the planet, you can’t say this is a shared future unless people are represented at the table when those decisions are being made.”

Lau further argued that what a technology enables depends on whose interests it serves, and on who is disregarded or harmed in the process. The replacement of two female board members who had voted to remove Altman as CEO was a clear demonstration of an organization that would ultimately serve its own interests. This raises the question of whether effective governance at OpenAI exists, with no real separation between the for-profit and non-profit sides, no diverse perspectives, and no real accountability as the technology moves forward.

Meredith Whittaker, the President of Signal, expressed skepticism about the OpenAI debacle in a recent Wired article and cast doubt on whether adding a single woman or person of colour to the board would lead to meaningful change. She also questioned whether an expanded board would be genuinely capable of challenging Altman and his allies, arguing that checking off a box for diversity without challenging the existing power structure would “amount to nothing more than diversity theater.” She said, “We’re not going to solve the issue—that AI is in the hands of concentrated capital at present—by simply hiring more diverse people to fulfill the incentives of concentrated capital.”

Staci LaToison, Founder of Dream Big Ventures, is an investor as well as a catalyst for change, backing women-led startups and diverse groups through capital and empowerment. For LaToison, the ousting of the two female board members was not only disappointing but also “regressive.” She emphasized the importance of diverse leadership to technological progress: “Diverse boards are not ‘nice-to-haves’; they are a necessity for any organization that claims to be forward-thinking and innovative… diversity in leadership leads to better decision-making, greater creativity, and improved financial performance…”

Victoria Hailey, CMC of The Victoria Hailey Group Corporation and International Convener of the ISO/IEC JTC1/SC7/WG10 Maturity Study Group, worked at IBM for many years. She understood very early on that IBM’s structure was a male construct: women were expected to work in technical roles, while leadership opportunities were reserved for men. Hailey became an auditor, assessing processes and systems across different business models. According to Hailey, the environment has since changed,

“In the IBM days, technology development centered around robust processes, with a focus on testing and meeting customer requirements. Over time, there has been a shift away from prioritizing immediate customer needs to a primary emphasis on speed to market, being the first to introduce products. This departure has discarded the established principles, skills, tools, processes, models, and frameworks developed over the past three decades to ensure reliable software deployment.

Corporate responsibility has shifted towards maximizing shareholder value, potentially neglecting customer satisfaction. This transition from prioritizing quality to a ‘first to market’ approach has abandoned what I refer to as ‘safety mechanisms.’ Here, I use the term not in the context of engineering safety methods but as safeguards that traditionally ensured the reliability of software releases.

This shift has resulted in notable consequences, particularly evident in platforms such as social media. The rapid advancement without looking back lacks any sense of social responsibility, including considerations of the triple bottom line (social, economic, and environmental factors), and reflects a relentless pursuit without a comprehensive view. It’s the race to go forward and that is the male bias. That is the aggression, takeover mentality, and a drive towards AI dominance that has taken precedence, leading to a notable shift in perspective and real human consequences in the industry.”

The more things change…

The ingrained systems that favor ‘inherited male bias’ won’t be upended

Whittaker pointed out that the incentives of “concentrated capital” will further hinder the necessary change. Lau, for her part, acknowledged that change will not be immediate,

“When COVID hit, we thought it might level the playing field. Everyone had to use screens, maybe that would be a turning point. But in 2023, we’ve seen funding for women drop, and money go more to men instead of underrepresented founders. Now, companies are being sued for having diversity and inclusion programs. People claim it’s discrimination, even if those efforts are just crumbs compared to what’s needed. The problem is many are okay with the old boys’ club and the way things have been. They don’t see the need for change because it’s been working for them. This is a big issue, especially in AI. The tools are often developed by and for western cultures. They benefit those who speak English and are part of that culture. What about the rest of the world and those who don’t speak English as their first language? Technology can either bridge gaps or widen them… In the background, the same old group benefits from technology, keeping most of the capital. You don’t need a crystal ball to see where this is going.”

Who benefits? AI often favors those who created it. The systems that pervade society, whether applying for credit or a mortgage, applying for a job, or seeking car or home insurance, rest on corporate policies and processes for adjudicating applicants that have been honed over the years. Expecting an organization to challenge its own well-oiled machine, one that minimizes risk and maximizes profit, is almost paradoxical. Redlining, the systematic denial of mortgages, loans, and other financial services using location as a proxy for income, was, as Lau points out, an accepted practice for many years: “It’s not technically allowed anymore, however the effects of historical discriminatory practices can be witnessed in disadvantaged neighbourhoods. This historical data, which reflects the discrimination against certain populations, is now used in AI systems for lending decisions.” This is the real danger when systems are not interrogated, and when established practices cannot be questioned and are allowed to persist.

The systemic practices that favor this ‘inherited male bias’, according to this Guardian article, extend into the media as well. The article references an AKAS pronoun analysis of the GDELT Project’s online news database, which revealed the following:

  • In 2023 “men have been quoted 3.7 times more frequently than women in the news about AI in English-speaking nations.”
  • “4% of news stories on science, technology, funding and discoveries centered around women.”
  • “Female tech news editors represented only 18% and 23% respectively of tech editors in Britain and the US.”
  • “Men were 3-5 times more likely to decide what is deemed a technology story.”

Case in point: According to the same study by AKAS in 2023, “mentions of Altman in articles referencing AI are twice the combined total of 42 women in the recent Top 100 list of AI influencers in Time magazine.”

As per Lau, “It’s as if we don’t exist.”

Stephanie Lipp is CEO & Co-founder of MycoFutures, a clean tech startup developing sustainable materials from the root systems of fungi. As a startup founder and a woman of color, she is aware of the imbalance in who gets funded in the startup ecosystem,

“These statistics reinforce long-acknowledged concerns about the way science, technology and innovation are shaped and legitimized by an incredibly narrow and persisting point of view.  One of the most serious consequences being that these spaces remain inherently exclusionary because for so long they [media spaces] thrived by correlating elitism with wisdom. 

Another consequence is that anyone outside of the narrow point of view– women, non-binary folks and people of colour–must always build more social capital, reputation and clout, and from the right places, to join the inner circle.  We are gaslit into thinking that it is simply a matter of sweat equity, that we just need to work harder, to meet the right people, to reach more milestones, and put ourselves out there, however the result is more often burnout than advancement.” 

Margaret Mitchell is Researcher and Chief Ethics Scientist at Hugging Face, and was recently named to Time magazine’s Top 100 AI Influencers list. As a woman in AI, she is not immune to the lack of female representation in this space. In 2018, Wired estimated that just 12% of leading machine learning researchers were women. The World Economic Forum likewise found in 2020 that “women make up only 26% of data and AI positions in the workforce”. This, despite women representing roughly 47% of the US labor force and, in 2019, receiving the majority of master’s and doctoral degrees from US institutions. Mitchell explains this disparity,

“As I’ve advanced in my career within AI and technology, I’ve watched brilliant colleagues around me bow out. These colleagues and friends have predominantly been women, LGBTQ+, and people with other culturally marginalized characteristics. This has meant that there are very few people within the higher levels in tech– the levels that determine culture and priorities– who have fundamentally different viewpoints from those currently in power. However, these viewpoints are critical if we want to advance AI in a way that takes them into account. We must take them into account in order to have technology that is maximally beneficial to all different kinds of people.

A key reason tech minorities leave is that the culture and environment aren’t very nice to them. And yet there is not enough care, nor even belief about the issues, from the majority of people in tech. Hence most people who are consistently marginalized just bow out. You don’t have to believe them, but they’ll just leave if you don’t.”

Women I have interviewed echo this sentiment. Women working in AI continue to struggle mentally. Work is stolen from them. Their voices are muted. The cultural expectation is that you toe the line and maintain the status quo. Because of this, many who fear for their jobs refrain from speaking out.

Karen Bennet is VP of Engineering at xplAInr.AI, a former VP of Engineering at IBM and Red Hat, the lead of many AI working groups with the IEEE, ISO and the Linux Foundation, Vice-Chair of the IEEE SSIT Committee (AI Ethics, Metaverse and Environmental Sustainability), and a member of EU AI Act and NIST EO task forces. Bennet is no stranger to environments where she has been the sole female engineer, and she has experienced challenges similar to those faced today. She knows there is work to be done, adding,

“… the narrative for women in AI is both inspiring and challenging. Many of us face hurdles, our work is sometimes eclipsed, and we endure the harsh reality of being discredited. Yet our strength prevails, and our brilliance persists. Women, much like myself, are not merely surviving; we are flourishing pioneers. We are navigating the intricate terrain of algorithms, code, and, perhaps most crucially, the regulatory nuances of AI technology to be ethical.  I’ve witnessed the struggles [of women] in both industry, academia, and regulations of AI, but I also see the resilience of the women who are working together to create a better world for humans by establishing guardrails for AI.”

Hailey, who helps organizations use, develop, and integrate AI technologies to achieve ethical and socially responsible objectives, concurs. In her experience, women in AI have often worked behind the scenes, employing the right protocols and patterns. However, merely discussing the rights of women without a fundamental change in approach will not lead to meaningful progress. To effect real change, Hailey sees women actively engaging at the technical level, challenging the prevailing governance and aligning it with values. She continues,

“The current aggressive ‘winner take all’ mentality has led to a drop in engineering discipline, exclusion of significant populations due to a male bias, and a reduction in the overall value of customer interactions. The focus on rapid market entry without thorough risk analysis has resulted in software releases that may pose harm. Attempting to circumvent this process by injecting assets like social responsibility and morality is an effort to correct the course.

Unfortunately, essential disciplines such as employee training and ethics are often discarded once negative repercussions emerge. This lack of corporate oversight and disregard for potential risks sets the stage for disastrous consequences. It’s alarming because we are being taken down a risky path without collective agreement.”

Models are developed within the very systems that are already known to us

Generative AI produces outputs from structures that have already been normalized. Mitchell underscored the churn of women, LGBTQ+ people, and other underrepresented groups within the AI community; if it continues, it all but guarantees that the status quo remains. If the structures in media and in industry continue to marginalize the very people whose input is required to create models and systems that are societally representative and valued, generative AI’s risks and direction will continue to be shaped by white men.

Wikipedia, a stunning example of gender bias, is also the “most important single source in the training of AI models.”

Volha Litvinets is a Senior Risk Consultant at Ernst & Young. I met Litvinets during a Women in AI Ethics summit, and she coaxed me into helping her with a project on Wikipedia. In 2019, Litvinets attended a UNESCO event focused on the regulation of emerging technologies and there stumbled into a Wikipedia workshop dedicated to creating biographies of women in STEM. This endeavour exposed her to the Wikipedia gender gap. In 2018, 84.7% of English Wikipedia editors reported their gender as male, 13.6% as female and 1.7% as other. A year later, Katherine Maher, then CEO of the Wikimedia Foundation, said her team’s working assumption was that women make up 15–20% of total contributors.

In 2021, a study on gender inequality and notability on Wikipedia described the gap as one of the “most pervasive and insidious forms of inequality”.

In April 2023, Wikipedia reported close to 4.5 billion unique global visitors. Wikipedia has versions in 334 languages and more than 61 million articles, ranking consistently among the world’s 10 most visited websites alongside Google, Meta and YouTube.

According to The New York Times, “Wikipedia is probably the most important single source in the training of A.I. models… Without Wikipedia, generative A.I. wouldn’t exist.”

For Litvinets, the effort to make significant change was a daunting task,

“Little did I know then, Wikipedia maintained tricky rules for biography publication. To create an article, one had to be an experienced editor with over 300 edits and a biography should meet the ‘notability requirements,’ necessitating confirmation from reliable sources like interviews and credible references. The criteria for notability often hinged on a person’s fame, but the question arises: who gets to decide who is considered to be famous?”

As a member of Women in AI Ethics, she proposed to spearhead a project to create biographies from the list of 100 Brilliant Women in AI Ethics. I collaborated with Litvinets, Erik Salvaggio and Catherine Yeo, conducting workshops to raise awareness and recruit editors. It wasn’t easy, as per Litvinets,

“Unfortunately, our initial attempts were thwarted as articles were continuously deleted for not meeting the notability requirements. I was thinking, ‘we needed to master the intricacies and do better, with a clearer understanding of the rules of the game.’”

Despite these efforts, and those of Wikipedia’s Women in Red, an initiative focused on remedying the ‘content gender gap’ by transforming red links (links to articles that do not yet exist) into blue ones, Litvinets recognizes a much larger risk now that generative AI has emerged and platforms like Wikipedia are widely used to train large language models: “This is resulting in the reproduction of historical biases and the amplification of inequalities. This means the issue is increasing exponentially, making existing inequalities in the content even more amplified. The challenge now is to improve how we train these AI models to mitigate biases and contribute to a more equitable and inclusive digital landscape.”

Today it is a race to rein in these models and lessen their risks, an endless game of whack-a-mole now that the harms have already escaped Pandora’s box. Hailey emphasized that the social safeguards that were once there are now absent: “The conventional safety principles in software development and engineering, which adhere to a ‘first, do no harm’ philosophy rooted in traditional safety and safety engineering, would typically provide a framework. This framework helps in recognizing and addressing risks, especially concerning vulnerable populations. It involves understanding who the stakeholders are and adopting a holistic systems approach to development.”

Bennet and Hailey agree that women attack a problem very differently from their male counterparts. “Women are holistic. They are intuitive… that’s why we’re in those positions to try to dismantle the system, knowing that we still have to keep the infrastructure going, otherwise things will collapse.”

Women Have Made Significant Contributions

Women Have the Numbers but Are Not Given the Podium

In the midst of all this, Mia Dand, founder of Women in AI Ethics, had just organized a summit marking the five-year anniversary of the 100 Brilliant Women in AI Ethics list. The event also addressed bridging the AI divide, focusing on the communities most vulnerable as artificial intelligence seeps into every aspect of our lives. As per Dand, there is no excuse for industry not to leverage the abundance of work that many women in AI have contributed over the years,

“The message from our recent Women in AI Ethics™ summit is clear – women refuse to be the hidden figures in AI. The lack of recognition for women’s contributions and media’s constant elevation of men as default tech experts has led to a false perception that women are not technical and not qualified to take on leadership roles in the tech industry. Rather than asking talented women to work even harder, the onus should be on the media, tech companies, and conference organizers to explain why they are continuing to exclude one half of humanity and one third of the tech workforce. Women in AI Ethics™ has done all the hard work for them; For over five years, we have published curated lists and developed a robust online directory of diverse experts in AI. Going into 2024, there is no excuse for any conference or company to have an all-white and all-male panel or team in a world filled with diverse experts.”

Stephanie Lipp, for her part, makes a concerted effort to be seen,

“I push myself to overcome imposter syndrome and allow myself to take up space and be seen.  It’s a very precarious time for startups and we have worked hard to not become a statistic of 2023, so it is sometimes challenging to be outspoken, as founders are so often reminded that relationships and reputation are everything, but I know it is important to be part of the growing voices for change.”

Karen Bennet refuses to retire, as she is passionate about creating a better world for the next generation. She actively engages with groups diverse in thought and perspective “to forge partnerships in establishing essential guardrails. These safeguards ensure that humanity remains firmly in the loop, guiding the trajectory of AI towards a future that is both innovative and ethically grounded.”

For Staci LaToison, educating Latinas in tech on financial literacy and AI is crucial to ensuring no one is left behind. “We’re equipping women with the necessary skills and knowledge to thrive in these fields… to not only learn, but build a community that supports and uplifts each other.”

Kelly Lyons is Interim Director of the Schwartz Reisman Institute for Technology and Society. For Lyons, programs such as the Women in AI series, run in collaboration with Deloitte, are “vital to surfacing important conversations and initiate meaningful connections.” She adds,

“I have been fortunate to be part of a strong network of very smart, technical women in industry and academia. We may be smaller in proportion within the technical community, but we are large in our voices, our contributions, and our support for one another. The need for initiatives increasing diversity in tech—and especially in the world of artificial intelligence—is vital. We must ensure that the people working on AI systems reflect the concerns, experiences, and identities of the populations affected by these systems. Advocating for women in the AI sector requires a strong, united voice.” 

For Mitchell, if women are to thrive within AI, organizations need to focus on how to stop being exclusionary,

“Inclusion isn’t something you add on top of a given culture; it’s something that comes from actively removing exclusionary norms, which are harder to see the more “normal” they are. This includes everything from who gets invited to meetings, to who gets added to conversation threads, to who gets mentioned by name in conversations and how their work is described. How often are you in a meeting that is only men, and someone notices it and says something about it? Within tech culture, chances are that the majority of 0% women meetings are barely noticed as such — this is an example of a skewed (biased) norm that can be equalized with active effort.”

For Victoria Hailey, the role of women is fundamental in approaching AI through a holistic lens,

“The modeling for AI is akin to a dynamic system, involving both deterministic and non-deterministic elements. The critical issue here is the collective maturity of our species as Homo sapiens.

As a species, we are in a phase of uncertainty, with various groups contributing to a technology (AI) that is intended to mature. However, the pathway to this maturity remains unclear. Women inherently understand the entire cycle of growth from learning from mistakes to the nurturing required for maturity. It’s the process of mothering, guiding a child until it reaches a point of independence.

In the context of AI development, there seems to be a deviation from this approach. We pushed for AI, knowing the risks, yet we’re blindly hoping to control the fallout, like kids playing in a sandbox full of dynamite. Women, I believe, bring a unique understanding of the essential aspects of maturity. If we are relying on technology to play a fundamental role in society, it is imperative that the development and deployment of these technologies follow a trajectory of maturity, much like the nurturing process a mother provides for a child.”

Mitchell sums it up quite nicely,

“It is possible to have environments within AI development that reflect the rich and diverse views of people all over the world, with all different kinds of life experiences, including women. But it requires a fundamental paradigm shift in who is permitted to have a seat at the table—and who is listened to—within influential discussions. This will come at the cost of top executives in the tech sector, or top university leadership in academia, needing to stretch beyond their “comfort zone” of whose voices to prioritize. For the benefit of humanity, that is a cost we must be willing to pay.”

Margaret Mitchell has provided resources ranging from how to be aware of the role you may be inadvertently playing in creating insular environments, such as by derailing critical conversations about inclusion, to strategies men in particular might use to raise up the voices of women and non-binary folks, to diagrams on how to have supportive conversations, how to understand the relationship between gaslighting and bias, and why appropriate promotional velocity for tech minorities is critical.





