
Artificial Intelligence and Fairness

Inequality in the field of AI, Gender Theory and Action

Fairness is one of the most important subjects in the field of artificial intelligence (AI), if not the most important. Fairness is the impartial and just treatment of people, without favouritism or discrimination. This is of course not an easy task, yet it has to be addressed and continuously worked on, since we all deserve to be treated fairly. But what is fair? Let us consider the discussions surrounding gender equality in the field of AI.

Gender equality, also known as sexual equality or equality of the sexes, is the state of equal ease of access to resources and opportunities regardless of gender, including economic participation and decision-making; and the state of valuing different behaviours, aspirations and needs equally, regardless of gender.

The Current Inequality in the field of AI

I would like to present a few key statistics from EqualAI’s website. EqualAI is an initiative focused on correcting and preventing unconscious bias in the development of AI. Together with leaders across business, technology and academia, it is developing guidelines, standards and tools for equal representation in the field of AI.

  • In 1984, 37 percent of computer science majors were women, but by 2014 that number had dropped to 18 percent.
  • The computing industry’s rate of U.S. job creation is three times the national average, but if trends continue, the study estimates that women will hold only 20 percent of computing jobs by 2025.
  • As few as 13.5% of the machine learning field is female, and only 18% of software developers and 21% of computer programmers are women, according to some estimates.
  • Only 4% of Americans could name a female leader in tech, and a quarter of those named “Alexa” or “Siri”.
  • All four of the top in-home virtual assistants were created with a female persona.

The founder of EqualAI has published research on this topic; one of her articles is titled Genderizing HCI (Human-Computer Interaction). In it she describes two typical approaches:

  1. To value traditional femininity.
  2. To deny differences between men and women.

She argues that being designed for can be dangerous, that sitting in a room for focus groups is okay, but that implementing things yourself is power. So what more can be done?

Another prominent place where equality in AI is studied is The AI Now Institute at New York University, an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence. In April 2019 they released a report called Discriminating Systems: Gender, Race, and Power in AI. The findings are as follows:

  • There is a diversity crisis in the AI sector across gender and race.
  • The focus on women in tech is too narrow and likely to privilege white women over others.
  • The vast majority of AI studies assume gender is binary, and commonly assign people as ‘male’ or ‘female’ based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity.
  • The focus on pipeline in recruitment has not addressed deeper issues in working cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenisation that cause people to leave or avoid the sector.
  • AI systems’ classification, detection, and prediction of race and gender need urgent re-evaluation. Appearance-based classification is scientifically flawed and easily abused. Predicting sexuality, criminality or competence (through micro-expressions) replicates patterns of bias that may justify historical inequality.

The recommendations for addressing bias in systems were: (1) transparency, publicising where AI systems are used and for what purpose; (2) rigorous testing, trials and ongoing measurement; (3) work on bias and fairness that goes beyond technical debiasing to include a wider social analysis of how AI is used in context, which necessitates a wider range of disciplinary expertise; and (4) assessments of whether certain systems should be designed at all, based on a thorough risk assessment.

Algorithmic Accountability Act

US lawmakers are proposing a bill that would require large companies to audit machine-learning-powered systems. “Large” in this case means a company that: (A) has more than $50,000,000 in average annual gross receipts over the three preceding taxable years; (B) possesses or controls personal information on more than 1,000,000 consumers or 1,000,000 consumer devices; (C) is owned or controlled by a company meeting the previous requirements; or (D) is a data broker or similar commercial entity. The bill would thus require companies to: “study and fix flawed computer algorithms that result in inaccurate, unfair, biased, or discriminatory decisions impacting Americans.”

Gender Theory in Machine Learning

In May 2018, Susan Leavy argued that the over-representation of men in the design of artificial intelligence could quietly undo decades of advances in gender equality. This was published in a paper at the International Workshop on Gender Equality in Software Engineering. If big data is ‘laden’ with stereotypical concepts of gender, it will perpetuate a bias that disadvantages women. At the time of writing she highlighted a few issues:

  • Facial recognition works best for white men.
  • Technology reflects the embedded values of its creators, and developers of artificial intelligence are overwhelmingly male.
  • Scoring systems are used to make decisions about finance and insurance, which may be problematic under these conditions.

She then discusses possible existing solutions and proposes a few of her own:

  1. Improved assessment of training data.
  2. Incorporation of concepts of fairness in algorithms.
  3. Draw on decades of existing research on the relationship between gender ideology and language; this can be incorporated into machine learning from textual data.
  4. Review of learned gender-based associations.

Gender bias can be discovered in naming (terms used to describe groupings of men and women); in the ordering of items in lists (such as listing males first); in descriptions; in gendered metaphors (bossy, bitchy etc.); and in the presence of women in text (frequency).
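Two of these checks, term frequency and list ordering, can be sketched in code. The word lists, function names and patterns below are my own illustrative choices, not from Leavy’s paper; a real audit would use curated lexicons and far more robust text processing:

```python
import re
from collections import Counter

# Illustrative word lists; a real audit would use curated lexicons
# of gendered terms, titles and pronouns.
FEMALE_TERMS = {"she", "her", "hers", "woman", "women", "female"}
MALE_TERMS = {"he", "him", "his", "man", "men", "male"}

def gendered_term_frequency(text):
    """Count male- vs female-associated terms: a crude signal for the
    'presence of women in text (frequency)' check."""
    tokens = Counter(re.findall(r"[a-z']+", text.lower()))
    return {
        "female": sum(tokens[t] for t in FEMALE_TERMS),
        "male": sum(tokens[t] for t in MALE_TERMS),
    }

def pair_orderings(text):
    """Count pairings such as 'men and women' vs 'women and men':
    a crude signal for the list-ordering ('male first') check."""
    lower = text.lower()
    male_first = len(re.findall(
        r"\b(?:men|man|he)\s+and\s+(?:women|woman|she)\b", lower))
    female_first = len(re.findall(
        r"\b(?:women|woman|she)\s+and\s+(?:men|man|he)\b", lower))
    return {"male_first": male_first, "female_first": female_first}

sample = "He chaired the panel of men and women, and she took the notes."
print(gendered_term_frequency(sample))  # {'female': 2, 'male': 2}
print(pair_orderings(sample))           # {'male_first': 1, 'female_first': 0}
```

Even such a simple count can surface large imbalances in a corpus; the harder checks she lists, such as gendered metaphors and descriptions, require semantic analysis rather than pattern matching.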

Identifying gender bias is therefore a complex, but not impossible, task. She argues, and I concur: “Advancing women’s careers in the area of Artificial Intelligence is not only a right in itself; it is essential to prevent advances in gender equality supported by decades of feminist thought being undone.”

The Making of Computer Scientists

We can change a lot within companies, but there is also an argument for understanding in more depth how we approach the education of computer scientists and related disciplines. After all, what these professionals learn and how they are conditioned shapes the products, biases and strategies that are made.

Samantha Breslin recently completed her PhD in anthropology, studying the education of computer scientists in Singapore. Her paper should of course be read at length, but it brings across an important point. I draw upon parts of her conclusion:

This “becoming” represents progression for these students towards embodying and performing a computer science habitus that is aligned with the hegemonic personhood […] the performative qualities of the hegemonic personhood were in some sense dynamic, able to accommodate a variety of forms of expression as students progressed through their studies and transition to work, while at the same time summoning a sameness in students’ intra-actions with computer science knowledge, projects of self-cultivation, and affective gendered performances. Yet, diversity does not mean equality, and entrepreneurs and hackers continued to represent exceptionalism — both in terms of being different from, and better than, the average student — embodying the right-tail end of the bell-curve in terms of grades, passion, and performance.

Habitus, in sociology, refers to ingrained habits, skills and dispositions: the way that individuals perceive the social world around them and react to it.

Breslin describes how perceptions of gender as a binary (man/woman) are reproduced through teaching practices. These categorically presented masculinities and femininities shape research on gender. Reality is translated into computing worlds where the computer scientists are the magicians. Policies, transnational discourses and practices all shape this.

There is a: “(re)production of heteronormative gender binaries as part of (trans)national computer science education […] the computer science discipline is implicitly reproduced as a teleological narrative of American and Western progress.” Additionally, she argues that in this case, teaching computer science knowledge as ahistorical and acontextual reproduces an anti-politics of gender and programming.

This is day 49 of #500daysofAI

What is #500daysofAI?
I am challenging myself to write and think about the topic of artificial intelligence for the next 500 days. Learning together is the greatest joy, so please give me feedback if you feel an article resonates with you. Thank you for reading.



Alex Moltzau

AI Policy and Ethics. Student at the University of Copenhagen, MSc in Social Data Science. All views are my own.