Discriminating Systems – Gender, Race, and Power in AI


Sarah Myers West, AI Now Institute, New York University

Meredith Whittaker, AI Now Institute, New York University, Google Open Research

Kate Crawford, AI Now Institute, New York University, Microsoft Research

April 2019


Research Findings

There is a diversity crisis in the AI sector across gender and race. Recent studies found that only 18% of authors at leading AI conferences are women [1], and more than 80% of AI professors are men [2]. The disparity is extreme in the AI industry [3]: women comprise only 15% of AI research staff at Facebook and 10% at Google. There is no public data on trans workers or other gender minorities. For black workers, the picture is even worse: only 2.5% of Google’s workforce is black, while Facebook and Microsoft are each at 4%. Given decades of concern and investment to redress this imbalance, the current state of the field is alarming.

The AI sector needs a profound shift in how it addresses the current diversity crisis. The industry must acknowledge the gravity of the problem, and admit that existing methods have failed to contend with the uneven distribution of power and with the means by which AI can reinforce such inequality. Further, many researchers have shown that bias in AI systems reflects historical patterns of discrimination. The diversity crisis and biased systems are two manifestations of the same problem, and they must be addressed together.

The overwhelming focus on ‘women in tech’ is too narrow and likely to privilege white women over others. We need to acknowledge how the intersections of race, gender, and other identities and attributes shape people’s experiences with AI. The vast majority of AI studies assume gender is binary, and commonly assign people as ‘male’ or ‘female’ based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity.

Fixing the ‘pipeline’ won’t fix AI’s diversity problems. Despite many decades of ‘pipeline studies’ that assess the flow of diverse job candidates from school to industry, there has been no substantial progress in diversity in the AI industry. The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether.

The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation. The histories of ‘race science’ are a grim reminder that race and gender classification based on appearance is scientifically flawed and easily abused. Systems that use physical appearance as a proxy for character or interior states are deeply suspect, including AI tools that claim to detect sexuality from headshots [4], predict ‘criminality’ based on facial features [5], or assess worker competence via ‘micro-expressions’ [6]. Such systems replicate patterns of racial and gender bias in ways that can deepen and justify historical inequality. The commercial deployment of these tools is cause for deep concern.

  1. Element AI. (2019). Global AI Talent Report 2019. Retrieved from https://jfgagne.ai/talent-2019/
  2. AI Index. (2018). Artificial Intelligence Index 2018 Annual Report. Retrieved from http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf
  3. Simonite, T. (2018). AI is the future – but where are the women? WIRED. Retrieved from https://www.wired.com/story/artificial-intelligenceresearchers-gender-imbalance/
  4. Wang, Y., & Kosinski, M. (2017). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology.
  5. Wu, X., & Zhang, X. (2016). Automated Inference on Criminality using Face Images. Retrieved from https://arxiv.org/pdf/1611.04135v2.pdf
  6. Rhue, L. (2018). Racial Influence on Automated Perceptions of Emotions. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_

