Uber’s automated recruiting tool discriminates against women and people of color

Women, among other marginalized groups, often have difficulty finding work because of inherent algorithmic biases (Bharoocha 2019). A brief overview of Uber's technical workforce illustrates this point. In 2017, the company's technological leadership was composed entirely of White and Asian persons, and 88.7 percent of employees were male. Uber uses an algorithm that it claims selects top talent and speeds up the hiring process. The system evaluates the resumes of previous successful hires at Uber, other hiring data, and select keywords that align with the job description (ibid). Given that the majority of Uber's previous successful hires were White and Asian men, it is unsurprising that the algorithm continues to discriminate against women and persons of color: the algorithm was trained on biased data, and so it reproduces that bias. Textio, a firm that helps companies adopt gender-neutral language in job descriptions, has shown that many keywords used in job descriptions tend to speak to a male audience, implicitly excluding nonmale audiences. For example, the words "enforcement" and "fearless" attract more male applicants than female applicants (ibid). Thus, an algorithm presented as fair and objective can reinforce the same structural biases and discriminatory social practices that marked low-tech hiring processes.
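The feedback loop described above is worth making concrete. The following sketch is a toy illustration, not Uber's actual system: it uses entirely synthetic data and scikit-learn's LogisticRegression (both assumptions of this example) to show how a screener trained on historically biased hire/no-hire labels learns to reward a resume-language proxy for group membership, so that two equally skilled candidates receive different scores.

```python
# Toy illustration of bias reproduction in a resume screener.
# All data is synthetic and hypothetical; this is NOT Uber's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic candidates: a genuine skill score, identically distributed
# across two groups, plus a resume-language proxy that correlates with
# group (e.g., keywords like "fearless" appearing more often for group A).
skill = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B
proxy = (group == 0).astype(float) + rng.normal(0.0, 0.3, n)

# Historical labels: past recruiters hired on skill but also favored
# group A, so the "successful hire" label encodes that discrimination.
hired = (skill + 1.0 * (group == 0) + rng.normal(0.0, 0.5, n)) > 1.0

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The learned weight on the language proxy is positive: resumes that
# "sound like" past (mostly group A) hires score higher even when
# skill is identical.
print("coefficients (skill, proxy):", model.coef_[0])

same_skill_A = np.array([[0.0, 1.0]])  # group-A-style resume language
same_skill_B = np.array([[0.0, 0.0]])  # group-B-style resume language
print("P(hire | A-style resume):", model.predict_proba(same_skill_A)[0, 1])
print("P(hire | B-style resume):", model.predict_proba(same_skill_B)[0, 1])
```

Notice that the model never sees group membership directly; the bias enters through the historical labels and the correlated keyword feature, which is precisely why keyword audits of the kind Textio performs matter.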