One of the first cases of algorithmic bias took place in the 1970s at St. George’s Hospital Medical School in the United Kingdom. Hoping to make the application process more efficient and less burdensome for the administration, St. George’s deployed a computer program to perform initial screenings of applicants. The program was trained on a sample data set of past screenings, analyzing which applicants had historically been accepted to the medical school. Having learned from this data set, the program went on to deny interviews to as many as 60 applicants because they were female or had names that did not sound European (Garcia 2017). In 1988, the United Kingdom’s Commission for Racial Equality charged St. George’s Medical School with practicing racial and sexual discrimination throughout its admissions process (ibid.). While St. George’s had no intention of committing racial and sexual discrimination, its new computer program had learned from a structurally biased admissions process and sought to duplicate it (ibid.). Despite this case occurring in the 1970s, today’s algorithms still exhibit the same grave problem as the computer program at St. George’s: they learn to reproduce structural inequalities from historical data sets.