Using a crime prediction algorithm, a police department generates a list of the 1,000 people deemed most likely to commit a violent crime in a specific community. The department claims the algorithm does not directly use race, gender, or neighborhood details, but it is a machine-learning algorithm trained on data produced by people – and therefore it reflects those people's biases. As a result, most of the individuals listed live in one zone of the city and share the same race and background, even though an objective review of their profiles shows that some of them have no reason to be on the list.
This is a classic example of algorithmic discrimination. With the rise of big data, complex algorithms have been built to enable faster, more efficient, and better-informed decision-making. But because these algorithms learn adaptively from human-generated data, they also absorb human biases, often related to race, gender, age or nationality.
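The mechanism described above – a model that never sees race yet still discriminates by race – can be illustrated with a toy simulation. The sketch below uses entirely hypothetical, synthetic data: residence zone is made to correlate with race (as under residential segregation), and the historical arrest records are biased because one zone is policed more heavily. A simple risk model trained only on zone, with race excluded, still ends up flagging mostly one racial group.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical toy population (illustration only): residence zone is
# strongly correlated with race, mimicking residential segregation.
people = []
for _ in range(1000):
    race = random.choice(["A", "B"])
    zone = 1 if random.random() < (0.9 if race == "A" else 0.1) else 2
    # Biased historical records: zone 1 is policed more heavily, so its
    # residents are recorded as offenders more often regardless of behavior.
    arrested = random.random() < (0.30 if zone == 1 else 0.05)
    people.append({"race": race, "zone": zone, "arrested": arrested})

# "Model": estimate P(arrest | zone) from the biased records.
# Note that race is never an input.
counts = Counter((p["zone"], p["arrested"]) for p in people)

def risk(zone):
    pos, neg = counts[(zone, True)], counts[(zone, False)]
    return pos / (pos + neg)

# Flag everyone in the higher-risk zone, then check who ends up listed.
top = max(risk(1), risk(2))
flagged = [p for p in people if risk(p["zone"]) == top]
share_a = sum(p["race"] == "A" for p in flagged) / len(flagged)
print(f"risk zone 1: {risk(1):.2f}, zone 2: {risk(2):.2f}")
print(f"share of race A among flagged: {share_a:.2f}")  # typically close to 0.9
```

Zone acts as a proxy for race, so excluding the protected attribute does not prevent the discriminatory outcome – which is why audits of such systems look at outcomes across groups, not just at the input features.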
Algorithmic discrimination is not merely a technological problem – it is the result of broader societal issues with unfortunate technological ramifications. It clearly shows how the lack of impact assessments in technological development harms vulnerable parts of society. It is a complex problem affecting all of us in our daily lives, whether we notice it or not.