ALGORITHMIC DISCRIMINATION

THE PROBLEM

Using a crime prediction algorithm, a police department generates a list of the 1,000 people with the highest probability of committing a violent crime in a specific community. The department claims the algorithm does not directly use race, gender or neighborhood details, but it is a machine-learning algorithm trained on data produced by people – and it therefore reflects those people’s biases. As a result, most of the individuals on the list live in one particular zone of the city and share the same race and background, even though an objective analysis of the profiles shows that some of them have no reason to be there.

This is a classic example of algorithmic discrimination. With the rise of big data, complex algorithms have been created to enable faster, more efficient and better decision-making. But because these algorithms learn from data produced by humans, they also absorb human biases, often related to race, gender, age or nationality.
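To make the mechanism concrete, the following sketch (Python with scikit-learn, entirely synthetic data, a hypothetical scenario) shows how a “race-blind” model can still discriminate: the protected attribute is never a model input, but a correlated neighborhood code acts as a proxy, and biased historical labels teach the model to score the two groups very differently.

    # Minimal, purely illustrative sketch of proxy discrimination.
    # All data is synthetic; the scenario is hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)                # protected attribute, NOT a model input
    neighborhood = np.where(rng.random(n) < 0.9,  # proxy feature, 90% aligned with group
                            group, 1 - group)
    # Biased historical labels: residents of neighborhood 1 were flagged far more often
    label = np.where(neighborhood == 1,
                     rng.random(n) < 0.6,
                     rng.random(n) < 0.2).astype(int)

    X = neighborhood.reshape(-1, 1).astype(float)  # the "race-blind" feature set
    scores = LogisticRegression().fit(X, label).predict_proba(X)[:, 1]

    for g in (0, 1):
        print(f"group {g}: mean predicted risk = {scores[group == g].mean():.2f}")
    # The score gap between the groups mirrors the bias in the training labels,
    # even though group membership was never an explicit input.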

Algorithmic discrimination is not merely a technological problem – it is the result of broader societal issues with unfortunate technological ramifications. It clearly shows how the lack of impact assessments in technological development affects vulnerable parts of society. It is a complex problem affecting all of us in our daily lives, whether we have noticed it or not.

KEY CONCEPTS

Discrimination

Individuals or groups can fall victim to algorithmic discrimination based on characteristics such as race, age, gender, sexual orientation or religion. An attribute as simple as a person’s name can be enough to make an algorithm produce unfair decisions that deny individuals equal opportunities, from getting a loan to being selected for a job.

Opacity

Algorithmic discrimination is a direct consequence of a lack of transparency. Conventional algorithms are often subjected to audits, both in technical terms and to guarantee their objectivity. The same, however, does not usually happen with machine-learning systems and their training processes – and it is precisely there that discrimination originates. Instead of taking advantage of the technological possibilities that could guarantee transparency and objectivity in decision-making, algorithms are reproducing human bias. Public authorities must be held accountable for algorithmically driven decision-making processes and must guarantee that citizens can obtain information on how these processes work.
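As one illustration of what such transparency could look like, the sketch below (hypothetical feature names, synthetic data) inspects the weights of a simple linear model. A dominant weight on a proxy feature such as a postcode index is exactly the kind of signal that an audit of the training process should surface.

    # Illustrative transparency check (hypothetical model and feature names):
    # inspecting which inputs drive a linear model's decisions is one basic
    # building block of the audits described above.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income", "postcode_risk_index", "years_at_address"]  # assumed features
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))
    y = (X[:, 1] > 0.3).astype(int)  # decisions driven almost entirely by the postcode proxy

    model = LogisticRegression().fit(X, y)
    for name, coef in zip(feature_names, model.coef_[0]):
        print(f"{name:>22}: weight {coef:+.2f}")
    # A dominant weight on a geographic proxy is a red flag that the model
    # may be reproducing neighborhood (and hence racial) bias.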

Redress

Public authorities must provide mechanisms that will allow for the algorithms used in decision-making to be challenged and questioned, as well as redress for individuals or groups affected by algorithmic discrimination. These mechanisms could be constructed in many different ways – what’s important is that the institutions using machine-learning algorithms are held accountable, and that inspection processes are put in place to detect irregularities.

ADVOCACY ACTIONS

How to deal with algorithmic discrimination?

Societal impact analysis

Training and awareness-raising

Algorithmic audits, validation and testing, including the machine-learning process (a minimal audit sketch follows this list)

Algorithmic transparency

Debate and discussion on personal data processing

Redress mechanisms

Accountability
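As an illustration of what one such audit test could look like in practice, the Python sketch below implements a simple selection-rate comparison based on the “four-fifths rule” used in US employment-discrimination analysis. The data, function name and 0.8 threshold here are illustrative assumptions, not a prescribed standard for any particular jurisdiction.

    # A sketch of one possible audit test: compare selection rates between
    # groups and flag a disparity when the ratio falls below 0.8 (the
    # "four-fifths rule"). Data and names are illustrative.
    import numpy as np

    def disparate_impact_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
        """Ratio of the lowest group selection rate to the highest."""
        rates = [decisions[group == g].mean() for g in np.unique(group)]
        return min(rates) / max(rates)

    decisions = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])  # 1 = selected / flagged
    group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # group membership

    ratio = disparate_impact_ratio(decisions, group)
    print(f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Potential adverse impact: investigate before deployment.")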

Interested in the topic?