Social Impact

An often-overlooked outcome of machine learning algorithms is so-called algorithmic bias or discrimination, and the harmful effects such bias produces in society are what we refer to as social impact. Many sources of bias can statistically affect an algorithm's outcomes throughout its life-cycle, but their social impact will be shaped by the social meaning of the discrimination that occurs.

For these reasons it is important to bear in mind that, despite its seemingly neutral mathematical nature, an algorithm developed for a concrete service or product, even one designed with all reasonable and prudent care so that it correctly achieves its “function”, may produce and reproduce biases that end up discriminating against traditionally excluded social groups, such as ethnic or religious minorities or people living on the edge of poverty.

Algorithmic Discrimination

“A biased algorithm is an algorithm that systematically and unfairly discriminates against certain individuals or groups of individuals in favour of others. A system discriminates unfairly if it denies an opportunity or a good or if it assigns an undesirable outcome to an individual or group of individuals on grounds that are unreasonable or inappropriate.”

Friedman, B., & Nissenbaum, H. (1996)
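This definition of systematic, unfair discrimination can be made concrete with a simple measurement. The sketch below is a minimal illustration, not part of the original text: it computes the disparate impact ratio between two hypothetical groups of loan applicants (the group labels, decisions, and the four-fifths rule of thumb used as a threshold are all assumptions for the example).

```python
# Illustrative check for one simple notion of algorithmic bias:
# disparate impact, the ratio of favourable-outcome rates between groups.

def selection_rate(outcomes):
    """Share of individuals who received the favourable outcome (1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical loan decisions (1 = approved, 0 = denied) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # 40% approved

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 rule of thumb
```

A ratio well below 1.0 signals that one group systematically receives the favourable outcome less often, which is exactly the pattern the definition above describes.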

The Social Impact of Algorithms

Having processed more than a hundred algorithms of different kinds and aiming to tackle algorithmic bias rigorously and systematically, the Eticas Foundation team has defined the following discrimination taxonomy.

Racial discrimination

Racial discrimination refers to discrimination against individuals on the basis of their race, colour, descent, national or ethnic origin, or immigrant status. An example of machine learning bias would be some recidivism-prediction systems, which have been shown to be inaccurate and racially biased against certain populations.
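One common way such bias in recidivism predictors has been documented is by comparing false positive rates across groups: the share of people who did not reoffend but were nonetheless flagged as high risk. The sketch below uses entirely hypothetical predictions and outcomes as an assumption for illustration; it is not real data from any deployed system.

```python
# Sketch: comparing false positive rates of a risk predictor across
# two hypothetical groups; unequal rates are one documented form of bias.

def false_positive_rate(predictions, actuals):
    """Share of non-reoffenders (actual 0) wrongly flagged high-risk (prediction 1)."""
    false_positives = sum(1 for p, a in zip(predictions, actuals) if p == 1 and a == 0)
    non_reoffenders = sum(1 for a in actuals if a == 0)
    return false_positives / non_reoffenders

# Hypothetical data: 1 = flagged high-risk / did reoffend, 0 = not.
preds_group1  = [1, 1, 0, 1, 0, 1]
actual_group1 = [0, 1, 0, 0, 0, 1]  # 4 non-reoffenders, 2 wrongly flagged
preds_group2  = [0, 1, 0, 0, 0, 1]
actual_group2 = [0, 1, 0, 0, 0, 0]  # 5 non-reoffenders, 1 wrongly flagged

fpr1 = false_positive_rate(preds_group1, actual_group1)
fpr2 = false_positive_rate(preds_group2, actual_group2)
print(f"group 1 FPR: {fpr1:.2f}  group 2 FPR: {fpr2:.2f}")  # 0.50 vs 0.20
```

When one group's false positive rate is markedly higher, members of that group bear the cost of the system's errors disproportionately, even if overall accuracy looks acceptable.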

Gender discrimination

Discrimination based on gender is a common civil rights violation based on the sex of an individual. In machine learning, gender discrimination can take the form of job-search algorithms offering lower-paid jobs or worse job opportunities to women than to men, based solely on gender.

Socioeconomic discrimination

Socioeconomic discrimination, also known as classism, is prejudice against individuals based on their social class. Insurance companies using machine learning algorithms to mine data such as shopping history in order to identify high-risk customer patterns and charge those customers more would be a suitable example.

Religious discrimination

Religious discrimination is treating a person or group differently because of the beliefs they hold. As an example, researchers have shown that some machine learning algorithms used words associated with the Muslim community to search social media for misconduct and potentially risky behaviour.

Inequality reproduction

Social reproduction is the transmission of social inequality from one generation to the next through present structures, beliefs and actions. The continued use of biased algorithms that perpetuate inequalities can cause negative spirals of exclusion.

Impact on democratic processes

The right of access to trustworthy information and the right to active political participation are democratic practices challenged by machine learning algorithms that strengthen polarization in our societies, preventing us from reading posts that express different or opposing ideas.

Impact on privacy

Rapid, automated decisions made without the benefit of human judgment in many cases affect individuals' right to privacy. This is compounded by the large amounts of data processed by algorithmic systems, which make it possible to infer highly sensitive and private information about individuals on the basis of social media data.

Interested in our work?

You can collaborate with the project by sharing with us algorithms that are being implemented around you or by using the information in this directory to foster changes in your community.