The Social Impact of Algorithms

The implementation and use of algorithms by public entities and private organisations always have consequences for social life: they are often based on data about the public, and they shape the distribution of public resources and the delivery of public services. If the data are biased, or if the algorithms do not work as intended or are designed to discriminate among people in an unjust way, their use will have harmful effects. That is what we mean here by social impact: threats to people’s privacy; unjust discrimination based on gender, race, religion, socioeconomic status or other characteristics; reproduction of existing inequality; weakening of democratic practices; state surveillance.
Different sources of bias can statistically affect an algorithm’s outcomes throughout its life-cycle, but the social impact of the resulting discrimination or harm depends on the socially given meaning of that discrimination or harmful effect.
For these reasons, we need to keep in mind that, despite its seemingly neutral mathematical nature, an algorithm developed for a concrete service, even when all reasonable steps are taken in its design, may still produce and reproduce biases that unjustly discriminate against women, people of colour, minorities, the poor and other traditionally excluded groups.
Having processed more than a hundred algorithms of different kinds, and aiming to tackle algorithmic bias rigorously and systematically, the Eticas Foundation team has defined the following discrimination taxonomy.
Interested in our work?
You can collaborate with the project by sharing with us algorithms that are being implemented around you, or by using the information in this directory to foster change in your community.