One often-hidden outcome of machine learning algorithms is so-called algorithmic bias or discrimination, and the harmful effects such bias produces in society are what we refer to as its social impact. Many sources of bias can statistically affect algorithm outcomes throughout their life-cycle, but their social impact will be shaped by the social meaning of the discrimination that occurs.
For these reasons, it is important to bear in mind that, despite its seemingly neutral mathematical nature, an algorithm developed for a specific service or product, even one designed with all reasonable and prudent steps so that it correctly fulfils its "function", may produce and reproduce biases that end up discriminating against traditionally excluded social groups, such as minority ethnic or religious groups or people on the edge of poverty.
The Social Impact of Algorithms
Having analyzed more than a hundred algorithms of different kinds, and aiming to tackle algorithmic bias rigorously and systematically, the Eticas Foundation team has defined the following discrimination taxonomy.
Interested in our work?
You can collaborate with the project by sharing with us algorithms that are being implemented around you, or by using the information in this directory to foster change in your community.