Algorithms, and the data they process, play an increasingly important role in predictions, decision-making and recommendations. Many of us already receive innocuous book recommendations, a more efficient route to a destination, or even a winning strategy for a game of Go. But what about algorithmically enhanced decisions that determine one's college admittance, loan approval, or job prospects?
The public largely believes that machines are neutral arbiters: entities that always make the right decision, or see patterns that our human minds can't or won't. But are they really neutral? Or are algorithms a way to amplify and extend the biases and discrimination that are prevalent in society? Major advances in machine learning have encouraged corporations to rely on Big Data and algorithmic decision-making, with the presumption that such decisions are efficient and impartial. But this trend has also given rise to calls for greater accountability in algorithm design and implementation, and to concern over the emergence of algorithmic discrimination.
The truth is that algorithms are social constructs as much as mathematical calculations. As with any other technology, they both capture and reproduce social dynamics, but these social dynamics often become entangled in obscure technical debates. An algorithmic audit, combined with a multidisciplinary approach, offers a new and rigorous way to frame this issue and to understand what algorithms are and the roles they play in our society.
With the rise of big data, complex algorithms have been created for decision-making, with the purpose of making decisions that are efficient, rapid and better informed. Because these algorithms are adaptive, built to learn from humans, they also learn human biases, commonly related to race, gender, age or nationality, for instance. Aiming to shed some light on this new challenge, we have promoted this worldwide observatory, which browses more than 90 cases of algorithmic discrimination with a real impact on different social sectors.
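The mechanism by which an adaptive system inherits human bias can be illustrated with a deliberately tiny, hypothetical example: a "model" that simply learns the majority past decision for each demographic group will reproduce any disparity present in the historical data. The dataset and the majority-vote rule below are illustrative assumptions, not any real system's method.

```python
from collections import defaultdict

# Hypothetical historical records: (group, hired_in_the_past).
# Past decisions favored group "A" over group "B".
historical_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    """'Learn' the majority past decision for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, not hired]
    for group, hired in records:
        counts[group][0 if hired else 1] += 1
    return {g: hired > not_hired for g, (hired, not_hired) in counts.items()}

model = train(historical_data)
print(model)  # {'A': True, 'B': False} -- the historical disparity is learned verbatim
```

Nothing in the rule mentions group membership as a criterion; the disparity enters purely through the training data, which is why auditing the data itself matters as much as auditing the algorithm.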
Thanks to artificial intelligence, algorithms can be trained by and learn from data. But what an algorithm does depends heavily on how good that data is. We have focused on the moments when data is collected, pre-processed and stored, in order to capture data problems before the algorithm kicks in. In this sense, we have seen how bad data plays an important part in all kinds of decision-making processes and outcomes, and has a significant impact on our most fundamental rights.