Often, algorithmic systems are what’s known as “black boxes” because it’s not publicly known how they work: we don’t know what goes on inside the “box”. In the case of machine-learning algorithms, the systems may become black boxes even to the people who designed them, because the algorithms can rewrite their own rules. When such algorithms affect public life, the public should be able to know how those systems work and to hold them accountable: we should be able to audit those algorithms.
If you are using or intending to deploy a new algorithm, or if you are concerned about a third-party algorithm, Eticas Consulting can help you. At Eticas, we are experts in the development of Algorithmic Impact Assessments, which can minimise the negative social impacts of algorithmic systems and identify the main sources of bias, both internally (if you own the algorithm) and externally (if you are concerned about a third-party algorithm).
By auditing an algorithmic system, we can establish its social desirability, which depends on its capacity to meet the legal requirements of a specific context (data protection and privacy), and to anticipate and mitigate undesired social impacts while fulfilling the purpose it was designed for.
Auditing algorithms can also prevent bias, a recurring risk that spans the entire algorithmic life-cycle: that’s why it is important to be aware of that risk at every stage of the process, from design and data collection to deployment.
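To give a flavour of what one such check can look like, here is a minimal sketch of a common fairness measurement, the “disparate impact” ratio between two groups’ favourable-outcome rates. The function names, data, and the 0.8 (“four-fifths rule”) threshold are illustrative assumptions, not Eticas’s actual audit methodology.

```python
# Illustrative sketch of a single bias check an algorithmic audit
# might run: comparing favourable-outcome rates between two groups.
# All names, data, and the 0.8 threshold are hypothetical examples.

def selection_rate(outcomes):
    """Share of favourable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values near 1.0 suggest similar treatment of both groups; the
    common "four-fifths rule" flags ratios below 0.8 for review."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical decisions: 1 = favourable outcome, 0 = unfavourable.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 30% selected

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 3))  # 0.375 — well below the 0.8 threshold
```

A real audit would repeat measurements like this at every stage of the life-cycle (training data, model outputs, decisions in production), since bias introduced early can resurface later.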
Interested in our work?
You can collaborate with the project by sharing with us algorithms that are being implemented around you or by using the information in this directory to foster changes in your community.