Methodology

The field of algorithmic systems, in which public and private organisations of all kinds develop, test and implement algorithms for a wide range of purposes, has been developing very quickly in recent years. Partly for that reason, and partly because some technical expertise may be needed to understand how algorithmic systems work, regulation of the use of algorithms lags behind their actual implementation, and the public debate about algorithms is insufficiently informed or sometimes simply missing.

The OASI Register compiles information about algorithms with the aim of increasing public awareness, providing the knowledge necessary for an informed public conversation, and making it possible for experts and the public to search and analyse the use and social impact of algorithmic systems across the world.

To find information about algorithms, we track reports published in the specialised and mainstream press as well as in academia, and we proactively reach out to people and organisations working in the field to gather data about algorithmic systems and how they function. We aim to register algorithms that have been shown to have some kind of negative social impact. Because we cannot track every single algorithmic system being developed, we try to add algorithms that are representative of the different domains, aims and social impacts across the field, and we make an effort to include algorithms used in different world regions. Each entry in the OASI Register lists the sources of information we consulted about that algorithm. Where the information is not available, we have written “N/A”.

Categories

Because the field of algorithmic systems is constantly and quickly developing, the OASI Register is necessarily a work in progress that will be regularly updated.

At this point, any effort to compile and classify information about algorithms has to rely on a conventional set of categories. In the OASI Register, we have tried to make the categories as comprehensive as possible while keeping both the list of categories and the register as a whole manageable. These are the categories and definitions we currently use:

Algorithm

The name of the algorithm if it has one and we know it, or a short description of what the algorithm does.

Developed by

The name/s of the organisations, companies or other institutions that have developed the algorithm.

Adoption stage

Whether the algorithm is in development, in use or no longer in use.

Implemented by

The name/s of the organisations, companies, public bodies or other institutions that have implemented or are implementing the algorithm.

Location

The jurisdiction/s where the algorithmic system is being or has been implemented. By “jurisdiction” we mean a city, region, state, transnational body or any other territory over which a legal or normative authority extends.

Implemented since

The date when the algorithm started being implemented, where applicable.

Implemented until

The date when the algorithm stopped being used, where applicable.

Domain

The area of society or the economy, or a sphere of activity, where the algorithm is being or has been implemented. We have adapted the list of domains used by the European Commission in its proposed regulatory framework on AI:

  • Infrastructure

  • Policing and security

  • Social services

  • Justice and democratic processes

  • Education and training

  • Labour and employment

  • Communication and media

  • Business and commerce

  • Product safety

Aim

The purpose, intention or desired outcome of the algorithmic system:

  • Compiling personal data: gathering data about individuals and/or groups in a systematic or otherwise predetermined way, for publicly known or unknown purposes and based on publicly known or unknown criteria.

  • Evaluating human behaviour: generating assessments of the way in which individuals and/or groups behave based on publicly known or unknown criteria applied to publicly known or unknown data.

  • Recognising facial features: identifying particular facial features in images of people, like the shape of the eyes while a person is smiling, based on publicly known or unknown criteria applied to publicly known or unknown data.

  • Identifying images of faces: matching images of individual people’s faces to face images preregistered in a database, based on publicly known or unknown criteria applied to publicly known or unknown data.

  • Predicting human behaviour: generating possible future scenarios of how individuals and/or groups may behave, based on publicly known or unknown criteria applied to publicly known or unknown data.

  • Profiling and ranking people: generating profiles of individuals and/or groups and classifying and sorting them based on publicly known or unknown criteria applied to publicly known or unknown data.

  • Simulating human speech: generating speech that closely resembles the way people speak for publicly known or unknown purposes.

  • Recognising images: identifying the content of digital images, for example whether it’s a picture of a cat or of a dog, based on publicly known or unknown criteria applied to publicly known or unknown data.

  • Generating automated translations: automatically translating written text or speech from one language into one or more others.

  • Generating online search results: producing a sorted list of websites or other online resources in response to a search query, usually as written or spoken search terms.

  • Recognising sounds: identifying the content of speech or other sounds, for example whether it’s a person speaking or a particular animal or object, based on publicly known or unknown criteria applied to publicly known or unknown data.

  • Automating tasks: carrying out, in an automated way, a set of tasks that would take a person much longer to complete.

Social impact

The particular ways, fields, issues or areas of social or private life affected by the implementation of the algorithm:

  • Gender discrimination: the algorithm may result in biased outcomes that unjustly and unfairly discriminate against people based on their gender identity, sexual orientation or related characteristics.

  • Racial discrimination: the algorithm may result in biased outcomes that unjustly and unfairly discriminate against people based on their race, origin, skin colour or related characteristics.

  • Religious discrimination: the algorithm may result in biased outcomes that unjustly and unfairly discriminate against people based on their faith or religious beliefs, or on related issues.

  • Socioeconomic discrimination: the algorithm may result in biased outcomes that unjustly and unfairly discriminate against people based on their income, educational level or other socioeconomic indicators.

  • Other kinds of discrimination: the algorithm may result in biased outcomes that unjustly and unfairly discriminate against people based on other characteristics.

  • Social polarisation / radicalisation: the algorithm may result in the production and/or distribution of content that contributes to pushing individuals and/or groups towards extreme attitudes or behaviour.

  • State surveillance: the algorithm may contribute to the surveillance of individuals or groups by state bodies when that surveillance hasn’t been properly sanctioned or audited, or isn’t transparent and respectful of people’s rights.

  • Threat to privacy: the algorithm may invade or violate people’s private space or sphere, for example by collecting intimate or otherwise personal data.

  • Generating addiction: the algorithm may contribute to making people addicted to, or reliant on, particular products or activities in an unhealthy or otherwise harmful way.

  • Manipulation / behavioural change: the algorithm may contribute to modifying people’s thinking, beliefs or behaviour without their awareness, or in an unhealthy or otherwise harmful way.

  • Disseminating misinformation: the algorithm may result in the production and/or distribution of content that is purposely untrue, wrong or partial, or that otherwise contributes to making people think or believe something that is not true.

Has it been audited? 

Whether the algorithmic system has been reviewed by an organisation that is financially, politically and otherwise independent from the organisations, companies or institutions that developed the algorithm.

Jurisprudence

Whether there are court cases that have discussed and passed judgement (or are expected to do so) about how the algorithm was developed, how it works, and/or what kind of impact it has on social and private life.

Links and sources

Links to and references for the available primary and secondary sources of information regarding the algorithm.

Additional notes

Any other relevant information about the algorithm that does not fit in the previous categories.

Entry last updated

The date on which each individual entry in the OASI Register was last updated.
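For readers who want to work with the register programmatically, the categories above map naturally onto a simple record structure. The following Python sketch is illustrative only, not an official OASI data format: the names (RegisterEntry, AdoptionStage, entries_with_impact and so on) are our own shorthand for the categories and definitions listed above, with None standing in for the register’s “N/A”.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class AdoptionStage(Enum):
    """Adoption stage: in development, in use or no longer in use."""
    IN_DEVELOPMENT = "in development"
    IN_USE = "in use"
    NO_LONGER_IN_USE = "no longer in use"


# Controlled vocabulary taken from the Domain category above.
DOMAINS = {
    "Infrastructure",
    "Policing and security",
    "Social services",
    "Justice and democratic processes",
    "Education and training",
    "Labour and employment",
    "Communication and media",
    "Business and commerce",
    "Product safety",
}


@dataclass
class RegisterEntry:
    """One hypothetical OASI Register entry; None plays the role of 'N/A'."""
    algorithm: str                        # name, or short description of what it does
    developed_by: list[str]               # organisations that developed it
    implemented_by: list[str]             # organisations that implement(ed) it
    adoption_stage: Optional[AdoptionStage] = None
    locations: list[str] = field(default_factory=list)       # jurisdictions
    implemented_since: Optional[date] = None
    implemented_until: Optional[date] = None
    domains: list[str] = field(default_factory=list)         # drawn from DOMAINS
    aims: list[str] = field(default_factory=list)             # e.g. "Profiling and ranking people"
    social_impacts: list[str] = field(default_factory=list)   # e.g. "Threat to privacy"
    audited: Optional[bool] = None        # reviewed by an independent organisation?
    jurisprudence: Optional[bool] = None  # court cases about the algorithm?
    links_and_sources: list[str] = field(default_factory=list)
    additional_notes: Optional[str] = None
    entry_last_updated: Optional[date] = None


def entries_with_impact(entries: list[RegisterEntry], impact: str) -> list[RegisterEntry]:
    """Toy query: all entries tagged with a given social impact."""
    return [e for e in entries if impact in e.social_impacts]
```

With entries loaded into a structure like this, the kind of search and analysis the register makes possible reduces to simple filters, for example entries_with_impact(entries, "State surveillance").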

As the field of algorithms develops and as we gather more information about existing and new algorithmic systems, we may modify the OASI Register categories or add new ones. We will explain on this website any modifications we make to the categories and definitions in the OASI Register.

Given how fast the field is developing and how many actors are involved worldwide, and although we make every effort to gather and verify all pertinent information about a particular algorithm before adding it to the OASI Register, we cannot claim 100% accuracy or to have included every relevant detail about every algorithm.

We envision the OASI Register as a collaborative effort: if you spot any mistakes or missing data, or if you would like to contribute content or learn more about OASI, please get in touch with us (eticas@eticasfoundation.org). You can also submit information about an algorithm yourself through our online form.