Lawmaker or Lawbreaker? How FaceNet Got It Wrong

Would you trust a system whose reputation is tarnished by its biased mistakes? Until it's fixed, neither will we!

11%

of MEPs were falsely matched with a mugshot

58%

of the false matches were women

35%

gender misidentification rate for darker-skinned females

FRT has been, and still is, one of the most debated topics in the European Parliament. Despite the EU AI Act being adopted on March 13th, 2024, these systems remain highly inaccurate, and a single error can impact a person’s life forever. Under the current law, FRT will be banned only in specific sectors. After the European Parliament elections of June 2024, new MEPs will take office, bringing with them new opportunities and challenges. We hope this becomes a moment to advance technology aligned with the protection of people, and that the newly elected MEPs understand that AI laws are yet to be perfected.

Meet the MEPs! Wait... those are not them.

We tested the reliability of FaceNet, a facial recognition algorithm developed by Florian Schroff, Dmitry Kalenichenko, and James Philbin (Schroff et al., 2015). In our test, we compared MEPs’ public photos with publicly available photos of wanted persons published by Interpol. The results mistakenly matched MEPs from various political parties, both men and women legislators from different European countries. The inaccuracies exposed an important challenge: gender disparity.

In this case, women were more frequently misidentified than men. We used a similarity threshold of 0.8 to decide whether two images matched, and found that 12 of the MEPs (11%) had matches with a mugshot. Moreover, women had more matches than men: 7 women (58%) versus 5 men (42%). This finding is striking given the actual gender distribution of the MEP images in our sample, 48 women (44%) and 60 men (56%): the ratio among the false matches is inverted. This is not the first study casting doubt on facial recognition technology's risks or highlighting its gender inaccuracies. For example, MIT’s Joy Buolamwini found that, in a sample of over 200 pictures, gender misidentification rose to 7% for lighter-skinned females and 35% for darker-skinned females (Lohr, 2018).
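For readers curious how such a test works in practice, here is a minimal sketch of a single comparison, assuming the open-source facenet-pytorch implementation of FaceNet; the file names are hypothetical placeholders, and the 0.8 threshold mirrors the parameter we used.

```python
# Minimal sketch of one FaceNet comparison (facenet-pytorch assumed).
from PIL import Image
import torch
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)                             # face detector/cropper
resnet = InceptionResnetV1(pretrained='vggface2').eval()  # FaceNet embedder

def embed(path):
    """Detect the face in an image and return its 512-d FaceNet embedding."""
    face = mtcnn(Image.open(path).convert('RGB'))
    if face is None:
        return None  # no face detected
    with torch.no_grad():
        return resnet(face.unsqueeze(0))[0]

THRESHOLD = 0.8  # distance below this counts as a "match", as in our test

mep = embed('mep_photo.jpg')              # placeholder file names
wanted = embed('interpol_mugshot.jpg')
if mep is not None and wanted is not None:
    distance = (mep - wanted).norm().item()  # Euclidean distance between embeddings
    print(f'distance = {distance:.3f}:',
          'MATCH' if distance < THRESHOLD else 'no match')
```

Everything hinges on that single threshold: set it looser and false matches like the twelve we describe below slip through; set it stricter and genuine matches are missed.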

01

Mistaken matches have serious real-world implications, and FaceNet is part of the problem. The unreliability of FRT results often reproduces and reinforces existing societal biases, particularly gender and racial biases. In cases like this, defective AI is not just bad technology; it is a genuine danger. Matching individuals with arrest photos is not merely a theoretical exercise. Some FRT vendors, like Amazon with its Rekognition system, actively market face surveillance technology to law enforcement, claiming the ability to identify up to 100 faces in a single image, track individuals in real time through surveillance cameras, and analyze footage from body cameras.

Tests done with both the FaceNet and Amazon Rekognition algorithms expose the vulnerabilities of FRT and its potential impact on the law enforcement sector. In an open letter, AI researchers argue that studies consistently reveal flaws in FR algorithms, particularly higher error rates for darker-skinned and female faces. Concerns include the potential for racial discrimination, cases of mistaken identity, and intrusive surveillance if police adopt such technology. The letter emphasizes how flawed facial analysis technologies reinforce human biases; it follows increasing protests and calls for regulation within the tech industry.

02

The European Union (EU) has implemented regulations relevant to FRT through the Charter of Fundamental Rights, the General Data Protection Regulation, the Law Enforcement Directive, and the EU framework on non-discrimination. These regulations also extend to processes and activities involving FRT. However, there is debate about whether the current EU framework adequately addresses the fundamental rights concerns associated with FRT. Despite attempts by courts to fill gaps in protection through extensive interpretation of existing legal frameworks, uncertainties and complexities linger.

On March 13th, 2024, the European Parliament formally adopted the EU Artificial Intelligence Act (“AI Act”) with a large majority of 523 votes in favor. This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact. Recognizing the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit:

- Biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race).

- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

- Emotional recognition in the workplace and educational institutions.

- Social scoring based on social behavior or personal characteristics.

03

Our findings highlight the urgent need to perfect regulations on FRT. While the EU AI Act takes some steps to protect citizens, it falls short if the technology is faulty and reproduces harmful biases. Looking to the future, we hope that the new MEPs use their role as policymakers to understand that FRT is not a distant risk, but one that can affect anyone, even them. Systems like FaceNet need to be scrutinized to protect individuals.

Facial recognition systems remain strikingly inaccurate, and mistaken identities occur disproportionately when the person is not white. These chilling results underscore a significant lack of trust in how governments might handle sensitive data. The high potential for misuse could result in arbitrary endangerment, data exposure, and invasive surveillance of citizens, reproducing bias and unfair treatment of individuals. The message is clear: the possible negative consequences of FRT for privacy and civil liberties are significant.

The European Parliament must address these concerns promptly, pause any implementation, and establish a moratorium on law enforcement use of facial recognition. The technology should not be employed until all potential harms are thoroughly considered and the necessary precautions are taken to prevent adverse impacts on vulnerable communities. While facial recognition is not banned outright, it is listed as one of the high-risk AI use cases in Annex III and is therefore subject to the high-risk requirements. FR technology needs to be audited to avoid errors like the ones we found with these images.
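As an illustration of what such an audit can look like at its simplest, the sketch below computes group-wise false-match rates from the counts our own test produced; a real audit pipeline would cover more attributes and far larger samples.

```python
# A minimal sketch of a disparity audit: compare false-match rates across
# gender groups. The counts are the ones from our FaceNet test above
# (7 of 48 women and 5 of 60 men falsely matched to a mugshot).
false_matches = {'women': 7, 'men': 5}
group_sizes = {'women': 48, 'men': 60}

for group, hits in false_matches.items():
    rate = hits / group_sizes[group]
    print(f'{group}: false-match rate {rate:.1%}')

# Output: women 14.6% vs men 8.3% -- a gap an audit should flag
# before any deployment in a law enforcement context.
```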

04

The 12 MEPs our test falsely matched with a mugshot:

Marco Zanni - Italy (Identity and Democracy Group)

Jadwiga Wiśniewska - Poland (European Conservatives and Reformists Group)

Idoia Villanueva Ruiz - Spain (GUE/NGL)

Adrián Vázquez Lázara - Spain (Renew Europe)

Marion Walsmann - Germany (EPP - Christian Democrats)

Karolin Braunsberger-Reinhold - Germany (EPP - Christian Democrats)

Salima Yenbou - France (Renew Europe)

Leïla Chaibi - France (GUE/NGL)

Manon Aubry - France (GUE/NGL)

István Ujhelyi - Hungary (S&D)

Tamás Deutsch - Hungary (Non-attached Members)

Eugen Tomac - Romania (EPP - Christian Democrats)

Let’s work together to build a present where AI is

Fair, Auditable and Safe for All.
