Better AI For All

Eticas Foundation protects people and the environment in AI processes to build a world where tech is fair, auditable and safe to use. For all.

Let’s play!

Engineers!

For this week’s challenge, you must develop an AI system to…

Decide which women victims of domestic violence should receive protection.

Adversarial Audit of the VioGén System


Predict the risk of an inmate reoffending.

Automating (In)Justice? An adversarial audit of RisCanvi


Build a content recommender system that consistently portrays migrants as a threat.

Auditing Social Media

Portrayal of Migrants on YouTube

(In)Visibility of Political Content on Migration on TikTok


Determine the cost of insurance based on people’s faces and expressions.

Invisible No More: The Impact of Facial Recognition on People with Disabilities

Adversarial audit of Zurich’s Azul insurance system and other commercial FR models


Build a ride-hailing app that charges people more if they are poor.

Adversarial Audit of Ride-Hailing Platforms

Algorithmic compliance with competition, labor and consumer law in Spain


Initiatives

  • Adversarial AI Audits

    Eticas' pioneering reverse-engineering approach to opening the AI black box. We work with communities impacted by AI to build their technical capacity and provide them with tools to measure and expose AI impacts, seek redress and participate in AI policy and decision-making.

  • Data Pollution

    If we are taking cars off the road due to their environmental impact, should we consider banning AI processes that don't justify their climate footprint? We work to incorporate the environmental costs of data and AI processes into AI auditing and impact measurement.

  • AI Policy

    Our hands-on experience and socio-technical approach to AI auditing have allowed us to define the metrics and benchmarks needed to inform AI policy. Our data and expertise empower policymakers to draft better, enforceable regulation and oversight mechanisms for AI solutions.

  • Strategic Litigation

    AI auditing measures bias dynamics that civil society can then use to expose discrimination and seek redress. Through the Action for Algorithmic Justice (AxJA) campaign, we challenge the lack of transparency and accountability in the tech industry and help impacted communities assert their rights.

Recent Publications

Our Collaborators