Identifying the Risks of AI
Irresponsible AI leaves people out of new data-driven possibilities, denies opportunities to large population groups, creates risks, and erodes trust in innovation.
AI regulation around the globe will soon require increased transparency, accountability, and a risk-mitigating approach to AI development.
Adversarial audits are the first step in identifying risks, prompting deeper research, and opening up conversations about the impacts of AI.
An AI audit is a crucial practice for inspecting and evaluating entire AI systems within their specific contexts. An AI system typically includes multiple algorithms or models, and, depending on the scope, an audit can assess one or more of them.
Adversarial algorithmic auditing combines a systematic methodology with an agile approach, enabling assessment against regulations, standards, and real-world impact while remaining adaptable to different AI systems. We are committed to advancing adversarial audits as a crucial tool for reconciling innovation potential with societal impact. These audits offer a means to evaluate systems that are typically out of reach, providing transparent oversight where it is needed most.
When audit results are made public, they increase transparency and accountability.
Understanding the algorithm and its operational environment through stakeholder mapping and contextual analysis.
Assessing biases, inefficiencies, and anomalies using data analysis, network interactions, and impact evaluations (a minimal sketch of one such check appears after this list).
Reverse engineering and replicating system processes through research with affected parties, enabling a comprehensive assessment of algorithmic impacts.
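To make the data-analysis step more concrete, here is a minimal sketch, in Python with pandas, of one common check: comparing outcome rates across demographic groups (a demographic-parity comparison). The column names and records are hypothetical illustrations, not data from any Eticas audit, and a real audit would pair such metrics with qualitative research alongside affected communities.

```python
# Minimal sketch of a disparity check of the kind used in the
# data-analysis step of an adversarial audit. All data are hypothetical.
import pandas as pd

# Hypothetical audit sample: each row is one decision made by the system,
# with the observed outcome and the demographic group of the person affected.
records = pd.DataFrame(
    {
        "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
        "favorable_outcome": [1, 1, 0, 0, 0, 1, 0, 1],
    }
)

# Rate of favorable outcomes per group.
rates = records.groupby("group")["favorable_outcome"].mean()

# Demographic-parity difference: gap between the best- and worst-treated group.
# A large gap flags a potential bias that warrants deeper investigation.
parity_gap = rates.max() - rates.min()

print(rates)
print(f"Demographic-parity difference: {parity_gap:.2f}")
```

A single metric like this never settles the question on its own; it serves to surface patterns that the audit then examines through the socio-technical lens described below.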
Adversarial audits follow a socio-technical approach, recognizing the interplay between algorithmic processes and social dynamics. They offer a robust framework for evaluating AI systems, ensuring transparency and accountability in algorithmic decision-making.
Want to know more about adversarial audits?
Take a look at our published Adversarial Algorithmic Auditing Guide.
An in-depth examination of the impact of social media on the representation and voice of migrants and refugees in Europe, and of the challenges and opportunities for promoting fair representation.
An investigation into VioGén, an algorithm that determines the level of risk faced by a victim of gender-based violence and establishes her protection measures in Spain.
This audit focuses on ride-hailing platforms in Spain and examines how they challenge regulation. It aims to uncover the harms of their algorithms: competitive distortions, labor issues, and geographic pricing bias.
A study on the intersection of facial recognition technology and disability, shedding light on potential biases and challenges.
The first adversarial audit of an AI criminal justice system in Europe: the RisCanvi tool. Designed to assess inmates’ risk of recidivism, it has been in use in Catalonia, Spain, since 2009, influencing parole and sentencing decisions.
Understanding the context and impact of the systems under analysis is crucial. This requires actively listening to the voices of those directly affected. This principle guides the process of reverse engineering, unraveling biases, inefficiencies, and anomalies within AI systems.
Engagement with individuals and communities affected by AI technologies yields valuable insights into their experiences, concerns, and needs. This firsthand knowledge directs audit efforts towards addressing real-world challenges, ensuring analyses are grounded in the lived realities of those affected.
In essence, listening to the voices of the people grounds the reverse engineering of AI systems in empathy, integrity, and a profound commitment to fostering fairness, equity, and accountability in algorithmic decision-making.
For the Eticas Foundation, it’s not just about understanding technology; it’s about understanding its human impact.
At the Eticas Foundation, conducting rigorous adversarial audits that illuminate the social impact of AI systems is a priority. However, this effort cannot be accomplished alone. Collaborative partners, organizations dedicated to social justice, human rights, and responsible technology, are essential to joining forces in these critical audits.
Whether you are a civil society organization, research institution, advocacy group, or community representative, your insights, expertise, or data on AI systems are invaluable. If you represent a community impacted by an algorithm or system and lack technical understanding, please don’t hesitate to contact us.
By pooling collective knowledge and resources, collaborative efforts can scrutinize and assess AI systems, pinpoint areas for improvement, and advocate for substantive change. Let’s make a difference in ensuring that AI technologies serve the greater good and uphold the values of equity, justice, and transparency.
Discover how to contribute and become part of the adversarial auditing movement.