Automating (in)justice?: an adversarial audit of RisCanvi

A tool designed to assess inmates’ risk of recidivism should be robust and reliable. This adversarial audit, the first of its kind conducted on an AI system used in the criminal justice system in Europe, makes some shocking discoveries.

Lawmaker or Lawbreaker? How FaceNet Got It Wrong

FaceNet’s errors reveal AI’s potential for misidentification, highlighting cases where even prominent figures were incorrectly flagged. This article discusses the implications for privacy and security in facial recognition technology.

Name Your Bias: AI’s Fairness Challenge in Hiring

Exploring AI’s role in hiring, this article delves into bias challenges within automated recruitment tools and their impact on fair hiring practices.