Name Your Bias: AI’s Fairness Challenge in Hiring

Exploring AI's role in hiring, this article delves into bias challenges within automated recruitment tools and the impact on fair hiring practices.

4

identical resumes

1

AI chatbot- ChatGPT

1.8B

billion visits by April 2023

In 2004, a study published in the American Economic Review exposed that job applicants with White-sounding names received more callbacks than those with Black-sounding names (Bertrand and Mullainathan 2004). Fast forward twenty years, and this bias is not only around, now it has gone digital.

This past April and May, we were wondering if generative AI systems, like ChatGPT, would carry these same biases when evaluating resumes. Could this be? Did AI show favoritism based on names of candidates? We wanted to know if AI is now mirroring the biases we have seen in society for more than just 20 years. For this, we created four identical resumes, with the same qualifications, experience and skills. The only difference were the names and places of birth. Then, we asked ChatGPT to rank these candidates from best to worst.

In our initial test in April, ChatGPT ranked the candidates showing that some were preferred over others. Let’s highlight that: even though the experience was identical, some candidates were ranked higher. There was no clear reason for this prioritization, or disclaimer to double check the rank since they were equally suitable for the role. This lack of explanation raised up all the red flags that there was an implicit bias in the AI decision making process.

By May, we saw a positive shift. ChatGPT acknowledged that all candidates had the same backgrounds, experiences, and qualifications. It even included a recommendation noting that decisions should consider soft skills, specific achievements, or culture fit. This change showed us that the system had evolved to recognize and address possible bias.

Check it out!

01

The implications of our findings are not small. In a world where AI makes more and more important decisions, having biased systems can lead to big problems. If the biased results from April were automated in real-world hiring, qualified candidates could be unfairly excluded, keeping inequalities alive and unseen. There is a real risk that AI will weave this unfairness into their systems and beyond, it would fortify the resume bias based on names or implications of ethnicity (Adamovic, n.d.). The good news is that the changes we saw in May show that AI systems CAN learn, adapt and improve. But this also raises questions about how consistent and reliable they are, particularly how often they are being reviewed. With ChatGPT getting 1.8 billion visits by April 2023 (one year before our experiment) the potential for spreading biased decisions is massive (Mortensen 2024). All of these are reasons why we are insisting on our end to innovate WITH accountability.

02

Trick question, of course it does! We all have biases whether related to gender, race, weight, name, accent, or age. Recognizing and addressing these biases is crucial for fair and equitable decisions. More than 30 years ago research found that trying not to think about our biases can make them more pronounced, affecting our behavior even more (Wegner et al. 2004). This means that acknowledging and working on biases is better than just trying to ignore them –or not giving a second thought when technology shows them. However, we all also have something called the bias blind spot which means that we often don’t realize our own biases, which can influence our decisions. This is why transparency and accountability in AI is so important. This way we catch those errors even before they influence others.

03

It is reported the AI recruitment is only going to grow. As companies rely more on AI, we need to make sure that these systems do not perpetuate existing biases. Auditing (like ours) is a must for keeping things fair. In today’s rough job market, we need systems that truly celebrate diversity and eliminate bias. Many of us have faced subtle discrimination, like having our names flagged by spell-checkers, subtly implying that they are wrong. Others have experienced the need to change our names to improve our chances of getting called back or diminish the chance of people “spotting” us. This simply needs to stop.

04

We are dedicated to auditing the AI systems, to make sure that bias is not one of its inherent characteristics. We’ll keep pushing for unbiased hiring practices and the responsible use of AI.

Sincerely,
Eticas Foundation Team 😉

Let’s work together to build a present where AI is

Fair, Auditable and Safe for All.

Stay Informed, Stay Ahead

Sign up for our newsletter to receive updates on AI accountability, our latest projects, and how you can make an impact. Your details will be securely stored, and we’ll reach out as soon as we’re ready to share exclusive insights.