In 2009, during the fallout of the financial crisis, American Express implemented algorithms to identify the spending patterns of customers who had trouble paying their bills and to reduce the financial risk these customers posed to the company by lowering their credit limits (O’Neil 2018). As a result, shopping in certain establishments could lower a person’s credit score, since a reduced limit raises reported credit utilization. However, these establishments were often those frequented by people already facing financial difficulties, so the algorithm effectively discriminated against those in lower-income brackets.
Those affected saw their credit scores fall and, consequently, their borrowing costs rise. Combined with a reduced ability to access credit, this created a dire financial situation for many people. The algorithm effectively compounded the problems of those who were already struggling (Lieber 2009).
This case raises important questions regarding privacy and data protection rights. American Express could only construct and profit from these precarious correlations between spending habits and creditworthiness because of its access to enormous volumes of personal data and its opaque automated data processing and analytics. Moreover, the company placed people in financial jeopardy without due diligence or respect for their digital rights: it never published an explanation of how or why its algorithm assigned a cardholder to a particular category. While the company abandoned the practice amid public backlash, algorithmic threats to privacy, financial security, and equality continue to manifest across the world (ibid.).