Algorithmic bias poses tough ethical questions for AI

It is 2014, and a young woman has just applied for a job as an Amazon software engineer. As part of a new initiative to automate hiring at the company, her application is reviewed by an artificial intelligence algorithm instead of a human panel. Eventually, she receives the decision: she has been rejected. The rejection in itself may not seem out of the ordinary, but a closer look at exactly why the algorithm made this decision reveals a few key patterns: the algorithm downgraded applicants who graduated from women’s colleges and gave lower ratings to resumes containing the word “women’s” (as in “women’s soccer team”). The algorithm even picked up on differences between the verbiage used by male and female applicants, preferring applications that included words like “executed” or “captured,” which are more commonly used by men. With these results in mind, it would seem the algorithm rejected this applicant largely because she was a woman. Surely not, some may think: isn’t artificial intelligence supposed to be objective?

Wrong. While artificial intelligence has long been heralded by some as a panacea for discrimination, its reputation for impartiality and neutrality is misconceived. The same bias and unfairness that has riddled human society for centuries pervades the digital world as well. Just like humans, artificial intelligence can adopt bias and make unfair, discriminatory decisions. And just like human bias, artificial intelligence bias, also known as algorithmic bias or machine learning bias, can trigger serious consequences.

Although Amazon quickly caught on to the hiring algorithm’s systematic discrimination and discontinued the tool after just a year, such vigilance may not always be the case. When left unchecked, biased AI algorithms can wreak havoc and amplify inequalities, especially given the growing influence of AI in society.
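The mechanism behind a failure like Amazon’s is easy to reproduce in miniature. The sketch below is purely hypothetical — the data, words, and model are invented for illustration, not Amazon’s actual system. A simple word-weight model is trained on past hiring decisions; because the historical decisions disfavored one group, the model learns to penalize a word that merely signals gender:

```python
# Hypothetical toy example: a one-layer model trained on past
# (biased) hiring decisions. Not Amazon's actual system.
def train_word_weights(resumes, decisions, epochs=20, lr=0.1):
    """Perceptron-style training: one weight per word.

    decisions: +1 for hired, -1 for rejected.
    Returns {word: learned_weight}.
    """
    vocab = {w for text in resumes for w in text.split()}
    weights = {w: 0.0 for w in vocab}
    for _ in range(epochs):
        for text, label in zip(resumes, decisions):
            score = sum(weights[w] for w in text.split())
            pred = 1 if score >= 0 else -1
            if pred != label:  # update weights only on mistakes
                for w in text.split():
                    weights[w] += lr * label
    return weights

# Invented historical data: past panels rejected resumes
# mentioning "women's", so the word itself becomes a penalty.
resumes = [
    "executed project roadmap",
    "captured market share",
    "captained women's soccer team",
    "led women's chess club",
]
decisions = [1, 1, -1, -1]  # +1 = hired, -1 = rejected

weights = train_word_weights(resumes, decisions)
print(weights["women's"])  # negative: the word itself is penalized
```

Nothing in the training procedure is malicious; the model simply compresses the historical record, prejudice included, into its weights.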

The fact remains that most of today’s AI algorithms are not designed to serve all demographics effectively. Many artificial intelligence algorithms have been shown to be significantly less accurate when applied to women or people of color.

Consider the experiences of Joy Buolamwini, a Ghanaian-American-Canadian computer scientist. During her time at MIT, Buolamwini was working on a project using facial analysis technology when she noticed that the computer was consistently unable to recognize her face. Frustrated, and attempting to identify the problem, Buolamwini put on a white mask to test the algorithm; it detected the mask as a face, but not her own.

A later study conducted by Buolamwini and Timnit Gebru evaluated three facial analysis algorithms designed for gender classification and found them to have error rates as high as 34.7% for darker-skinned women. By contrast, the maximum error rate for classifying lighter-skinned men was just 0.8%. In other experiments, Buolamwini found that facial analysis algorithms falsely classified prominent Black women, from Serena Williams to Sojourner Truth to Michelle Obama, as men.
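Disparities like these surface only when evaluation is disaggregated — when error rates are broken down by demographic group rather than reported as a single aggregate number. A minimal sketch of such an audit, using made-up predictions and labels rather than the study’s actual data:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate separately for each group.

    records: list of (group, true_label, predicted_label) tuples.
    Returns {group: error_rate}.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit data: the aggregate accuracy looks fine,
# but one group bears all of the errors.
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
]
print(error_rates_by_group(records))
# The overall error rate (2/8) hides a 50% error rate for one group.
```

A single headline accuracy figure can look excellent while a subgroup experiences dramatically worse performance, which is why per-group reporting of the kind Buolamwini and Gebru used matters.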

Yet Buolamwini’s studies are just one example of the countless inequities embedded in today’s artificial intelligence algorithms. The underlying biases of today’s technology find their way into places like hospitals and courtrooms as well, where a single decision can mean the difference between life and death.

In 2019, a study investigating a widely used artificial intelligence algorithm in healthcare found striking signs of racial bias. Black patients had to be deemed much sicker than white patients to be assigned the same level of care by the algorithm; in other words, the algorithm was less likely to refer Black patients for the care they needed than it was white patients. Without significant awareness and oversight, artificial intelligence will exacerbate already significant inequalities for marginalized communities in healthcare.
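The 2019 study traced this bias to a proxy choice: the algorithm predicted future healthcare costs as a stand-in for medical need, and because less money had historically been spent on Black patients’ care, the algorithm systematically underestimated how sick they were. A hypothetical illustration of why that proxy fails, with invented patients and numbers:

```python
# Hypothetical numbers illustrating proxy bias: ranking patients
# by past cost instead of medical need. Not the study's real data.
patients = [
    # (name, true_need_score, past_annual_cost_usd)
    # patient_b is sicker, but historically less money
    # was spent on their care.
    ("patient_a", 6, 12000),
    ("patient_b", 9, 7000),
]

def rank_by_cost_proxy(patients):
    """Rank patients for extra care by past cost (the flawed proxy)."""
    return sorted(patients, key=lambda p: p[2], reverse=True)

def rank_by_need(patients):
    """Rank patients by actual medical need (the intended target)."""
    return sorted(patients, key=lambda p: p[1], reverse=True)

# The cost proxy puts patient_a first even though patient_b is
# sicker, so patient_b is less likely to be referred for care.
print([name for name, _, _ in rank_by_cost_proxy(patients)])
```

The model predicts its proxy accurately; the harm comes from the gap between the proxy (spending) and the quantity that actually matters (need).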

Artificial intelligence has also been implemented in today’s legal system, where lawyers and officials use it to assist in drafting contracts, reviewing documents, and even recommending judicial decisions. Yet, as in healthcare and corporate settings, algorithmic bias is found in the courtroom as well. A ProPublica investigation of the courts of Broward County, Florida, revealed that an AI risk-assessment algorithm falsely flagged Black defendants as being at “high risk” of reoffending at nearly twice the rate of white defendants. From the courtroom to the hospital, deploying algorithms that are not fully inclusive of all identities, at scale and in high-stakes environments, can lead to dangerous and even deadly consequences.

The present advancements of artificial intelligence are a double-edged sword, with the potential both to drive major societal improvements and to cause serious harm. Because of its dependence on data, artificial intelligence has a natural tendency to adopt the prejudices of society. It is crucial to employ AI with an awareness of its risks and with conscious efforts to enforce fairness. AI is already revolutionizing the world, but to fully harness its potential for social good, we must revolutionize AI itself.