
How in the World Could Artificial Intelligence Be Racist in Today’s Environment?

Most of us have assumed that today’s robots would possess an artificial intelligence that is cold, calculating, and completely objective. However, it seems that several of humanity’s imperfections have found their way into our machines as well. As it turns out, many of these A.I. and machine-learning systems have developed blind spots when it comes to both minorities and women.

How in God’s name has this happened in today’s age of tolerance and inclusion? It is particularly disturbing when you consider that many companies, several governmental organizations, and hospitals now employ A.I. and other machine-learning tools for tasks ranging from treating and preventing diseases and injuries to forecasting the creditworthiness of loan applicants.

These gender and racial biases have appeared in several ways. Recently, Beauty.AI was expected to be totally objective in judging a prominent international beauty contest, relying on factors such as facial symmetry. With this algorithm, Beauty.AI evaluated about 6,000 photos that had been sent in from more than 100 countries to identify the most beautiful people. Of the 44 winners, almost all were white, a few were Asian, and just one actually had dark skin. This occurred despite the fact that there were quite a few submissions from people of color, including very large groups from both Africa and India. Even worse, in 2015 it was discovered that Google’s photo software had tagged two black users as “gorillas.” It is believed that this happened because of a lack of example profiles representing people of color in the reference database.

Obviously, the primary cause of this problem is the AI’s reliance on its training data. Even if the data is accurate, it can still produce stereotyping. For instance, a machine could incorrectly label a nurse as female simply because the data indicates that fewer men become nurses. In another example, researchers used a dataset that included white and brown cats along with black dogs. Given the data provided, the algorithm incorrectly tagged a white dog as a cat. In other cases, the algorithm can be trained by the people using it, which can result in the machine adopting the biases of those human users.
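The cats-and-dogs example above can be sketched in a few lines of Python. This is a toy illustration with made-up data, not the researchers’ actual experiment: a simple nearest-neighbor classifier trained only on light-colored cats and dark-colored dogs has nothing to go on except fur color, so it confidently mislabels a white dog as a cat.

```python
def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`.
    Each example is a ({feature: value}, label) pair; distance is the
    summed absolute difference over the example's numeric features."""
    def dist(a, b):
        return sum(abs(a[k] - b[k]) for k in a)
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# Hypothetical training set: brightness 0.0 = black fur, 1.0 = white fur.
# Every cat is light-colored and every dog is dark-colored, so color
# becomes a proxy for species.
train = [
    ({"brightness": 0.9}, "cat"),   # white cat
    ({"brightness": 0.6}, "cat"),   # brown cat
    ({"brightness": 0.1}, "dog"),   # black dog
    ({"brightness": 0.2}, "dog"),   # black dog
]

# A white dog is closest (in brightness) to the white cat,
# so the model predicts "cat".
print(nearest_neighbor(train, {"brightness": 0.95}))  # → cat
```

The model is not malfunctioning; it is faithfully reproducing a spurious correlation that the unbalanced training data handed it, which is exactly the mechanism behind the biases described above.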

In 2016, researchers tried to weed the gender biases out of their machine-learning algorithm. In a paper entitled “Man is to Computer Programmer as Woman is to Homemaker?”, the researchers sought to separate legitimate correlations from biased ones. A legitimate correlation might look like “man is to king as woman is to queen,” while a biased one might be “man is to doctor as woman is to nurse.” By “using crowd-worker evaluation as well as standard benchmarks, [the researchers] empirically demonstrate that [their] algorithms significantly reduce gender bias in embeddings while preserving its [sic] useful properties such as the ability to cluster related concepts and to solve analogy tasks,” the study concluded. Today, these same researchers are applying the procedure to remove racial biases as well.
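The analogy tasks mentioned in that quote work by simple vector arithmetic: “man is to king as woman is to ?” is answered by the word whose vector is closest to king − man + woman. The sketch below uses tiny hand-made 3-dimensional vectors to show the mechanics; real embeddings such as word2vec are learned from huge text corpora, and these numbers are purely illustrative.

```python
import math

# Toy, hand-made word vectors (illustrative values only).
vecs = {
    "man":   [1.0, 0.0, 0.2],
    "woman": [0.0, 1.0, 0.2],
    "king":  [1.0, 0.0, 0.9],
    "queen": [0.0, 1.0, 0.9],
    "apple": [0.5, 0.5, 0.0],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' with the word nearest to b - a + c."""
    target = [vb - va + vc for va, vb, vc in zip(vecs[a], vecs[b], vecs[c])]
    candidates = (w for w in vecs if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(vecs[w], target))

print(analogy("man", "king", "woman"))  # → queen
```

In a real embedding trained on biased text, the same arithmetic can return “nurse” for “man is to doctor as woman is to ?” — which is precisely the kind of association the debiasing algorithm in the paper is designed to remove while keeping legitimate analogies like king/queen intact.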

Adam Kalai, a Microsoft researcher who co-authored the paper, stated that “we have to teach our algorithms which are good associations and which are bad the same way we teach our kids.”

Researchers also suggest that using different algorithms to classify the two groups represented in a particular dataset, instead of applying the same measurement to everyone, could greatly help reduce these biases in artificial intelligence.
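One concrete version of this idea, sketched below with made-up loan-style scores (the numbers and cutoffs are hypothetical, not from any cited study): if two groups have different score distributions, a single global cutoff can approve one group far more often than the other, whereas choosing a separate cutoff per group can equalize the approval rate.

```python
# Hypothetical score distributions for two applicant groups.
group_a = [0.9, 0.8, 0.7, 0.4, 0.3]
group_b = [0.6, 0.5, 0.4, 0.2, 0.1]

def threshold_for_rate(scores, rate):
    """Smallest cutoff that approves `rate` of the group (score >= cutoff)."""
    k = round(len(scores) * rate)
    return sorted(scores, reverse=True)[k - 1]

# One global cutoff of 0.65 approves 3 of 5 in group A but 0 of 5 in group B.
single = (sum(s >= 0.65 for s in group_a), sum(s >= 0.65 for s in group_b))
print(single)  # → (3, 0)

# Per-group cutoffs chosen so each group is approved at the same 40% rate.
t_a = threshold_for_rate(group_a, 0.4)  # → 0.8
t_b = threshold_for_rate(group_b, 0.4)  # → 0.5
print(t_a, t_b)
```

Whether equalizing approval rates is the right fairness goal is itself debated; this sketch only shows the mechanical point that one measurement applied to everyone is not automatically neutral.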
