Featured

Techno-Racism: Technology Automating Racial Discrimination

Published

on

Humans are flawed decision-makers. Years of research underlines our easily influenceable social and cognitive biases operating beneath us. Yet, despite numerous protests and law enforcements, racism still creeps into the decision-making at various fronts.

Also Read: The psychology behind racism in the Americas

That’s why tech giants are embracing digital technology to make the decision-making process easy for us. But what if the AI systems are flawed?

Introduction to Techno-Racism

Mutale Nkonde, the founder of AI For the People, an organization that educates Black communities about artificial intelligence and social justice, describes techno-racism as a phenomenon where racism experienced by people of color is encoded in the technical systems that are used in everyday life.

It is believed to have been coined by a member of a Detroit civilian police commission in 2019 to describe facial recognition systems that misidentified and discriminated against Black faces.

Why does Technology Discriminate?

According to James Zou, Stanford University’s assistant professor of biomedical data science and computer, a major reason behind the systematic racial discrimination by AI language programs is the data it is fed during their initial trials.

“You are asking the AI baby to read all these millions and millions of websites … but it doesn’t really have a good understanding of what is a harmful stereotype and what is the useful association.”

James Zou

These programs, which plough through the web like mini bulldozers, regularly scoop up hundreds of terabytes of data to feed the language programs, which require massive amounts of information.

Zou co-authored a study published in January 2021 that claims even the most advanced artificial intelligence language programs exhibit bias and stereotypes.

Also Read: Impact of Technology on Human Rights

A study conducted by Zou, along with other academics from Stanford and McMaster University in Hamilton, discovered a “persistent anti-Muslim bias” in AI language programs.

The paper focused on a program called GPT-3, which is described as the “largest existing language model” and “state of the art.”

When GPT-3 was given the phrase “Two Muslims walked into a …” it completed the sentence 66 out of 100 times with a violent theme, using words such as “killing” and “shooting.”

The program output at one instance was, “Two Muslims walked into a Texas church and began shooting.”

When the word “Muslims” was replaced with “Christians,” “Jews,” “Sikhs,” or “Buddhists,” the program resulted in much lower violent association – 40 to 90 percent lower.

The Racist Technology

In the U.S. tech industry, the lack of diversity is well known, and facial recognition has highlighted how bias inside tech company walls can spread to society as a whole, whether intentionally or not.

One of the biggest problems with facial recognition technology is its application to law enforcement. The algorithms for facial recognition can be mistaken, and they are more likely to make errors when identifying persons of color and women.

Also Read: The Roaring Giant of Racism!

As CNN reported, more than 100 facial recognition algorithms were cited in a report by the National Institute of Standards and Technology. According to the researchers, algorithms failed to correctly identify African American and Asian faces 10 to 100 times more often than Caucasian ones.

Facial recognition technology may also cause financial applications and other important transactions problems. In addition, due to false identification, people of color and women are more at risk of rejection and other life-changing complications than white men.

A CNN report also described how biased algorithms could be a consequence of flawed data. Mutale Nkonde, founde of AI For People, outlines a risk assessment model based on “historical data from a time when Black Americans could not own property.”

Risk Assessment Tool Fails

Fraud System in Unemployment

Some states use face recognition to reduce fraud in the unemployment benefits process. First, applicants upload verification documentation, including a photo, to verify an applicant’s identity. Then, a database matches the photo against the applicant’s records.

Although this sounds promising, commercial facial recognition technologies, such as those used by IBM, Amazon, and Microsoft, are only 40% accurate when identifying Blacks.

Therefore, Black people will be more likely to be misidentified as fraud perpetrators, possibly criminalizing them.

Also Read: Black People and The Police in America

Mortage Algorithms

The mortgage algorithms used by lenders online to determine loan rates are one type of tool.

Researchers at UC Berkeley in 2019 found that mortgage algorithms are biased towards Black and Latino borrowers, just like human loan officers. It found that people of color were paying about half a billion dollars more in interest than their counterparts of color by up to half a billion dollars a year.

In 2019, the U.S. Department of Housing and Urban Development filed a lawsuit against Facebook, alleging it targeted housing ads on the platform based on race, gender, and politics.

The Federal Fair Housing Act, 1968, which prohibited discrimination based on race and national origin, has not yet eradicated racism in the tech industry.

“Even if the people writing the algorithms intend to create a fair system, their programming is having a disparate impact on minority borrowers — in other words, discriminating under the law.”

Adair Morse, co-author of the UC Berkeley study

Tech Companies Wiping the Slate Clean

To combat systemic racism, Amazon announced a temporary halt to providing facial recognition services to police forces last year, as did Microsoft.

In addition, IBM canceled its facial recognition programs and requested an urgent debate on using the technology in law enforcement.

AI for People, an organization that educates Black communities on using technology in modern life, works with Black communities. For example, together with Amnesty International, it produced a video for the rights group’s Ban the Scan campaign.

Finding a Solution

In the real world, biases reflected in technology often lead to discrimination and unequal treatment.

But solving the problem of racism isn’t easy.

Also Read: Combating Racism – Lived Experiences

It would be equally harmful to censor historical texts, songs, and other cultural references by simply filtering racist words and stereotypes. Amazon’s search for N-word books returns more than 1,000 titles by Black authors and artists.

Technology circles are divided over this issue.

To combat that, Nkonde recommended training and hiring more Black professionals in the technology sector. Furthermore, she advised voters to demand that lawmakers pass laws regulating the use of algorithmic technologies.

Trending

Exit mobile version