Amazon awards grant to UI researchers to decrease discrimination in AI algorithms

A team of University of Iowa researchers received $800,000 from Amazon and the National Science Foundation to limit the discriminatory effects of machine learning algorithms.


Larry Phan

University of Iowa researcher Tianbao Yang sits at his desk, where he works on AI research, on Friday, April 8, 2022.

Arabia Parkey, News Reporter


University of Iowa researchers are examining the discriminatory qualities of artificial intelligence and machine learning models, which can be unfair to people based on race, gender, or other characteristics because of patterns in the data they are trained on.

A University of Iowa research team received an $800,000 grant funded jointly by the National Science Foundation and Amazon to decrease the possibility of discrimination through machine learning algorithms.

The three-year grant is split between the UI and Louisiana State University.

According to Microsoft, machine learning models are files trained to recognize specific types of patterns.

Qihang Lin, a UI associate professor in the department of business analytics and grant co-investigator, said his team wants to make machine learning models fairer without sacrificing an algorithm’s accuracy.

RELATED: UI professor uses machine learning to indicate a body shape-income relationship

“People nowadays in [the] academic field ladder, if you want to enforce fairness in your machine learning outcome, you have to sacrifice the accuracy,” Lin said. “We somehow agree with that, but we want to come up with an approach that [does] trade-off more efficiently.”
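As a rough illustration of that trade-off (a minimal sketch, not the team's actual method), a training objective can add a fairness penalty to the usual error term. Everything below is hypothetical: the penalty compares average predicted risk between two groups, and the weight lam controls how much accuracy is exchanged for fairness.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_loss(w, X, y, group, lam=1.0):
    """Cross-entropy loss plus a demographic-parity penalty."""
    p = sigmoid(X @ w)
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Fairness term: gap between mean predicted risk for the two groups.
    gap = abs(p[group == 0].mean() - p[group == 1].mean())
    # lam trades accuracy (ce) against fairness (gap): lam = 0 ignores
    # fairness entirely; large lam can hurt accuracy.
    return ce + lam * gap
```

Tuning lam more efficiently, so that less accuracy is given up for the same gain in fairness, is the kind of improvement the researchers describe pursuing.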

Lin said discrimination created by machine learning algorithms can be seen in software that disproportionately predicts rates of recidivism — a convicted criminal’s tendency to re-offend — for different social groups.

“For instance, let’s say we look at in U.S. courts, they use a software to predict what is the chance of recidivism of a convicted criminal and they realize that that software, that tool they use, is biased because they predicted a higher risk of recidivism of African Americans compared to their actual risk of recidivism,” Lin said.
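Audits of that kind of bias often compare error rates across groups. The sketch below is illustrative only, using synthetic data in place of any court software's actual inputs or outputs: it checks whether people who did not re-offend were flagged as high risk at different rates depending on group.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    # Fraction of people who did NOT re-offend but were flagged high risk.
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)   # actual re-offense outcomes (synthetic)
y_pred = rng.integers(0, 2, 1000)   # model's high-risk flags (synthetic)
group = rng.integers(0, 2, 1000)    # protected-attribute label (synthetic)

for g in (0, 1):
    mask = group == g
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: false positive rate = {fpr:.3f}")
```

A large gap between the two printed rates is the signature of the bias Lin describes.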

Tianbao Yang, a UI associate professor of computer science and grant principal investigator, said the team proposed a collaboration with Netflix to encourage fairness in the process of recommending shows or films to users.

“Here we also want to be fair in terms of, for example, users’ gender, users’ race, we want to be fair,” Yang said. “We’re also collaborating with them to use our developed solutions.”
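One simple fairness check in that spirit (an illustration, not Netflix's or the team's actual metric) is whether titles are recommended to different user groups at comparable rates. All names and data below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
recommended = rng.integers(0, 2, 500)  # 1 if a title was shown to the user
gender = rng.integers(0, 2, 500)       # user group label (synthetic)

rate_0 = recommended[gender == 0].mean()
rate_1 = recommended[gender == 1].mean()
print(f"recommendation-rate gap between groups: {abs(rate_0 - rate_1):.3f}")
```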

Another instance of machine learning unfairness arises in determining which neighborhoods should be allocated medical resources, Lin said.

RELATED: UI College of Engineering uses artificial intelligence to solve problems across campus

In this process, Lin said, the “health” of a neighborhood is determined by examining household spending on medical expenses. “Healthy” neighborhoods are allocated more resources, creating a bias against lower-income neighborhoods that may spend less on medical care.

“There’s a bad cycle that kind of reinforces the knowledge the machines mistakenly have about the relationship between the income, medical expense in the house, and the health,” Lin said.
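That cycle can be shown with a toy simulation (an assumption for illustration, not the researchers' model): resources follow observed spending, and added resources raise future spending, so the initial gap keeps growing.

```python
# Observed household medical spending, the proxy the system uses for "health."
spending = {"high_income": 100.0, "low_income": 40.0}

for round_ in range(3):
    # The biased proxy: whichever neighborhood already spends more looks
    # "healthier" and receives the resources this round.
    winner = max(spending, key=spending.get)
    # Added resources enable even more spending next round, widening the gap.
    spending[winner] += 20.0
    print(f"round {round_}: {spending}")
```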

Yao Yao, a UI third-year doctoral candidate in the department of mathematics, is conducting various experiments for the research team.

She said the group’s focus is important because the researchers are doing more than simply reducing errors in machine learning predictions.

“Previously, people only focus on how to minimize the error but most time we know that the machine learning, the AI will cause some discrimination,” Yao said. “So, it’s very important because we focus on fairness.”