Artificial intelligence, or AI, can set sentences for criminal offenders. It can decide whether or not you can buy a car. It can also advertise to you, the customer. Here, "can" means not merely that AI has the capability; rather, it is permitted to do these things.
The decisions AI models make are clearly woven into modern, everyday life. Their ability to process large amounts of data makes them seem almost incontestable, and more and more these models are being used to make decision-making efficient.
These models are actively learning, meaning they interpret input data and react accordingly. Generative AI, especially, is judged by its output. But one problem with these decision-making models worries researchers, and they are keen to correct it: bias.
Two research collaborators, one at the University of Iowa's Tippie College of Business and the other now at Texas A&M, observed these biases in AI models and have been analyzing them for several years.
AI models understand data through a process known as machine learning, in which algorithms process and interpret data and reflect what they learn in their results.
Recently, the researchers applied for a grant from the National Science Foundation to fund research on models used to make loan decisions and distribute online discounts, among other applications. They found the models made unfavorable decisions not only between male and female groups, but across racial, ethnic, and age groups as well.
Qihang Lin, a research fellow and associate professor at Tippie, and Tianbao Yang, an associate professor at Texas A&M, earned the $800,000 grant in 2022 and recently published their findings in two papers, one in 2023 and a second currently under peer review.
Lin said machine learning models typically begin by interpreting data with a ranking. The models test data variables to see who or what is most likely to do something.
In the case of business analytics, a model tests data to determine which person, identified by a set of characteristics, is most likely to respond to an online discount, and is therefore the most valuable person to send one to.
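As a simplified illustration of that kind of ranking (a hypothetical sketch with invented features and weights, not the researchers' actual model), a hand-set scoring function can stand in for a trained model:

```python
# Hypothetical sketch of a ranking model: score each customer by estimated
# purchase likelihood, then sort. Feature names and weights are invented
# for illustration; a real model would learn them from data.

customers = [
    {"name": "A", "past_purchases": 12, "days_since_visit": 3},
    {"name": "B", "past_purchases": 1,  "days_since_visit": 40},
    {"name": "C", "past_purchases": 7,  "days_since_visit": 10},
]

def purchase_score(customer):
    # Simple linear score standing in for a trained model.
    return 0.05 * customer["past_purchases"] - 0.01 * customer["days_since_visit"]

# Rank customers from most to least likely to buy; the top of the
# list is who the discount would be sent to.
ranked = sorted(customers, key=purchase_score, reverse=True)
for c in ranked:
    print(c["name"], round(purchase_score(c), 3))
```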
AI can also be used to decide whether or not loan and mortgage applicants qualify; in many cases, denial means no sale. The project also includes Amazon, which brings a business perspective on using AI models to improve customer experiences.
Lin said these models will be biased toward individuals who belong to groups that rank better statistically, at least according to the model. Discounts are one thing, though; more consequential applications like loan and mortgage decisions can be harmed by bias in the model.
He said the models they worked with initially had a 50 percent stress point, or threshold, meaning individuals ranked above a 50 percent likelihood of buying a product would receive a discount.
Adjusting these thresholds to 20 percent or 70 percent, cutting the ranked list at different points, exposed problems. At certain cutoffs, only one group, such as males, was represented in the eligible portion of the list, meaning only that group would receive the discount, a clear demonstration of bias.
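A toy example, using invented scores, shows how moving the cutoff can leave one group entirely outside the eligible portion of the list:

```python
# Toy demonstration (invented data) of how a likelihood cutoff can exclude
# an entire group. Each entry is (group, predicted purchase likelihood).
scores = [
    ("male", 0.81), ("male", 0.74), ("male", 0.72),
    ("female", 0.65), ("female", 0.58), ("female", 0.52),
]

def eligible_share(group, threshold):
    members = [s for g, s in scores if g == group]
    return sum(s >= threshold for s in members) / len(members)

for threshold in (0.5, 0.7):
    print(f"cutoff {threshold}: "
          f"male {eligible_share('male', threshold):.0%}, "
          f"female {eligible_share('female', threshold):.0%}")
# At a 0.5 cutoff both groups qualify; at 0.7 only the male group does.
```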
Lin said the immediate solution to these ranking problems was to construct the list differently, blending people from different groups throughout it so that the model would inherently make fairer decisions at any cutoff.
“Just make sure the entire list has a good blend, a good mix of the different people at a different position, different location, in the ranking,” he said.
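One simple way to get that kind of blend (a sketch of the general idea, not necessarily the researchers' algorithm) is to re-rank by alternating between each group's own best-scored members:

```python
from itertools import zip_longest

# Sketch of interleaved re-ranking (illustrative only): keep each group's
# internal order by score, but alternate groups down the final list so no
# cutoff point is dominated by a single group.
ranked_by_group = {
    "male":   ["M1", "M2", "M3"],   # already sorted by score within group
    "female": ["F1", "F2", "F3"],
}

blended = []
for position in zip_longest(*ranked_by_group.values()):
    blended.extend(p for p in position if p is not None)

print(blended)  # ['M1', 'F1', 'M2', 'F2', 'M3', 'F3']
```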
He said the process of training the model consists of making small adjustments. However, pushing the model past a certain point of fairness can become detrimental.
“If you really want to make sure it’s fair, then the product may be sent to people who are not interested in the discount,” Lin said.
He said their work with the National Science Foundation ranged from discounts to insurance — primarily health insurance for the elderly.
Yang, the associate professor at Texas A&M, said his work with Lin also explored other ways to categorize individuals in the data set. He said that he and Lin, as researchers, must learn alongside the model to produce the desired results.
“This algorithm is like an improvement of the existing algorithm,” Yang said. “So in this learning process, we not only account for the accuracy of the model, but at the same time, we pay attention to the fairness of the model.”
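One standard way to pay attention to both objectives during training (a hedged sketch of the general technique, with synthetic data, not Yang and Lin's published algorithm) is to add a fairness penalty, such as the gap between groups' average predictions, to the usual accuracy loss:

```python
import numpy as np

# Sketch of fairness-regularized training (illustrative, synthetic data):
# minimize logistic loss plus a penalty on the squared gap between the two
# groups' average predicted scores (a demographic-parity-style term).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # synthetic features
group = rng.integers(0, 2, size=200)   # synthetic group labels (0 or 1)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)
lam, lr = 1.0, 0.1                      # fairness weight, learning rate

for _ in range(500):
    p = sigmoid(X @ w)
    # Gradient of the logistic loss (the accuracy term).
    grad_acc = X.T @ (p - y) / len(y)
    # Gradient of the squared gap between group mean predictions (fairness term).
    gap = p[group == 0].mean() - p[group == 1].mean()
    d_p = p * (1 - p)                   # derivative of the sigmoid
    d_gap = (X[group == 0] * d_p[group == 0, None]).mean(axis=0) \
          - (X[group == 1] * d_p[group == 1, None]).mean(axis=0)
    w -= lr * (grad_acc + lam * 2 * gap * d_gap)

p = sigmoid(X @ w)
print("mean prediction gap between groups:",
      round(abs(p[group == 0].mean() - p[group == 1].mean()), 3))
```

Raising the fairness weight shrinks the gap between groups but, as Lin noted above, can come at the cost of accuracy.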
He also said that in some cases, models need to be evaluated case by case, because fairness in the numbers is not always foolproof. Yang gave the example of a natural disaster and its effects on wealthy and impoverished areas.
He said a model may flag higher-income areas as needing more government relief because, with higher property values, those areas technically lost more in dollar terms.
“The resource can be allocated to different groups of people, not just the people who have high damage because their household is high value,” he said. “But also consider people in the poor area.”
The research Lin and Yang are doing is part of a much larger world of AI applications, a world that, unknown to most, is growing by the day.
Tippie College of Business Assistant Professor and Researcher Thiago Serra said the rise in AI applications is largely opportunistic.
“There is a lot of data,” he said. “This is coming from computers becoming as available as they are in the past 20 to 30 years.”
He believes temperance and care are needed to ensure these models function properly and continue to be used.
“If we understand how can we use the data in a meaningful way, and if we find out that people with certain traits are more likely to commit certain crimes, it doesn’t mean that they will,” he said.
Additionally, he said that research and maintenance are important to keep AI models relevant and useful.
“We [the world] had two moments called AI winters where we had ideas about what we could do with AI that didn’t materialize, so there was no more funding for AI,” Serra said. “If we place our expectations too far ahead of what could be done, suddenly businesses will back away, funding agencies will back away.”