Teaching AI What Is Fair
“What is fair?” Sounds like a rhetorical question. But for Michigan State University’s Pang-Ning Tan, it’s a question that demands an answer as artificial intelligence systems play an increasing role in deciding who gets proper healthcare, a bank loan, or a job.
With funding from Amazon and the National Science Foundation, Tan has been working for a year to teach artificial intelligence algorithms how to be fairer and recognize when they are unfair.
“We are trying to design AI systems that are not just for IT, but that also bring value and benefit to society. So I started to think about the areas that are really difficult for the field right now,” said Tan, a professor in the computer science and engineering department at MSU.
“Fairness is a really big deal, especially as we become more and more dependent on AI for everyday needs, like healthcare, but also for things that seem mundane, like filtering spam or placing articles in your news feed.”
As Tan noted, people already trust AI in a wide range of applications, and the consequences of unfair algorithms can be profound.
For example, studies have found that AI systems have made it harder for Black patients to access healthcare resources. And Amazon scrapped an AI recruiting tool that penalized female applicants in favor of male ones.
Tan’s research team is tackling such problems on several fronts. The Spartans are examining how people use data to train their algorithms. They are also exploring ways to give algorithms access to more diverse information when making decisions and recommendations. And their work with NSF and Amazon attempts to broaden how fairness has generally been defined for AI systems.
A conventional definition looks at fairness from an individual’s point of view; that is, whether a person would consider a particular outcome to be fair or unfair. It’s a sensible start, but it also opens the door to conflicting, even contradictory definitions, Tan said. What is fair for one person may be unfair for another.
Tan and his research team therefore borrow ideas from the social sciences to construct a definition that includes the perspectives of groups of people.
“We’re trying to educate AI about fairness, and to do that you have to tell it what’s fair. But how do you design a fairness measure that’s acceptable to everyone?” Tan said. “We are examining how a decision affects not only individuals, but also their communities and social circles.”
Consider this simple example: three friends with identical credit scores apply for loans of the same amount from the same bank. If the bank approves or denies all of them, the friends would perceive this as fairer than a case where only one of them is approved or denied, since the latter could suggest that the bank relied on external factors the friends might deem unfair.
Tan’s team is devising a way to essentially score or quantify the fairness of different outcomes so that AI algorithms can identify the fairest options.
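The article does not describe Tan’s actual scoring method, but the loan example suggests one simple intuition that can be sketched in code: within a social circle of similar applicants, outcomes that are consistent across the group feel fairer than outcomes that diverge. A minimal, hypothetical illustration (the function name and pairwise-agreement measure are assumptions, not the researchers’ method):

```python
def group_consistency_score(outcomes):
    """Fraction of applicant pairs in a social circle who received the
    same decision.

    `outcomes` maps each applicant to 1 (approved) or 0 (denied).
    Returns 1.0 when everyone gets the same decision, and lower
    values as decisions diverge within the group.
    """
    decisions = list(outcomes.values())
    n = len(decisions)
    if n < 2:
        return 1.0  # a lone applicant has no group to compare against
    # Count pairs of applicants whose decisions agree.
    agree = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if decisions[i] == decisions[j]
    )
    total_pairs = n * (n - 1) // 2
    return agree / total_pairs


# Three friends with identical credit scores:
print(group_consistency_score({"ann": 1, "bo": 1, "cy": 1}))  # 1.0: all approved
print(group_consistency_score({"ann": 1, "bo": 0, "cy": 0}))  # 0.33…: one differs
```

In this toy measure, uniform treatment of similar applicants scores highest, which matches the friends’ perception in the example; a real system would also need to weigh accuracy and individual-level fairness against such group-level consistency.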
Of course, the real world is far more complex than this example, and Tan is the first to admit that defining fairness for AI is easier said than done. But he has help, especially from the chair of his department at MSU, Abdol-Hossein Esfahanian.
Esfahanian is an expert in applied graph theory, a field that helps model connections and relationships. He also enjoys learning about related areas of computer science and is known to sit in on classes taught by his colleagues, as long as they are comfortable having him there.
“Our faculty is fantastic at imparting knowledge,” Esfahanian said. “I needed to learn more about data mining, so I took a course from Dr. Tan for a semester. From that point on, we started to communicate about research issues.”
Today, Esfahanian is a co-investigator on the NSF and Amazon grant.
“Algorithms are created by people and people usually have biases, so those biases creep in,” he said. “We want equity to be everywhere and we want to better understand how to measure it.”
The team is making progress on this front. Last November, they presented their work at an online meeting hosted by NSF and Amazon as well as at a virtual international conference hosted by the Institute of Electrical and Electronics Engineers.
Tan and Esfahanian both said the community – and funders – were excited about the Spartans’ progress. But the two researchers also admitted that they were just getting started.
“This is ongoing research. There are a lot of issues and challenges. How do you define fairness? How can you get people to trust these systems that we use every day?” Tan said. “Our job as researchers is to find solutions to these problems.”