Transparent AI – towards just decisions

Artificial intelligence holds great promise, especially in healthcare, education, and law enforcement. But it can also reinforce societal biases if not carefully managed. How can AI be made fair and transparent so that it can serve society in a responsible way?

Panagiotis Papapetrou, a researcher at the Department of Computer and Systems Sciences at Stockholm University, is examining these questions together with his team in a new research project, “AI for society: Towards socially just algorithmic decision making”.

“One could see AI as a little baby that is slowly growing up, trying to make associations and learning by observing its environment. The way we can make AI systems fair is by giving them a set of rules and a set of constraints that map or reflect what fairness means in the current system or society,” says Panagiotis Papapetrou.

One of the researchers’ goals is to design AI systems that embed fairness from the start, a “bias-free AI”. The next step is to convince experts, for example in health care and in education, to adopt these bias-protected AI tools.


A well-known problem with AI is that it builds its knowledge on historical data that often contains social biases; as a result, existing inequities are likely to be maintained or even reinforced.

“So let's say I want to hire a new employee in my organization. I should make sure that I don't discriminate on the basis of gender. If a majority of previously hired employees belong to a particular gender, then the AI will tend to learn this bias, because it will treat it as a hiring criterion and integrate it into its hiring process.”
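To make this concrete, here is a minimal, hypothetical sketch, with invented numbers rather than data or code from the project, of how such a bias can be measured: the selection rate per gender group in historical hiring records, the quantity behind the “demographic parity” fairness criterion. A model trained to imitate such a history will tend to reproduce the gap.

```python
# Hypothetical historical hiring records, invented for illustration:
# each record is (gender, hired), where hired is 1 or 0.
from collections import defaultdict

history = [
    ("M", 1), ("M", 1), ("M", 0), ("M", 1), ("M", 1),
    ("F", 0), ("F", 1), ("F", 0), ("F", 0), ("F", 0),
]

# Count hires and totals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for gender, hired in history:
    counts[gender][0] += hired
    counts[gender][1] += 1

# Selection rate per group; demographic parity asks these to be (near) equal.
rates = {g: hires / total for g, (hires, total) in counts.items()}
print(rates)  # {'M': 0.8, 'F': 0.2}
print("parity gap:", round(abs(rates["M"] - rates["F"]), 2))  # 0.6
```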

AI-based grading might unfairly disadvantage certain student groups, and health care algorithms could overlook the needs of marginalized communities.

“You can think of these deep learning algorithms as human brains. They consist of neurons that are fully or partly connected with each other. So it is really impossible to understand exactly how they work and what they learn.”

Given a certain input, they can produce the correct output. Given a patient's profile in the hospital, they can tell whether that patient should receive treatment A or treatment B.

“But we cannot really understand, or see inside, how they make that decision. We cannot understand their logic.”

To address these issues, Papapetrou’s team is developing methods that help us understand the roots of bias in AI. The first step is to make AI transparent.

“We want them to reveal to us what rules they are using to make their decisions. We also want to make sure that they do not learn the biases that are in the data, by giving them new rules, such as: your decision on whether or not someone gets a loan should not be affected by a certain list of attributes,” says Panagiotis Papapetrou.
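As an illustration of such a rule, here is a small, hypothetical Python sketch, not the project's actual method, and with invented attribute names: the loan model is simply never shown the attributes on a protected list, so its learned decision rule cannot condition on them directly.

```python
# Hypothetical constraint: strip protected attributes from a loan
# application before any model sees it. All names are invented.
PROTECTED = {"gender", "age", "ethnicity"}

def strip_protected(application: dict) -> dict:
    """Return a copy of the application without protected attributes."""
    return {k: v for k, v in application.items() if k not in PROTECTED}

application = {
    "income": 42_000,
    "existing_debt": 5_000,
    "gender": "F",
    "age": 29,
}

print(strip_protected(application))
# {'income': 42000, 'existing_debt': 5000}
```

Hiding protected attributes is rarely enough on its own, since other attributes can act as proxies for them, which is why constraints like the ones described in the quote also have to address such indirect effects.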

Facts

The researchers’ methods are tested in real-world case studies, focusing on automated grading in education and diagnostic support in healthcare. These applications help assess AI fairness in diverse environments, from classrooms to clinics.

By focusing on interdisciplinary approaches, the project connects data science, ethics, law, and social sciences to create new frameworks for AI fairness.

Last updated: 2026-03-03

Source: Communications Office