Research project
AI for society: Towards socially just algorithmic decision making
This project aims to make AI fair, transparent, and just, so that its benefits are shared across society. By addressing biases in areas such as healthcare and education, we’re working to build AI systems that serve everyone equitably and responsibly.

This project aims to make AI systems fair, transparent, and accountable in decisions affecting society. Artificial intelligence holds great promise, especially in healthcare, education, and law enforcement, but it can also reinforce societal biases if not carefully managed. For instance, AI-based grading might unfairly disadvantage certain student groups, or healthcare algorithms could overlook needs in marginalized communities. Our team tackles these issues by developing methods to ensure AI systems work fairly across social contexts and diverse populations.
Our research involves three key areas:
1. Designing AI systems that embed fairness from the start.
2. Making AI decisions transparent and explainable.
3. Testing these principles in real-world education and healthcare scenarios.
By understanding the roots of bias in AI, we can help AI systems offer equitable opportunities to everyone. This project combines expertise from technology, law, ethics, and social sciences to create AI that genuinely serves society’s best interests.
Project description
Artificial Intelligence (AI) is transforming fields like healthcare, education, and criminal justice. However, AI algorithms risk reinforcing social biases, particularly when data mirrors historical inequities. This project addresses the scientific and societal challenges of creating AI that’s equitable, transparent, and accountable. By focusing on fairness, explainability, and real-world application, we aim to bridge the gap between technical development and societal values.
The Challenge: Bias in AI Systems
AI systems, often trained on historical data, can inherit and amplify biases. In healthcare, for example, predictive models might overlook critical needs of underrepresented groups, while educational algorithms used for grading may favor students from privileged backgrounds. Biases in these systems can become entrenched, creating feedback loops where unfair outcomes reinforce future algorithmic decisions. This project seeks to identify and address these biases to make AI tools that are fair from the outset.
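To see how such a feedback loop entrenches bias, consider a minimal simulation (a sketch with synthetic numbers, not project data): two districts have identical true incident rates, but historical records over-represent one of them, and attention is allocated in proportion to those records.

```python
import numpy as np

# Two districts with the SAME true incident rate, but a historically
# skewed record count. Attention follows the records, and only
# attended cases generate new records, so the skew reproduces itself.
rng = np.random.default_rng(seed=0)

true_rate = 0.2                      # identical ground truth
records = {"A": 120.0, "B": 60.0}    # biased historical records
budget = 100                         # units of attention per round

for _ in range(20):
    total = records["A"] + records["B"]
    for d in records:
        attention = budget * records[d] / total          # biased allocation
        records[d] += rng.binomial(int(attention), true_rate)

share_a = records["A"] / (records["A"] + records["B"])
print(f"Share of records attributed to A after 20 rounds: {share_a:.2f}")
# Stays near the initial 2/3 even though the true rates are equal:
# the skewed history feeds the allocation, which feeds the history.
```

Even with identical underlying rates, the allocation never corrects itself, which is precisely why fairness has to be addressed in the design of the system rather than left to accumulate in its data.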
Objectives
1. Fair AI by Design
Our first objective is to design AI systems that embed fairness from the start by examining the social contexts of the data they use. We analyze the dynamics of bias, investigate AI’s role in perpetuating historical discrimination, and explore how to incorporate equity, justice, and solidarity into AI design.
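As a concrete illustration of a design-time fairness check, the sketch below computes the demographic parity difference, the gap in positive-decision rates between two groups. It is a minimal example with synthetic decisions and an illustrative tolerance, not the project’s published method.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-decision rates between two groups.

    y_pred: binary model decisions (0/1)
    group:  binary membership in a protected group (0/1)
    """
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Synthetic audit: decisions deliberately skewed against group 1.
rng = np.random.default_rng(seed=1)
group = rng.integers(0, 2, size=1000)
y_pred = rng.binomial(1, np.where(group == 0, 0.55, 0.40))

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.3f}")
if gap > 0.10:  # example tolerance; acceptable gaps are context-dependent
    print("Check failed: revisit features, labels, or decision threshold.")
```

Demographic parity is only one of several competing fairness criteria; part of the design work is deciding, together with domain experts, which criterion fits a given social context.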
2. Transparency and Accountability in AI
Transparency is critical for public trust and accountability. We develop techniques to make AI decisions interpretable, especially in high-impact areas like healthcare. For instance, using counterfactual reasoning, which asks what would have to change about an input for the decision to differ, we aim to ensure that AI systems can provide clear justifications for their outputs that can be examined and validated.
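The sketch below illustrates the counterfactual idea for a simple linear risk model (the weights and feature names are hypothetical): for a linear score, the smallest change that flips the decision has a closed form, and reporting that change yields a justification that can be inspected.

```python
import numpy as np

# Hypothetical linear risk model: score(x) = w @ x + b, decision = score > 0.
w = np.array([0.8, -0.5, 0.3])                               # illustrative weights
b = -0.2
feature_names = ["blood_pressure", "activity_level", "age"]  # illustrative

def counterfactual(x: np.ndarray, margin: float = 1e-3) -> np.ndarray:
    """Nearest input (in L2 distance) on the other side of the boundary.

    For a linear score the minimal change has the closed form
    delta = -(score + sign(score) * margin) / ||w||**2 * w.
    """
    score = w @ x + b
    delta = -(score + np.sign(score) * margin) / (w @ w) * w
    return x + delta

x = np.array([1.2, -0.4, 0.9])        # standardised features for one patient
x_cf = counterfactual(x)
print("original decision:      ", int(w @ x + b > 0))
print("counterfactual decision:", int(w @ x_cf + b > 0))
for name, old, new in zip(feature_names, x, x_cf):
    print(f"  {name}: {old:+.2f} -> {new:+.2f}")
```

The resulting statement, “had these features differed by this much, the decision would have been different”, is something a clinician or an auditor can examine and challenge, which is the accountability property this objective targets.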
3. Validation in Education and Healthcare
Our methods are tested in real-world case studies, focusing on automated grading in education and diagnostic support in healthcare. These applications help assess AI fairness in diverse environments, from classrooms to clinics.
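One concrete form such a case study can take is a group-wise error audit on held-out data. The sketch below (entirely synthetic data and group labels) compares false-negative rates of an automated grader, that is, how often deserving students in each group are wrongly down-graded.

```python
import numpy as np

# Synthetic audit of an automated grader that under-grades group 1.
rng = np.random.default_rng(seed=2)

n = 2000
group = rng.integers(0, 2, size=n)             # two student groups (illustrative)
y_true = rng.binomial(1, 0.6, size=n)          # 1 = deserves a pass
miss_rate = np.where(group == 0, 0.05, 0.15)   # simulated grader bias
flip = rng.binomial(1, miss_rate, size=n).astype(bool)
y_pred = np.where(flip & (y_true == 1), 0, y_true)  # some passes become fails

for g in (0, 1):
    deserving = (group == g) & (y_true == 1)
    fnr = 1.0 - y_pred[deserving].mean()       # false-negative rate per group
    print(f"group {g}: false-negative rate = {fnr:.3f}")
```

A persistent gap in these rates is exactly the kind of unfairness the case studies are designed to surface before a system reaches classrooms or clinics.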
Scientific Impact and Societal Relevance
By focusing on interdisciplinary approaches, the project connects data science, ethics, law, and social sciences to create actionable frameworks for AI fairness. This ensures that AI systems respect legal requirements and align with societal goals, supporting a future where technology is a tool for positive societal change.
Project members
Project managers
Panagiotis Papapetrou
Professor, Deputy Head of Department

Sindri Magnússon
Senior Lecturer, Associate Professor

Members
Teresa Cerratto-Pargman
Professor

Stanley Joel Greenstein
Senior Lecturer, Associate Professor
