AI is not always fair – we need to address its ethical dilemmas
Machines lack human intelligence, and relying on them for important decisions can therefore be dangerous. AI may discriminate, lie and make mistakes – without feeling guilty about it. Clàudia Figueras’s research on ethics and AI shows how we can approach the problem.

Hello Clàudia, please tell us about your work!
“Sure! My research looks at how people working in Swedish public organisations, like government agencies and universities, deal with ethical questions when they develop or use AI systems. AI is often presented as something neutral and efficient but, in reality, it always involves choices about ethical values. For example: Who gets access to a service? What data is used? How are mistakes handled? And how do we deal with uncertain situations where there isn’t a clear right answer? All these choices have ethical consequences.”
Can you give an example?
“Think about an AI system that is used to decide who should get financial support. This system might unintentionally disadvantage people with unusual life situations that don’t fit the data patterns. That’s not just a technical problem; it’s an ethical one, because it affects people’s rights and well-being.”
“For a real-world example of unethical consequences, look at the UK’s A-level grading algorithm scandal in 2020. Because exams were cancelled during the COVID-19 pandemic, regulators used an algorithm to assign grades. The system was meant to prevent grade inflation, but it ended up downgrading many high-performing students from lower-income schools, reinforcing inequality and sparking public outcry. This shows just how high the stakes are when AI ethics go wrong in the public sector.”
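To make the mechanism behind that scandal concrete, here is a simplified, hypothetical Python sketch of grade moderation against a school’s historical results. It is not the actual 2020 Ofqual model; the distribution, the student names and the moderate_grades function are invented for illustration.

```python
# Hypothetical illustration: grade moderation against a school's
# historical distribution (NOT the actual 2020 Ofqual algorithm).

def moderate_grades(teacher_ranking, historical_distribution):
    """Assign grades by rank so that this year's results mirror the
    school's historical grade distribution.

    teacher_ranking: list of student names, strongest first.
    historical_distribution: dict mapping grade -> number of students
    who typically achieve that grade at this school.
    """
    assigned = {}
    ranked = iter(teacher_ranking)
    # Hand out grades from the top down, but only as many of each
    # grade as the school has historically produced.
    for grade in ["A*", "A", "B", "C", "D", "E", "U"]:
        for _ in range(historical_distribution.get(grade, 0)):
            try:
                student = next(ranked)
            except StopIteration:
                return assigned
            assigned[student] = grade
    return assigned


# A school that has never produced an A* cannot award one this year,
# even to a student their teachers ranked as outstanding.
history = {"A*": 0, "A": 1, "B": 2, "C": 2}
ranking = ["Amira", "Ben", "Chen", "Dana", "Eli"]
print(moderate_grades(ranking, history))
# {'Amira': 'A', 'Ben': 'B', 'Chen': 'B', 'Dana': 'C', 'Eli': 'C'}
```

A rule like this looks neutral, but it hard-codes past outcomes: a strong student at a school that has never produced a top grade cannot receive one, no matter how good their own work is.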

What did your study of AI systems in government agencies show?
“I found that ethics is not something ‘added on’ at the end of AI development and deployment. It is part of the everyday work of AI practitioners, who constantly make both small and big ethical decisions. For instance, they spend time discussing how to balance efficiency with fairness, or how to make decisions transparent to the public. This ‘ethical labour’ is often invisible, but it is essential.”
“I also learned that ethical dilemmas are not just problems or obstacles to overcome; they can spark important reflection. For example, when a system promises efficiency but risks overlooking care and fairness, people have to stop and negotiate what really matters. Finally, responsibility is not a fixed role. It shifts between individuals, teams, and institutions, and is continuously ‘done’ in practice rather than simply assigned on paper.”
What are the implications of your research results?
“The main implication is that organisations cannot rely only on checklists or high-level ethical frameworks to ‘solve’ ethics. They need to create space and support for practitioners to reflect, question, and sometimes disagree. That means leadership that values ethics as much as technical performance, and also resources – time, training, forums – that make this possible. For society, accountability should not mean finding one person to blame when something goes wrong. It’s a shared process that continues throughout the lifetime of an AI system. This perspective can also inform more nuanced regulations and help build public trust in institutions that use AI.”
In your opinion – are organisations today just eager to “hop on the AI train”, or do you see a growing awareness?
“It’s a bit of both. There is clearly growing awareness of ethics, and many practitioners genuinely try to act responsibly. But there is also pressure to adopt AI quickly, sometimes without asking deeper questions. As one of my interviewees put it: ‘Should we even be doing this at all?’”
“In practice, organisations often focus on what is easiest to implement or measure, like efficiency. More complex issues, such as fairness or long-term consequences for citizens, can be sidelined. Responsibility is often pushed down to individual practitioners, who may lack the power or support to address bigger structural problems. So, while awareness has grown, the challenge is to turn it into everyday practices and cultures of responsibility that last beyond the hype.”
How did you become interested in this area?
“Before my PhD, I worked in data annotation for a company developing AI systems. My team was invisible compared to the engineers, which showed me how hidden but crucial this work is. One day, during testing, I realised the system didn’t work properly for people with darker skin. The model had only been trained on images of light-skinned people. That moment was a shock. It made me see AI not as a neutral tool but as something fragile and vulnerable, shaped by choices and biases. Discovering the work of researchers like Joy Buolamwini, Timnit Gebru, and Cathy O’Neil confirmed how urgent these issues were. When I later saw a PhD opening at DSV on AI ethics, I knew it was the right path. DSV, with its focus on the societal side of technology, was an excellent place for me to explore these questions.”
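The skin-tone failure she describes is the kind of problem that disaggregated evaluation, as popularised by Buolamwini and Gebru’s Gender Shades study, is designed to surface: accuracy is measured per group rather than only in aggregate. Below is a minimal, hypothetical Python sketch with invented data showing how a reassuring overall number can hide a group-level failure.

```python
# Hypothetical disaggregated evaluation: overall accuracy can hide
# poor performance on an underrepresented group.
from collections import defaultdict

# (true_label, predicted_label, group) for a toy test set.
results = [
    ("face", "face", "lighter"), ("face", "face", "lighter"),
    ("face", "face", "lighter"), ("face", "face", "lighter"),
    ("face", "face", "darker"),  ("face", "no_face", "darker"),
    ("face", "no_face", "darker"), ("face", "no_face", "darker"),
]

correct = defaultdict(int)
total = defaultdict(int)
for true, pred, group in results:
    total[group] += 1
    correct[group] += int(true == pred)

overall = sum(correct.values()) / len(results)
print(f"overall accuracy: {overall:.0%}")            # 62%
for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%}")
# lighter: 100%, darker: 25%
```

The overall score looks acceptable, while the group breakdown shows the system barely works for one group, which mirrors the situation she describes from testing.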
What happens next for you?
“Right now, I am focusing on the defence and wrapping up the PhD journey. Looking ahead, I want to keep working at the intersection of research and practice, helping ensure AI benefits society in fair and responsible ways. That could mean continuing in academia, or taking a role where I help organisations or policymakers put ethical principles into practice. For now, I’m open to both paths”, says Clàudia Figueras.
More about Clàudia’s research
Clàudia Figueras will defend her PhD thesis at the Department of Computer and Systems Sciences (DSV), Stockholm University, on October 9, 2025.
See the invitation to the defence
The title of the thesis is “Ethical Tensions in AI-Based Systems”.
The thesis can be downloaded from DiVA
Christopher Frauenberger, Interdisciplinary Transformation University, Austria, is the external reviewer at the defence.
The main supervisor for the thesis is Chiara Rossitto, DSV. Teresa Cerratto-Pargman, DSV, is the supervisor.
Contact Clàudia Figueras
Contact Chiara Rossitto
Contact Teresa Cerratto-Pargman
Text: Åse Karlén
Last updated: October 1, 2025
Source: Department of Computer and Systems Sciences, DSV