Clàudia Figueras Julián
Teaching Assistant
About me
I am a PhD candidate at the Department of Computer and Systems Sciences (DSV) at Stockholm University. My research is situated within Human-Computer Interaction (HCI) and Computer-Supported Cooperative Work (CSCW), with a focus on the ethical and social dimensions of AI in the public sector. I explore the design and development of AI systems in practice, examining how practitioners interpret and enact ethical values in their everyday work. By foregrounding situated perspectives, my work highlights the challenges and negotiations involved in making AI systems responsible and trustworthy.
Before starting my PhD, I completed an MSc in Health Informatics from Karolinska Institutet and Stockholm University, and a BSc in Human Biology at Universitat Pompeu Fabra. I have also worked as a data scientist and data analyst in both public and private sector organisations.
Teaching
Teaching Experience:
- Teaching Assistant for “Ethics in Computer and Systems Sciences Research” (HT 2023).
- Teaching Assistant for “Empirical Research Methodology for Computer and Systems Sciences” (HT 2020-2024).
- Teaching Assistant for “Scientific Methodology and Communication in Computer and Systems Science” (HT 2024).
- Teaching Assistant for “Scientific Communication and Research Methodology” (HT 2023 & HT 2024).
- Bachelor’s and Master’s Thesis Supervision (2022-2025).
Research projects
Publications
A selection from the Stockholm University publication database:
- Exploring tensions in Responsible AI in practice: An Interview Study on AI practices in and for Swedish Public Organizations
2022. Clàudia Figueras Julián, Harko Henricus Verhagen, Teresa Cerratto-Pargman. Scandinavian Journal of Information Systems 34 (2), 199-232
Article: The increasing use of Artificial Intelligence (AI) systems has sparked discussions regarding developing ethically responsible technology. Consequently, various organizations have released high-level AI ethics frameworks to assist in AI design. However, we still know too little about how AI ethics principles are perceived and work in practice, especially in public organizations. This study examines how AI practitioners perceive ethical issues in their work concerning AI design and how they interpret and put them into practice. We conducted an empirical study consisting of semi-structured qualitative interviews with AI practitioners working in or for public organizations. Taking the lens provided by the “In-Action Ethics” framework and previous studies on ethical tensions, we analyzed practitioners’ interpretations of AI ethics principles and their application in practice. We found tensions between practitioners’ interpretation of ethical principles in their work and ‘ethos tensions.’ In this vein, we argue that understanding the different tensions that can occur in practice and how they are tackled is key to studying ethics in practice. Understanding how AI practitioners perceive and apply ethical principles is necessary for practical ethics to contribute toward an empirically grounded, Responsible AI.
- Trustworthy AI for the People?
2021. Clàudia Figueras Julián, Harko Henricus Verhagen, Teresa Cerratto-Pargman. AIES '21, 269-270
Conference: While AI systems become more pervasive, their social impact is increasingly hard to measure. To help mitigate possible risks and guide practitioners into a more responsible design, diverse organizations have released AI ethics frameworks. However, it remains unclear how ethical issues are dealt with in the everyday practices of AI developers. To this end, we have carried out an exploratory empirical study interviewing AI developers working for Swedish public organizations to understand how ethics are enacted in practice. Our analysis found that several AI ethics issues are not consistently tackled, and AI systems are not fully recognized as part of a broader sociotechnical system.
- Promises and breakages of automated grading systems: a qualitative study in computer science education
2025. Clàudia Figueras Julián (et al.). Education Inquiry, 1-22
Article: Automated grading systems (AGSs) have gained attention for their potential to streamline assessment in higher education. However, their integration into university assessment practice poses challenges, particularly for teachers in computer science seeking to balance their workload while ensuring an adequate and fair assessment of students’ programming skills and knowledge. The present study focuses on individuals with expertise in developing, using, and researching AGSs in higher education, whom we refer to as “AGS experts”. Through semi-structured interviews, we examine how the AGSs they engage with impact their work and assessment practices in computer science education. Drawing on the concept of breakages, we argue that while AGS experts invest time and effort in developing these systems, enticed by the promises of more efficient workload management and improved assessment practices, the actual use may introduce tensions leading to breakages disrupting assessment practices. Our findings illustrate the complexities and the potential impact the deployment of AGSs brings to assessment practices within a public university setting, and we discuss the implications for future research.
- Doing Responsibilities with Automated Grading Systems: An Empirical Multi-Stakeholder Exploration
2024. Clàudia Figueras Julián, Chiara Rossitto, Teresa Cerratto-Pargman. NordiCHI '24
Conference: Automated Grading Systems (AGSs) are increasingly used in higher education assessment practices, raising issues about the responsibilities of the various stakeholders involved both in their design and use. This study explores how teachers, students, exam administrators, and developers of AGSs perceive and enact responsibilities around such systems. Drawing on focus group and interview data, we applied Fuchsberger and Frauenberger’s [27] notion of Doing Responsibilities as an analytical lens. This notion, framing responsibility as shared among human and nonhuman actors (e.g., technologies and data), has guided our analysis of how responsibilities are continuously configured and enacted in university assessment practices. The findings illustrate the stakeholders’ perceived and enacted responsibilities at different phases, contributing to the HCI literature on Responsible AI and AGSs by presenting a practical application of the ‘Doing Responsibilities’ framework before, during and after design. We discuss how the findings enrich this notion, emphasising the importance of engaging with nonhumans, considering regulatory aspects of responsibility, and addressing relational tensions within automation.
- Who Should Act? Distancing and Vulnerability in Technology Practitioners' Accounts of Ethical Responsibility
2024. Kristina Popova (et al.). Proceedings of the ACM on Human-Computer Interaction (PACMHCI) 8 (CSCW1)
Article: Attending to emotion can shed light on why recognizing an ethical issue and taking responsibility for it can be so demanding. To examine emotions related to taking or not taking responsibility for ethical action, we conducted a semi-structured interview study with 23 individuals working in interaction design and developing AI systems in Scandinavian countries. Through a thematic analysis of how participants attribute ethical responsibility, we identify three ethical stances, that is, discursive approaches to answering the question 'who should act': an individualized I-stance ("the responsibility is mine"), a collective we-stance ("the responsibility is ours"), and a distanced they-stance ("the responsibility is someone else's"). Further, we introduce the concepts of distancing and vulnerability to analyze the emotion work that these three ethical stances place on technology practitioners in situations of low- and high-scale technology development, where they have more or less control over the outcomes of their work. We show how the we- and they-stances let technology practitioners distance themselves from the results of their activity, while the I-stance makes them more vulnerable to emotional and material risks. By illustrating the emotional dimensions involved in recognizing ethical issues and embracing responsibility, our study contributes to the field of Ethics in Practice. We argue that emotions play a pivotal role in technology practitioners' decision-making process, influencing their choices to either take action or refrain from doing so.