Stockholms universitet

Clàudia Figueras Julián
Teaching assistant

About me

I am a doctoral student at the Department of Computer and Systems Sciences (DSV) at Stockholm University. My research is situated within human-computer interaction (HCI) and computer-supported cooperative work (CSCW), with a focus on the ethical and social dimensions of AI in the public sector. I investigate how AI systems are designed and developed in practice, and in particular how practitioners interpret and enact ethical values in their everyday work. By foregrounding situated perspectives, my research sheds light on the challenges and negotiations involved in making AI systems responsible and trustworthy.

Before starting my doctoral studies, I earned a master's degree in health informatics from Karolinska Institutet and Stockholm University, and a bachelor's degree in human biology from Universitat Pompeu Fabra. I have also worked as a data scientist and data analyst in both the public and private sectors.

Teaching

Teaching experience:

  • Teaching assistant for "Ethics in Computer Science Research" (autumn 2023).
  • Teaching assistant for "Empirical Research Methodology for Computer Science" (autumn 2020–2024).
  • Teaching assistant for "Scientific Methodology and Communication in Computer Science" (autumn 2024).
  • Teaching assistant for "Scientific Communication and Research Methodology" (autumn 2023 & autumn 2024).
  • Supervision of bachelor's and master's theses (2022–2025).

Research projects

Publications

A selection from Stockholm University's publication database

  • Exploring tensions in Responsible AI in practice. An Interview Study on AI practices in and for Swedish Public Organizations

    2022. Clàudia Figueras Julián, Harko Henricus Verhagen, Teresa Cerratto-Pargman. Scandinavian Journal of Information Systems 34 (2), 199-232

    Article

    The increasing use of Artificial Intelligence (AI) systems has sparked discussions regarding developing ethically responsible technology. Consequently, various organizations have released high-level AI ethics frameworks to assist in AI design. However, we still know too little about how AI ethics principles are perceived and work in practice, especially in public organizations. This study examines how AI practitioners perceive ethical issues in their work concerning AI design and how they interpret and put them into practice. We conducted an empirical study consisting of semi-structured qualitative interviews with AI practitioners working in or for public organizations. Taking the lens provided by the “In-Action Ethics” framework and previous studies on ethical tensions, we analyzed practitioners’ interpretations of AI ethics principles and their application in practice. We found tensions between practitioners’ interpretation of ethical principles in their work and ‘ethos tensions.’ In this vein, we argue that understanding the different tensions that can occur in practice and how they are tackled is key to studying ethics in practice. Understanding how AI practitioners perceive and apply ethical principles is necessary for practical ethics to contribute toward an empirically grounded, Responsible AI.

  • Trustworthy AI for the People?

    2021. Clàudia Figueras Julián, Harko Henricus Verhagen, Teresa Cerratto-Pargman. AIES '21, 269-270

    Conference

    While AI systems become more pervasive, their social impact is increasingly hard to measure. To help mitigate possible risks and guide practitioners toward more responsible design, diverse organizations have released AI ethics frameworks. However, it remains unclear how ethical issues are dealt with in the everyday practices of AI developers. To this end, we have carried out an exploratory empirical study interviewing AI developers working for Swedish public organizations to understand how ethics are enacted in practice. Our analysis found that several AI ethics issues are not consistently tackled, and AI systems are not fully recognized as part of a broader sociotechnical system.

  • Promises and breakages of automated grading systems: a qualitative study in computer science education

    2025. Clàudia Figueras Julián (et al.). Education Inquiry, 1-22

    Article

    Automated grading systems (AGSs) have gained attention for their potential to streamline assessment in higher education. However, their integration into university assessment practice poses challenges, particularly for teachers in computer science seeking to balance their workload while ensuring an adequate and fair assessment of students’ programming skills and knowledge. The present study focuses on individuals with expertise in developing, using, and researching AGSs in higher education, whom we refer to as “AGS experts”. Through semi-structured interviews, we examine how the AGSs they engage with impact their work and assessment practices in computer science education. Drawing on the concept of breakages, we argue that while AGS experts invest time and effort in developing these systems, enticed by the promises of more efficient workload management and improved assessment practices, the actual use may introduce tensions leading to breakages disrupting assessment practices. Our findings illustrate the complexities and the potential impact the deployment of AGS brings to assessment practices within a public university setting and discuss the implications for future research. 

  • Doing Responsibilities with Automated Grading Systems: An Empirical Multi-Stakeholder Exploration

    2024. Clàudia Figueras Julián, Chiara Rossitto, Teresa Cerratto-Pargman. NordiCHI '24

    Conference

    Automated Grading Systems (AGSs) are increasingly used in higher education assessment practices, raising issues about the responsibilities of the various stakeholders involved both in their design and use. This study explores how teachers, students, exam administrators, and developers of AGSs perceive and enact responsibilities around such systems. Drawing on a focus group and interview data, we applied Fuchsberger and Frauenberger’s [27] notion of Doing Responsibilities as an analytical lens. This notion, framing responsibility as shared among human and nonhuman actors (e.g., technologies and data), has guided our analysis of how responsibilities are continuously configured and enacted in university assessment practices. The findings illustrate the stakeholders’ perceived and enacted responsibilities at different phases, contributing to the HCI literature on Responsible AI and AGSs by presenting a practical application of the ‘Doing Responsibilities’ framework before, during and after design. We discuss how the findings enrich this notion, emphasising the importance of engaging with nonhumans, considering regulatory aspects of responsibility, and addressing relational tensions within automation.

  • Who Should Act? Distancing and Vulnerability in Technology Practitioners' Accounts of Ethical Responsibility

    2024. Kristina Popova (et al.). Proceedings of the ACM on Human-Computer Interaction (PACMHCI) 8 (CSCW1)

    Article

    Attending to emotion can shed light on why recognizing an ethical issue and taking responsibility for it can be so demanding. To examine emotions related to taking or not taking responsibility for ethical action, we conducted a semi-structured interview study with 23 individuals working in interaction design and developing AI systems in Scandinavian countries. Through a thematic analysis of how participants attribute ethical responsibility, we identify three ethical stances, that is, discursive approaches to answering the question 'who should act': an individualized I-stance ("the responsibility is mine"), a collective we-stance ("the responsibility is ours"), and a distanced they-stance ("the responsibility is someone else's"). Further, we introduce the concepts of distancing and vulnerability to analyze the emotion work that these three ethical stances place on technology practitioners in situations of low- and high-scale technology development, where they have more or less control over the outcomes of their work. We show how the we- and they-stances let technology practitioners distance themselves from the results of their activity, while the I-stance makes them more vulnerable to emotional and material risks. By illustrating the emotional dimensions involved in recognizing ethical issues and embracing responsibility, our study contributes to the field of Ethics in Practice. We argue that emotions play a pivotal role in technology practitioners' decision-making process, influencing their choices to either take action or refrain from doing so.


