Doctoral thesis defence: Razan Jaberibraheem


Date: Friday 12 January 2024

Time: 13.00–17.00

Location: Lilla Hörsalen, DSV, Borgarfjordsgatan 12, Kista, and via Zoom

Welcome to a doctoral defence at DSV! Razan Jaberibraheem presents her thesis on how social robots can become better at understanding human communication.

On 12 January 2024, Razan Jaberibraheem presents her doctoral thesis at the Department of Computer and Systems Sciences (DSV), Stockholm University. The title is ”Towards Designing Better Speech Agent Interaction: Using Eye Gaze for Interaction”.

Razan Jaberibraheem, doctoral student at the Department of Computer and Systems Sciences (DSV).
Razan Jaberibraheem at the nailing of her thesis at DSV. Photo: Donald McMillan.

The defence takes place on DSV's premises in Kista, starting at 13.00.
Find your way to DSV

You can also participate remotely:
The Zoom link is available here
Contact Karey Helms for the password.

The full thesis can be downloaded from Diva


Doctoral student: Razan Jaberibraheem, DSV
Opponent: Kerstin Fischer, University of Southern Denmark
Main supervisor: Barry Brown, DSV
Supervisor: Donald McMillan, DSV



Abstract (in English)

This research addresses the need to better understand interaction with conversational user interfaces (CUIs) and how human–technology ’conversations’ can be improved by drawing on lessons learned from human–human interaction. It focuses on incorporating abstractions of complex human behaviour, specifically gaze, to enhance conversational interactions with speech agents. Across four empirical studies, a mix of methods is used to examine closely the interaction between the user and the system.

I offer empirical and conceptual contributions for interaction designers and researchers. First, I present a novel speech interface, Tama, a gaze-aware speech agent designed to explore the use of gaze in conversational interactions with smart speakers. Second, I present the empirical contributions: studies that document the interactions with and around speech interfaces, including ongoing, non-system-directed speech. A moment-by-moment analysis of these interactions highlights the opportunities that gaze offers as a modality for enhancing interaction with a speech agent, as well as the problems and limitations that arise when such a modality is used. The third contribution is conceptual: a perspective on minimal anthropomorphic design. This produces interactions that are not human-like in behaviour but do take advantage of the skills used in human interaction as a key to advancing interactions with speech agents.

Based on my research and contributions, I reflect on advancing interactions with speech interfaces, focusing on what different technologies can offer and the possibility of taking the next step in designing CUIs. I then discuss the need to bridge different fields (i.e. conversation analysis (CA), human–computer interaction (HCI), and human–robot interaction (HRI)), combining their models and approaches to guide designers building speech systems. I see three competing yet complementary interaction paradigms across CUIs, which I call Direct Speech Interaction, Agent-Mediated Interaction, and Para-Speech Interaction. Each of these paradigms presents specific challenges and opportunities for interaction.