Natural Language Processing Research Group
The Natural Language Processing Research Group develops, applies and evaluates NLP methods, in particular involving large language models, across various domains. We focus on topics such as privacy, explainability, and domain adaptation.
Our research concerns methods for processing, modeling, and analyzing text, including large language models (LLMs). We are motivated by real-world applications of Natural Language Processing (NLP) in domains such as healthcare, education, and security.
We have built up extensive expertise in clinical NLP for analyzing healthcare data and, to that end, maintain a research infrastructure called Health Bank. Clinical NLP methods enable automatic large-scale analysis of healthcare data and are valuable for improving healthcare, for example by building clinical prediction models that incorporate information from clinical notes. We explore how LLMs can be used in healthcare and apply domain adaptation to create clinical language models, especially using privacy-preserving NLP methods such as de-identification and synthetic training data.
Another application area is education, where we are interested in using pre-trained language models for various educational use cases – automated essay scoring, question answering and generation, and educational content recommendation – to enhance teaching and learning. To enable the development of intelligent and adaptive learning systems, we explore techniques such as retrieval-augmented generation (RAG) and tool-augmented generation (TAG).
Security is another application area, where our focus is on the detection and analysis of online harms such as hate speech, threats, and violent extremist content. We are also interested in threat assessment of written communication, including determining the seriousness of threats. We host the European Online Hate Lab, a hub for researchers and organisations that detect and analyse online hate.
Furthermore, explainability is critical for the development of trustworthy AI; we therefore develop methods for explainable NLP, especially in relation to LLMs. We focus primarily, but not exclusively, on NLP methods for the Swedish language.
Group members
Group managers
Hercules Dalianis
Professor
Aron Henriksson
Associate professor
Members
Tony Lindgren
Unit head SAS
Martin Duneld
Senior lecturer
Eriks Sneiders
Senior lecturer
Lisa Kaati
Senior lecturer
Amin Jalali
Associate professor
Workneh Yilma Ayele
Teaching assistant
Andrea Andrenucci
Study counsellor, Director of studies at undergraduate and graduate levels
Eric Svee
Senior lecturer
Xiu Li
Teaching assistant
Yongchao Wu
PhD student
Thomas Vakili
PhD student
Korbinian Robert Randl
PhD student
Lukas Lundmark
PhD student
Martin Hansson
Teaching assistant
Ioannis Pavlopoulos
Affiliated researcher