Thesis defence: Yongchao Wu
Welcome to a thesis defence at DSV! In his PhD thesis, Yongchao Wu explores how pretrained language models and AI can be used in education.
Date: Monday 16 December 2024
Time: 09.00 – 12.00
Location: L30, DSV, Borgarfjordsgatan 12, Kista

On December 16, 2024, Yongchao Wu will present his PhD thesis at the Department of Computer and Systems Sciences (DSV), Stockholm University. The title of the thesis is “Exploring the Educational Utility of Pretrained Language Models”.
PhD student: Yongchao Wu, DSV
Opponent: Filip Ginter, University of Turku, Finland
Main supervisor: Aron Henriksson, DSV
Supervisors: Jalal Nouri and Martin Duneld, DSV
Download the PhD thesis from DiVA
The defence takes place at DSV in Kista, starting at 09:00.
Find your way to DSV
Abstract
The emergence of pretrained language models has profoundly reshaped natural language processing, serving as foundation models for a wide range of tasks. Over the past decade, pretrained language models have evolved significantly, leading to the development of different types of models and approaches for utilising them. This progression spans from static to contextual models and from smaller models to more powerful, generative large language models. The increasing capabilities of these models have, in turn, led to growing interest in new use cases and applications across various domains. Education is one such domain: digitalisation has created opportunities for AI applications that leverage pretrained language models, particularly given the abundance of text data in educational contexts.
This thesis explores the educational utility of pretrained language models, specifically by investigating how different paradigms of these models can be applied to address tasks in education. These paradigms include various methodologies for leveraging the knowledge embedded in pretrained language models, such as embeddings, fine-tuning, prompt-based learning, and in-context learning.
For collaborative learning group formation, a clustering approach based on pretrained embeddings is proposed, enabling the creation of either homogeneous or heterogeneous groups depending on the specific learning situation. For automated essay scoring, a pretrained language model is fine-tuned using both the essay instructions and the essay text as input; the proposed method also highlights key topical sentences that contribute to the predicted essay score. For educational question generation, a method based on prompt-based learning is introduced and shown to be more data-efficient than existing methods. Finally, for educational question answering, certain limitations of the in-context learning (or prompting) paradigm, such as a tendency of large language models to hallucinate or miscalculate, are addressed.
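The group-formation idea above can be illustrated with a small sketch. This is not the thesis's actual pipeline: random vectors stand in for real pretrained embeddings of student texts, and the cluster count `k` is a hypothetical choice. It only shows how clustering embeddings supports both homogeneous groups (members drawn from the same cluster) and heterogeneous groups (one member from each cluster).

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-ins for pretrained embeddings of 12 students' texts.
# A real system would obtain these from a pretrained language model.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(12, 8))  # 12 students, 8-dimensional vectors

k = 3  # number of clusters (hypothetical choice)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)

# Homogeneous groups: students who fall in the same cluster form a group.
homogeneous = {c: np.flatnonzero(labels == c).tolist() for c in range(k)}

# Heterogeneous groups: take one student from each cluster per group
# (zip stops at the smallest cluster, so some students may be left over).
heterogeneous = [list(g) for g in zip(*(np.flatnonzero(labels == c) for c in range(k)))]
```

Which variant is appropriate depends on the learning situation, e.g. homogeneous groups for level-matched exercises versus heterogeneous groups for peer tutoring.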
Specifically, workflows and prompting strategies based on retrieval-augmented generation and tool-augmented generation are proposed, allowing large language models to ground answers in specific learning materials and to leverage external tools, such as calculators and knowledge bases, within chain-of-thought reasoning processes. These strategies are shown to produce more reliable and transparent answers to complex questions.
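The retrieval-augmented workflow can be sketched in miniature. The example below is an illustration under strong simplifying assumptions, not the thesis's implementation: a bag-of-words vector replaces a pretrained embedding model, the two course-material snippets are invented, and no large language model is actually called; the point is how retrieved material is assembled into a prompt that grounds the answer in specific learning materials.

```python
from collections import Counter
import math

def bow_vector(text):
    """Toy bag-of-words 'embedding'; a real system would use a pretrained model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical course materials serving as the retrieval corpus.
materials = [
    "Photosynthesis converts light energy into chemical energy.",
    "Newton's second law relates force, mass and acceleration.",
]

def retrieve(question, corpus, k=1):
    """Return the k corpus passages most similar to the question."""
    q = bow_vector(question)
    ranked = sorted(corpus, key=lambda d: cosine(q, bow_vector(d)), reverse=True)
    return ranked[:k]

def build_prompt(question, corpus):
    """Assemble a retrieval-augmented prompt that grounds the answer."""
    context = "\n".join(retrieve(question, corpus))
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

prompt = build_prompt("What does photosynthesis convert?", materials)
```

In a full system, the resulting prompt would be sent to a large language model, and tool calls (e.g. to a calculator or knowledge base) would be interleaved with its chain-of-thought steps in the same spirit.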
Through five empirical studies, methodological innovations within each paradigm of pretrained language models are proposed and evaluated for specific educational use cases. In addition to contributing methodologically to natural language processing, the results demonstrate the potential utility of pretrained language models in educational AI applications, thereby advancing the field of technology-enhanced learning. The proposed methods not only improve predictive performance on specific tasks but also aim to enhance the transparency of pretrained language models, which is essential for building reliable and trustworthy educational AI applications.
Keywords
Natural Language Processing, Technology Enhanced Learning, Pretrained Language Models, Large Language Models, Generative AI, Collaborative Learning, Automated Essay Scoring, Educational Question Generation, Educational Question Answering
Last updated: 2024-11-21
Source: Department of Computer and Systems Sciences, DSV