How can ChatGPT impact higher education? That was one of the questions discussed during the event with the Centre for the Advancement of University Teaching (CeUL). Photo: Iryna Khabliuk/Mostphotos


The event attracted many participants, with more than 100 people attending on-site and via Zoom. 
Cormac McGrath, senior lecturer and associate professor at the Department of Education and active at CeUL, opened the day with some reflections on how phenomena such as OpenAI and ChatGPT can affect higher education practices, focusing not only on the new technologies themselves but also on the relationship between teachers and students.

“I have already noticed that they affect the way I read students’ texts. I suddenly feel doubtful: Can I trust that this text is really written by a student, or is it perhaps written by a chatbot?”

Cormac McGrath, senior lecturer and associate professor at the Department of Education. Photo: Jonas Collin/Stockholm University

Cormac McGrath also noted that new technology is always surrounded by hype, and that he recognizes the current discussion and the fear that new technological tools may replace teachers.

“About ten years ago, so-called MOOCs, massive open online courses, were popular. When they appeared, many worried that they would mean the end of universities and the end of teachers, but we have not been replaced yet!”

Even as the hype around OpenAI and ChatGPT starts to subside, he said, it is important to discuss what is driving the development of the new technology in an educational context and how it affects both teachers and students.
 
Teresa Cerratto-Pargman, professor at the Department of Computer and Systems Sciences, spoke in her presentation about the implications of AI in higher education, highlighting in particular the social dimensions that the new technology mediates, such as what happens to trust in students and to student learning when AI delivers ready-made answers.

“In many ways, technology can help and support students, but technology, especially ChatGPT, is not interested in students’ learning; it is based on language models that deal with language form, not language meaning. These models simultaneously shape and challenge the relationships between teachers and students that are central to learning and teaching.”

“What do we want these relationships to look like in the future of higher education? How should we work with this? It is important to discuss”, she said.
 
Robert Östling, associate professor at the Department of Linguistics, conducts research on, among other things, machine translation and computer-assisted language learning. He gave a historical overview of the technological development in the field over recent years. A number of interacting factors have enabled AI systems to grow a thousandfold in size in just a few years.
 
“You can say that the technology basically works like the autocorrect feature when we text on our phones, but that it has gradually become more advanced. The technology has been scaled up so that it can summarize texts, draw some logical conclusions and even do something as human as explain jokes.”
 

During the panel discussion, researchers Alexandra Farazouli and Teresa Cerratto-Pargman discussed various aspects of AI chatbots with moderator Andreas Jemstedt. Photo: Jonas Collin/Stockholm University


Alexandra Farazouli, PhD student at the Department of Education, spoke about an ongoing research project investigating how AI chatbots affect teachers’ assessment of texts in higher education. In one study, a number of teachers at the Department of Education, the Department of Philosophy, the Department of Law and the Department of Sociology were asked to read six different texts.

“Three texts were written by students and three by chatbots. The teachers were asked to grade them.”

The test showed that most chatbot texts were given a passing grade. What made the teachers suspicious of some texts, however, was that the answers were sometimes irrelevant or too inventive, that the quality of the texts was low, and that the use of references was often incorrect.

“Sometimes the texts contained repetitions, and the language and reasoning were of low quality. A style with many lists and enumerations is also revealing, as it is typical of a chatbot, as are irrelevant references.”

An interesting effect was that the teachers also examined the “real” student answers more critically.

“Could it be that the new technical tools lead to increased suspicion between teachers and students? It is interesting to investigate this relationship between teachers and students”, said Alexandra Farazouli.
 
After the researchers’ presentations and a panel discussion led by Andreas Jemstedt from CeUL, a workshop followed in which the participants had the opportunity to deepen their understanding of what AI chatbots are. Here, they were given space to consider how teachers can use chatbots in their teaching, and in which situations they might prefer not to. What are the risks and benefits of chatbots in higher education, and how can teachers create strategies for the future? Two concrete questions that the participants discussed were: How can teachers act if they suspect that AI has been used in an examination task, and in what way can Stockholm University’s guidelines offer support?
 

More information

Presentations from the event as PowerPoint slides and video recordings.

Guidelines on using AI-powered chatbots in education and research.

Upcoming workshop on AI in education:

Understanding the Impact of AI in University Education
Time: 12 May, 9:00–12:00
Venue: Online via Zoom