Timing challenge for self-driving cars
How do we actually know who should go first in a roundabout? Often, it’s not about traffic signs or formal rules, but about eye contact, small movements, and timing. Humans handle this intuitively, but for self-driving cars it is a major challenge.
At the Department of Computer and Systems Sciences at Stockholm University, research has been underway for several years on how computers might understand human social interaction. One of the research projects, “AI in motion: Studying the social world of autonomous vehicles,” focuses on how self-driving cars can learn to interact with people in traffic.
To show our intentions in traffic, we sometimes drive slowly and sometimes quickly. This is a fundamental skill in driving a car. Computers, however, struggle to understand the kind of social interaction that takes place between people. So, in this research project, we have really focused on that particular problem. If we are going to have a self-driving system controlled by a computer, it has to be able to understand the social interactions on the road,
says Barry Brown, the project’s principal investigator.
Barry Brown is a professor of human–computer interaction, with a background in computer science and sociology. But his interest in self-driving cars started from a completely different angle.
I’m a really bad driver and I used to get lost all the time. So, during my time as a researcher at the University of California San Diego between 2007 and 2012, I became increasingly interested in the navigation device ‘TomTom,’ which was often found in cars at the time. I asked my students to drive around San Diego while navigating and filming themselves. That’s where my interest in driving and navigating began.
How do humans interpret machines?

Researcher Hannah Pelikan has filmed self-driving shuttle buses by riding them over a long period of time. Photo: Magnus Johansson, LiU
In the project, researchers from Stockholm University collaborate with researchers from Linköping University. Hannah Pelikan is one of the researchers involved. She is an Assistant Professor in Language and Culture, with a focus on human–robot interaction, and is particularly interested in how people interpret and understand machine behaviour in everyday environments. In the project, the method has been to film people as they encounter self-driving vehicles.
I’ve filmed self-driving shuttle buses by riding them over a long period of time, observing how they move, and filming and documenting the interactions from different camera angles,
Hannah explains.
The researchers can then return to these recordings and identify the moments when interaction does not run smoothly. They look for specific situations where something goes wrong. With the help of software, they analyse the material in detail and annotate what is actually happening. This enables them not only to see that something goes wrong, but also to understand why, for example, a robot bus halting suddenly is confusing for other road users.
People are often under time pressure to get somewhere, and so a machine suddenly stopping in front of them can be frustrating, even if this is the safest behaviour from a technical perspective. This ‘safe’ behaviour clashes with human drivers and pedestrians,
Hannah says.
Filming self-driving systems in the United States

According to researcher Barry Brown, human driving is shaped by split-second timing, where drivers read subtle cues from other vehicles to anticipate what will happen next—an ability that remains a major challenge for self-driving cars. Photo: Caroline Falkman/SU
Barry uses the same method and has been in the United States to film self-driving systems currently in operation.
If you’ve ever driven in the US, for example in San Francisco, you’ll know the city has lots of hills and lots of four-way intersections. In Europe we more often have roundabouts. At a four-way intersection you have to yield, and it becomes a kind of subtle cooperative dance where the question is who arrived at the intersection first.
But according to Barry, self-driving cars struggle with this. When they are supposed to yield, they drive up to the intersection, stop, and wait—and when they have to decide whether to go or stay, it takes a very long time. When the self-driving cars do start moving again, they sometimes do so at the same time as another car, which results in them stopping again.
For human drivers this can be confusing: What is the self-driving car doing now? Is it going or not?
Four-way intersections are a clear example, but the same problem arises in many other traffic situations where vehicles must yield. For humans, this usually happens smoothly, but for computers this type of interaction is very difficult to manage.
I think one of the challenges is timing. As humans, we want to get where we’re going smoothly, so we’re quite efficient and quick in how we act. We often anticipate other drivers’ movements—maybe by seeing that a car moves a little bit or slows down slightly—and interpret that as a signal of what it is going to do. Based on that, we make our own driving decisions. So, we react very quickly to what other cars do and try to predict their next move.
In human interaction, there is no “time-out mode”

Researcher Hannah Pelikan studies how people interact with self-driving vehicles by filming and documenting from multiple camera angles. Photo: Thor Balkhed, LiU
Hannah argues that a central challenge is being able to read what people are doing and adapt accordingly.
This is a fundamental question we are trying to highlight, especially within human–robot interaction: how should we actually think about interaction? In human interaction, there is no ‘time-out mode.’ Stopping is not always the safe option. Simply showing uncertainty by stopping can in itself create new problems. That’s why it is often more important to be able to see and interpret what people are doing. Are they perhaps making space for the robot or the car to go? If so, the system should be able to accelerate quickly and take that opportunity.
The insights Hannah and Barry capture by analysing the filmed material are then brought into interdisciplinary contexts—especially to robotics researchers—who can in turn translate these insights into design. If researchers with technical backgrounds can understand this and develop algorithms that can handle it, the knowledge can eventually be passed on to industry.
At the same time, Hannah emphasises that developing vehicles and robots is an enormous challenge that requires major efforts.
Moving among people requires an understanding of how people act. That’s why we are also trying, on a more conceptual level, to bring knowledge about human interaction into technical domains. It is a prerequisite for building technology that can coexist with people and move together with them.
Last updated: 2026-03-03
Source: Communications Office