When AI learns to talk - what it reveals about us
How does language emerge - and can a robot really understand what we say? We spoke to Anna Jon-And, a researcher at the Centre for Cultural Evolution at Stockholm University, who uses AI to understand how grammar can emerge spontaneously - and what it says about our own intelligence.
AI reveals how language is created
One of the great mysteries of linguistics and philosophy has been the question of how words get their meaning. In the 1990s, researcher Luc Steels at the VUB AI Lab in Brussels showed that self-organising AI systems can develop linguistic conventions through simple interactions. In the so-called ‘naming game’, digital agents or robots interacted by naming objects. One agent said a word, another pointed to an object, and over time stable linguistic conventions emerged. The experiment showed how language can self-organise in a population - an insight that had previously been difficult to demonstrate.
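The dynamics of the naming game are simple enough to reproduce in a few dozen lines. Below is a minimal sketch in Python - an illustration of the mechanism, not Steels' original code, with all names and parameters invented for the example. Each agent keeps an inventory of candidate names for a single object; on a successful interaction both parties discard everything except the winning word.

```python
import random

def naming_game(n_agents=20, max_rounds=20000, seed=1):
    """Minimal naming game for one object (illustrative sketch only)."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]  # each agent's candidate names
    next_word = 0

    for round_no in range(1, max_rounds + 1):
        speaker, hearer = rng.sample(range(n_agents), 2)

        if not inventories[speaker]:  # a speaker with no name invents one
            next_word += 1
            inventories[speaker].add(f"w{next_word}")

        word = rng.choice(sorted(inventories[speaker]))

        if word in inventories[hearer]:   # success: both collapse to this word
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:                             # failure: the hearer learns the word
            inventories[hearer].add(word)

        # converged when every agent holds exactly the same single word
        if all(len(inv) == 1 and inv == inventories[0] for inv in inventories):
            return round_no, next(iter(inventories[0]))

    return max_rounds, None

rounds, word = naming_game()
print(f"Converged on {word!r} after {rounds} interactions")
```

Run with different seeds, the population reliably settles on one shared word after an initial burst of competing inventions - the self-organisation Steels observed, reproduced without any central coordination.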

- In our current research, we build on these insights by also testing how grammar can emerge through simple learning mechanisms. We believe that grammar evolves to make language easier to learn because we humans have limits on how much information we can hold and process. We also see that if we create an AI with limited memory, it will spontaneously find grammatical patterns when fed text and rewarded for recognising sentences, says Anna Jon-And.
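The article does not spell out the researchers' model, so the following is only a toy illustration of the general idea: a learner with a fixed-capacity memory that merges frequently adjacent units into single chunks (in the spirit of byte-pair encoding) ends up storing recurring word combinations as reusable, grammar-like units. The corpus, thresholds, and function names are all invented for the example.

```python
from collections import Counter

def merge_pair(sent, a, b, merged):
    """Rewrite a sentence so adjacent (a, b) becomes the single chunk `merged`."""
    out, i = [], 0
    while i < len(sent):
        if i + 1 < len(sent) and sent[i] == a and sent[i + 1] == b:
            out.append(merged)
            i += 2
        else:
            out.append(sent[i])
            i += 1
    return out

def learn_chunks(sentences, capacity=12, min_count=3):
    """Toy chunk learner (not the researchers' actual model): with limited
    memory, frequently co-occurring units are merged into single chunks,
    so recurring patterns are recognised as one grammar-like unit."""
    corpus = [s.split() for s in sentences]
    chunks = set()
    while len(chunks) < capacity:
        pairs = Counter((a, b) for sent in corpus for a, b in zip(sent, sent[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < min_count:
            break  # nothing recurs often enough to be worth storing as a chunk
        merged = f"{a} {b}"
        chunks.add(merged)
        corpus = [merge_pair(sent, a, b, merged) for sent in corpus]
    return chunks

sentences = [
    "the dog chased the cat",
    "the cat saw the dog",
    "the dog saw the bird",
    "the bird chased the cat",
]
print(learn_chunks(sentences))  # e.g. {'the dog', 'the cat'}
```

Merging compresses what the learner has to hold in memory, and the chunks that survive are precisely the combinations that behave like small grammatical units.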
AI has also been used to reconstruct protolanguages through machine learning, giving researchers new tools to understand language evolution throughout history.

Can AI surprise scientists?
Many people were surprised when the first large language models, such as ChatGPT, were launched and displayed unexpectedly strong language abilities.
- It was fascinating to see how well they could generate language even though they don't seem to understand it in a human way, says Anna Jon-And.
Although large language models can generate text that appears intelligent, there are still clear limitations. A simple example is asking ChatGPT to create a sentence with exactly eleven words. Sometimes it succeeds, but often the sentence is too short or too long. This shows that large language models do not represent and reason about numerical concepts the way humans do, even though the task is trivial for us.
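The eleven-word test is easy to run yourself. A minimal check in Python might look like this (the candidate sentences are invented stand-ins for model outputs, not real ChatGPT responses):

```python
def has_exact_word_count(sentence, target=11):
    """True if the sentence contains exactly `target` whitespace-separated words."""
    return len(sentence.split()) == target

# Invented stand-ins for model outputs, not real ChatGPT responses
candidates = [
    "The quick brown fox jumps over the lazy dog near the river",  # 12 words
    "A small bird sang quietly in the old oak tree today",         # 11 words
]
for sentence in candidates:
    print(len(sentence.split()), has_exact_word_count(sentence), "-", sentence)
```

Counting words takes one line of code, which is what makes the failure telling: the model produces fluent sentences without maintaining an explicit running count.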
Misunderstandings that can have consequences
Large language models' lack of what we call understanding can have practical consequences, both in research and in everyday life:
- Large language models often seem to “understand” what you ask them, but this can be an illusion, says Anna Jon-And.
This becomes clear when large language models are given tasks that require precise planning or spatial reasoning. Studies have tested their ability to plan routes through a space: on the surface they seem up to the task, but as complexity increases, they fail. This reminds us that AI's ability to generate language does not mean it understands the world the way we do - an important insight both for AI researchers and for anyone using AI in everyday life.
Another common misconception is that AI must be able to think or have free will to pose a risk to humans:
- But AI can affect us in ways we have no control over, without thinking like we do. For example, my colleague Fredrik Jansson, who researches the societal consequences of AI at the Centre for Cultural Evolution, highlights how AI controls what information we see in everyday life - something that can shape our opinions without us even noticing, says Anna Jon-And.
What does this mean for the future?
AI will continue to play a central role in research on human cognition and language. At the same time, the research reminds us that AI's capabilities and limitations need to be understood in order to use the technology safely and effectively.
- AI offers us new ways to study language and intelligence, but we need to distinguish between different types of AI. Large language models can produce language that resembles human language, but the way they function differs from ours in many respects. When we instead want to understand how humans learn language, we build simpler AI models with only a few components. These models help us identify the basic mechanisms required for language learning - something that is impossible to discern in complex language models with millions of parameters. That is why our minimalist models are more useful for understanding human cognition and learning, says Anna Jon-And.
With AI as a tool, researchers today can explore old questions in new ways - and perhaps even discover entirely new ways of thinking about what intelligence really is.
Last updated: April 16, 2025
Source: The Faculty of Humanities