Stockholm University

Robert Mikael Östling
Docent

About me

Researcher in Computational Linguistics

My research falls into a few broad categories. The first is general applications of natural language processing (NLP), mostly from a multilingual perspective; examples include part-of-speech tagging, machine translation and word alignment. The second is computational typology, where I use these and other NLP tools to learn more about the overall structure of languages at a global scale. Finally, I use NLP tools to automate psycholinguistic investigations.

I am the principal investigator of the project "Structured multilinguality for natural language processing", funded by the Swedish Research Council, which aims to explore improved language representations for highly multilingual NLP systems. A link to a separate webpage about the project will be published here in due time.

Publications

A selection from the Stockholm University publication database

  • What Do Language Representations Really Represent?

    2019. Johannes Bjerva et al. Computational Linguistics 45 (2), 381–389

    Article

    A neural language model trained on a text corpus can be used to induce distributed representations of words, such that similar words end up with similar representations. If the corpus is multilingual, the same model can be used to learn distributed representations of languages, such that similar languages end up with similar representations. We show that this holds even when the multilingual corpus has been translated into English, by picking up the faint signal left by the source languages. However, just as it is a thorny problem to separate semantic from syntactic similarity in word representations, it is not obvious what type of similarity is captured by language representations. We investigate correlations and causal relationships between language representations learned from translations on one hand, and genetic, geographical, and several levels of structural similarity between languages on the other. Of these, structural similarity is found to correlate most strongly with language representation similarity, whereas genetic relationships (a convenient benchmark used for evaluation in previous work) appear to be a confounding factor. Apart from implications about translation effects, we see this more generally as a case where NLP and linguistic typology can interact and benefit one another. (An illustrative sketch of the basic correlation step follows the publication list below.)

  • Visual Iconicity Across Sign Languages

    2018. Robert Östling, Carl Börstell, Servane Courtaux. Frontiers in Psychology 9

    Article

    We use automatic processing of 120,000 sign videos in 31 different sign languages to show a cross-linguistic pattern for two types of iconic form–meaning relationships in the visual modality. First, we demonstrate that the degree of inherent plurality of concepts, based on individual ratings by non-signers, strongly correlates with the number of hands used in the sign forms encoding the same concepts across sign languages. Second, we show that certain concepts are iconically articulated around specific parts of the body, as predicted by the associational intuitions of non-signers. The implications of our results are both theoretical and methodological. Theoretically, we corroborate previous research by demonstrating and quantifying, using far more material than previously available, the iconic nature of languages in the visual modality. Methodologically, we show that automatic methods are, in fact, useful for performing large-scale analysis of sign language data to a high level of accuracy, as indicated by our manual error analysis.

  • Continuous multilinguality with language vectors

    2017. Robert Östling, Jörg Tiedemann. Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, 644–649

    Conference

    Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages. (An illustrative sketch of such a language-vector model follows the publication list below.)

  • The Helsinki Neural Machine Translation System

    2017. Robert Östling et al. Proceedings of the Conference on Machine Translation (WMT), 338–347

    Conference

    We introduce the Helsinki Neural Machine Translation system (HNMT) and describe how it was applied in the news translation task at WMT 2017, where it ranked first in both the human and automatic evaluations for English–Finnish. We discuss the success of the English–Finnish translations and the overall advantage of NMT over a strong SMT baseline. We also discuss our submissions for English–Latvian, English–Chinese and Chinese–English.

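As a concrete illustration of the language-vector idea in "Continuous multilinguality with language vectors": the sketch below shows one way a character-level LSTM language model can be conditioned on a trainable per-language embedding, so that each language becomes a point in a continuous vector space. This is a minimal PyTorch sketch of the general technique, not the paper's published code; all class names, parameter names and dimensions here are illustrative assumptions.

import torch
import torch.nn as nn

class CharLMWithLanguageVectors(nn.Module):  # hypothetical name
    def __init__(self, n_chars, n_languages,
                 char_dim=128, lang_dim=64, hidden_dim=512):  # sizes are assumptions
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # One trainable vector per language: languages whose text is
        # predicted well under similar conditioning should end up with
        # similar vectors.
        self.lang_emb = nn.Embedding(n_languages, lang_dim)
        self.lstm = nn.LSTM(char_dim + lang_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_chars)

    def forward(self, chars, lang_ids):
        # chars: (batch, seq_len) character ids; lang_ids: (batch,) language ids
        c = self.char_emb(chars)
        l = self.lang_emb(lang_ids).unsqueeze(1).expand(-1, chars.size(1), -1)
        h, _ = self.lstm(torch.cat([c, l], dim=-1))
        return self.out(h)  # logits over the next character

After training on text from many languages, the rows of lang_emb.weight play the role of the continuous language vectors; as the abstract notes, such vectors capture genetic relationships between languages and support inference about varieties not seen during training.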
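Similarly, for "What Do Language Representations Really Represent?", the basic correlation step can be sketched as follows: compare the pairwise-similarity structure of learned language vectors with that of an external structural description of the languages. The function names, and the assumption that structural similarity is given as per-language feature vectors, are mine; the published analysis also investigates causal relationships and confounds, which this sketch does not attempt.

import numpy as np
from scipy.stats import spearmanr

def pairwise_cosine(m):
    # Cosine similarity between all rows of m (one row per language).
    normed = m / np.linalg.norm(m, axis=1, keepdims=True)
    return normed @ normed.T

def similarity_correlation(lang_vectors, structural_features):
    # Spearman correlation between the two pairwise-similarity structures,
    # computed over the upper triangle (each unordered language pair once).
    a = pairwise_cosine(lang_vectors)
    b = pairwise_cosine(structural_features)
    iu = np.triu_indices(a.shape[0], k=1)
    return spearmanr(a[iu], b[iu])  # (rho, p-value)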

Show all publications by Robert Mikael Östling at Stockholm University