
Mats Wirén


Works at the Department of Linguistics (Institutionen för lingvistik)
Telephone +46 8 16 12 44
Visiting address Universitetsvägen 10 C, floors 2-3
Room C 361
Postal address Institutionen för lingvistik, 106 91 Stockholm

About me

Subject representative in computational linguistics




Computational Linguistics for Language Sciences, LIM014, 7.5 credits [in English]
Mathematical Methods for Linguists (Matematiska metoder för språkvetare), LIN420, 7.5 credits
Independent Project for the Degree of Master, LIM021, 15 credits [in English]
Linguistics – Bachelor's Course (Lingvistik – kandidatkurs), LIN601, 15 credits



A selection from Stockholm University's publication database
  • 2017. Mats Wirén, Kristina Nilsson Björkenstam, Robert Östling. Proceedings of the 18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017), 2203-2207

    Non-verbal cues from speakers, such as eye gaze and hand positions, play an important role in word learning. This is consistent with the notion that for meaning to be reconstructed, acoustic patterns need to be linked to time-synchronous patterns from at least one other modality. In previous studies of a multimodally annotated corpus of parent–child interaction, we have shown that parents interacting with infants at the early word-learning stage (7–9 months) display a large number of time-synchronous patterns, but that this behaviour tails off as the children grow older. Furthermore, we have attempted to quantify the informativeness of the different non-verbal cues, that is, to what extent they actually help to discriminate between different possible referents, and how critical the timing of the cues is. The purpose of this paper is to generalise our earlier model by quantifying the informativeness of non-verbal cues occurring both before and after their associated verbal references.

  • 2017. Robert Östling (et al.). Proceedings of the 21st Nordic Conference on Computational Linguistics, 303-308

    We describe the first effort to annotate a signed language with syntactic dependency structure: the Swedish Sign Language portion of the Universal Dependencies treebanks. The visual modality presents some unique challenges in analysis and annotation, such as the possibility of both hands articulating separate signs simultaneously, which has implications for the concept of projectivity in dependency grammars. Our data is sourced from the Swedish Sign Language Corpus; used in conjunction, these resources provide very richly annotated data: dependency structure and parts of speech, video recordings, and signer metadata. Since the whole material is also translated into Swedish, the corpus moreover constitutes a parallel text.

Show all publications by Mats Wirén at Stockholm University

Last updated: 23 October 2017
