
Petri Laukka, portrait. Photo: Niklas Björling.

Petri Laukka

Senior Lecturer

Works at the Department of Psychology
Telephone 08-16 39 35
E-mail petri.laukka@psychology.su.se
Visiting address Frescati hagväg 14
Room 133
Postal address Psykologiska institutionen, 106 91 Stockholm

Publications

A selection from Stockholm University's publication database
  • 2016. Diana S. Cortes, Petri Laukka, Håkan Fischer. Program of SANS 2016, 58-58

    People constantly evaluate faces to obtain social information. However, the link between aging and social evaluation of faces is not well understood. Todorov and colleagues introduced a data-driven model defined by valence and dominance as the two main components underlying social judgments of faces. They also created a stimulus set consisting of computer-generated faces which systematically vary along various social dimensions (e.g., Todorov et al., 2013, Emotion, 13, 724-38). We utilized a selection of these facial stimuli to investigate age-related differences in judgments of the following dimensions: attractiveness, competence, dominance, extraversion, likeability, threat, and trustworthiness. Participants rated how well the faces represented the intended social dimensions on 9-point scales ranging from not at all to extremely well. Results from 71 younger (YA; mean age = 23.42 years) and 60 older adults (OA; mean age = 69.19 years) showed that OA evaluated untrustworthy faces as more trustworthy, dislikeable faces as more likeable, and unattractive faces as more attractive compared to YA. OA also evaluated attractive faces as more attractive compared to YA, whereas YA rated likeable and trustworthy faces as more likeable and trustworthy than OA did. In summary, our findings showed that OA evaluated negative social features less negatively compared to YA. This suggests that older and younger persons may use different cues for the social evaluation of faces, which is in line with prior research suggesting an age-related decline in the ability to recognize negative emotion expressions.

  • 2016. Daniel Feingold (et al.). Psychiatry Research 240, 60-65

    Studies have shown that persons with schizophrenia have lower accuracy in emotion recognition compared to persons without schizophrenia. However, the impact of the complexity level of the stimuli or the modality of presentation has not been extensively addressed. Forty-three persons with a diagnosis of schizophrenia and 43 healthy controls, matched for age and gender, were administered tests assessing emotion recognition from stimuli with low and high levels of complexity presented via visual, auditory and semantic channels. For both groups, recognition rates were higher for high-complexity stimuli than for low-complexity stimuli. Additionally, both groups obtained higher recognition rates for visual and semantic stimuli than for auditory stimuli, but persons with schizophrenia obtained lower accuracy than persons in the control group for all presentation modalities. Persons diagnosed with schizophrenia did not present a complexity-specific or modality-specific deficit compared to healthy controls. Results suggest that emotion recognition deficits in schizophrenia extend beyond the complexity level of the stimuli and the modality of presentation, and reflect a global difficulty in cognitive functioning.

  • 2016. Anjali Bhatara (et al.). PLoS ONE 11 (6)

    The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions.

  • 2016. Sara Karlsson (et al.). Social Cognitive & Affective Neuroscience 11 (6), 877-883

    The ability to recognize the identity of faces and voices is essential for social relationships. Although the heritability of social memory is high, knowledge about the contributing genes is sparse. Since sex differences and rodent studies support an influence of estrogens and androgens on social memory, polymorphisms in the estrogen and androgen receptor genes (ESR1, ESR2, AR) are candidates for this trait. Recognition of faces and vocal sounds, separately and combined, was investigated in 490 subjects, genotyped for 10 single nucleotide polymorphisms (SNPs) in ESR1, four in ESR2 and one in the AR. Four of the associations survived correction for multiple testing: women carrying rare alleles of the three ESR2 SNPs, rs928554, rs1271572 and rs1256030, in linkage disequilibrium with each other, displayed superior face recognition compared with non-carriers. Furthermore, the uncommon genotype of the ESR1 SNP rs2504063 was associated with better recognition of identity through vocal sounds, also specifically in women. This study demonstrates evidence for associations in women between face recognition and variation in ESR2, and between recognition of identity through vocal sounds and variation in ESR1. These results suggest that estrogen receptors may regulate social memory function in humans, in line with what has previously been established in mice.

  • 2016. Florian Eyben (et al.). IEEE Transactions on Affective Computing 7 (2), 190-202

    Work on voice sciences over recent decades has led to a proliferation of acoustic parameters that are used quite selectively and are not always extracted in a similar fashion. With many independent teams working in different research areas, shared standards become an essential safeguard to ensure compliance with state-of-the-art methods, allowing appropriate comparison of results across studies and potential integration and combination of extraction and recognition systems. In this paper we propose a basic standard acoustic parameter set for various areas of automatic voice analysis, such as paralinguistic or clinical speech analysis. In contrast to a large brute-force parameter set, we present a minimalistic set of voice parameters here. These were selected based on a) their potential to index affective physiological changes in voice production, b) their proven value in former studies as well as their automatic extractability, and c) their theoretical significance. The set is intended to provide a common baseline for evaluation of future research and eliminate differences caused by varying parameter sets or even different implementations of the same parameters. Our implementation is publicly available with the openSMILE toolkit (see the extraction sketch after this list). Comparative evaluations of the proposed feature set and large baseline feature sets of INTERSPEECH challenges show a high performance of the proposed set in relation to its size.

  • 2016. Petri Laukka (et al.). Journal of Personality and Social Psychology 111 (5), 686-705

    This study extends previous work on emotion communication across cultures with a large-scale investigation of the physical expression cues in vocal tone. In doing so, it provides the first direct test of a key proposition of dialect theory, namely that greater accuracy of detecting emotions from one’s own cultural group—known as in-group advantage—results from a match between culturally specific schemas in emotional expression style and culturally specific schemas in emotion recognition. Study 1 used stimuli from 100 professional actors from five English-speaking nations vocally conveying 11 emotional states (anger, contempt, fear, happiness, interest, lust, neutral, pride, relief, sadness, and shame) using standard-content sentences. Detailed acoustic analyses showed many similarities across groups, and yet also systematic group differences. This provides evidence for cultural accents in expressive style at the level of acoustic cues. In Study 2, listeners evaluated these expressions in a 5 × 5 design balanced across groups. Cross-cultural accuracy was greater than expected by chance. However, there was also in-group advantage, which varied across emotions. A lens model analysis of fundamental acoustic properties examined patterns in emotional expression and perception within and across groups. Acoustic cues were used relatively similarly across groups both to produce and judge emotions, and yet there were also subtle cultural differences. Speakers appear to have a culturally nuanced schema for enacting vocal tones via acoustic cues, and perceivers have a culturally nuanced schema in judging them. Consistent with dialect theory’s prediction, in-group judgments showed a greater match between these schemas used for emotional expression and perception.

  • 2016. J.B.C. Holding (et al.). Abstracts of the 23rd Congress of the European Sleep Research Society, 13–16 September 2016, Bologna, Italy. Journal of Sleep Research, 152-152

    Previous studies have highlighted a deficit in facial emotion recognition after sleep loss. However, while some studies suggest an overall deficit in ability, others have only found effects for individual emotions, or no effect at all. The aim of this study was to investigate this relationship in a large sample and to utilise a dynamic test of emotion recognition in multiple modalities. 145 individuals (91 female, ages 18–45) participated in a sleep-deprivation experiment. Participants were randomised to either one night of total sleep deprivation (TSD) or a night of normal sleep (8–9 h in bed). The following day participants completed a computerised emotion recognition test consisting of 72 visual, audio, and audio-visual clips representing 12 different emotions. The stimuli were divided into “easy” and “hard” depending on the intensity of the emotional display. A mixed ANOVA revealed significant main effects of modality and difficulty, P < 0.001, but no main effect of condition, P = 0.31, on emotion recognition accuracy (a simplified sketch of this type of analysis is given after this list). Additionally, there was no interaction of condition with difficulty, P = 0.96, or with modality, P = 0.67. This study indicates that sleep deprivation does not reduce the ability to recognise emotions. Given that some studies have only found effects on single emotions, it is possible that the effects of sleep loss are more specific than investigated here. However, it is also possible that previous findings relate to the types of static stimuli used. The ability to recognise emotions is key to social perception; this study suggests that this ability is resilient to one night of sleep deprivation.
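
The Eyben et al. (2016) entry above notes that the proposed minimalistic parameter set is distributed with the openSMILE toolkit. The snippet below is a minimal extraction sketch, assuming the separately released opensmile Python wrapper and its GeMAPSv01b feature set; the file name speech_sample.wav is a placeholder, and none of this is the authors' own code.

```python
# Minimal sketch: extracting a minimalistic acoustic parameter set with the
# opensmile Python wrapper (pip install opensmile). The GeMAPSv01b feature set
# and the file name "speech_sample.wav" are assumptions for illustration.
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.GeMAPSv01b,        # minimalistic parameter set
    feature_level=opensmile.FeatureLevel.Functionals,   # one feature vector per file
)

features = smile.process_file("speech_sample.wav")      # pandas DataFrame, one row
print(features.shape)
print(features.columns.tolist()[:10])                   # first few parameter names
```

The same parameters can also be extracted with the stand-alone SMILExtract command-line tool shipped with openSMILE, using the corresponding configuration file.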
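
The Holding et al. (2016) abstract above reports a mixed ANOVA with sleep condition as a between-subjects factor and modality and difficulty as within-subjects factors. The sketch below is a simplified illustration of that kind of analysis, assuming synthetic data, a single within-subjects factor (modality), and the pingouin package; it is not the authors' analysis code.

```python
# Simplified mixed-ANOVA sketch with synthetic data; not the authors' analysis.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for subject in range(40):
    condition = "TSD" if subject < 20 else "sleep"          # between-subjects factor
    for modality in ["visual", "audio", "audio-visual"]:    # within-subjects factor
        rows.append({
            "subject": subject,
            "condition": condition,
            "modality": modality,
            "accuracy": rng.uniform(0.5, 0.9),              # mean recognition accuracy
        })
df = pd.DataFrame(rows)

# Mixed ANOVA: main effects of condition and modality plus their interaction.
aov = pg.mixed_anova(data=df, dv="accuracy", within="modality",
                     subject="subject", between="condition")
print(aov.round(3))
```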

Show all publications by Petri Laukka at Stockholm University

Last updated: 17 August 2017
