
Petri Laukka, portrait. Photo: Niklas Björling.

Petri Laukka

Senior Lecturer

Works at Department of Psychology
Telephone 08-16 39 35
Email petri.laukka@psychology.su.se
Visiting address Frescati hagväg 14
Room 133
Postal address Psykologiska institutionen 106 91 Stockholm

Publications

A selection from the Stockholm University publication database
  • 2018. Patrik N. Juslin, Petri Laukka, Tanja Bänziger. Journal of Nonverbal Behavior 42 (1), 1-40

    Whether posed expressions (e.g., actor portrayals) differ from spontaneous expressions has been the subject of much debate in the study of vocal expression of emotion. In the present investigation, we assembled a new database consisting of 1877 voice clips from 23 datasets, and used it to systematically compare spontaneous and posed expressions across 3 experiments. Results showed that (a) spontaneous expressions were generally rated as more genuinely emotional than posed expressions, even when controlling for differences in emotion intensity, (b) the two stimulus types differed in their acoustic characteristics, and (c) spontaneous expressions with high emotion intensity conveyed discrete emotions to listeners to a similar degree as has previously been found for posed expressions, supporting a dose-response relationship between intensity of expression and discreteness in perceived emotions. We conclude that there are reliable differences between spontaneous and posed expressions, though not necessarily in the ways commonly assumed. Implications for emotion theories and for the use of emotion portrayals in studies of vocal expression are discussed.

  • 2018. Daniel Hovey (et al.). Social Cognitive and Affective Neuroscience 13 (2), 173-181

    The ability to correctly understand the emotional expression of another person is essential for social relationships and appears to be a partly inherited trait. The neuropeptides oxytocin and vasopressin have been shown to influence this ability as well as face processing in humans. Here, recognition of the emotional content of faces and voices, separately and combined, was investigated in 492 subjects, genotyped for 25 single nucleotide polymorphisms (SNPs) in eight genes encoding proteins important for oxytocin and vasopressin neurotransmission. The SNP rs4778599 in the gene encoding aryl hydrocarbon receptor nuclear translocator 2 (ARNT2), a transcription factor that participates in the development of hypothalamic oxytocin and vasopressin neurons, showed an association with emotion recognition of audio–visual stimuli in women (n = 309) that survived correction for multiple testing. This study provides evidence for an association that further extends previous findings of oxytocin and vasopressin involvement in emotion recognition.

  • 2017. Henrik Nordström (et al.). Royal Society Open Science 4 (11)

    This study explored the perception of emotion appraisal dimensions on the basis of speech prosody in a cross-cultural setting. Professional actors from Australia and India vocally portrayed different emotions (anger, fear, happiness, pride, relief, sadness, serenity and shame) by enacting emotion-eliciting situations. In a balanced design, participants from Australia and India then inferred aspects of the emotion-eliciting situation from the vocal expressions, described in terms of appraisal dimensions (novelty, intrinsic pleasantness, goal conduciveness, urgency, power and norm compatibility). Bayesian analyses showed that the perceived appraisal profiles for the vocally expressed emotions were generally consistent with predictions based on appraisal theories. Few group differences emerged, which suggests that the perceived appraisal profiles are largely universal. However, some differences between Australian and Indian participants were also evident, mainly for ratings of norm compatibility. The appraisal ratings were further correlated with a variety of acoustic measures in exploratory analyses, and inspection of the acoustic profiles suggested similarity across groups. In summary, results showed that listeners may infer several aspects of emotion-eliciting situations from the non-verbal aspects of a speaker's voice. These appraisal inferences also seem to be relatively independent of the cultural background of the listener and the speaker.

  • 2017. Benjamin C. Holding (et al.). Sleep 40 (11)

    Objectives: Insufficient sleep has been associated with impaired recognition of facial emotions. However, previous studies have found inconsistent results, potentially stemming from the type of static picture task used. We therefore examined whether insufficient sleep was associated with decreased emotion recognition ability in two separate studies using a dynamic multimodal task.

    Methods: Study 1 used a cross-sectional design consisting of 291 participants with questionnaire measures assessing sleep duration and self-reported sleep quality for the previous night. Study 2 used an experimental design involving 181 participants where individuals were quasi-randomized into either a sleep-deprivation (N = 90) or a sleep-control (N = 91) condition. All participants from both studies were tested on the same forced-choice multimodal test of emotion recognition to assess the accuracy of emotion categorization.

    Results: Sleep duration, self-reported sleep quality (study 1), and sleep deprivation (study 2) did not predict overall emotion recognition accuracy or speed. Similarly, the responses to each of the twelve emotions tested showed no evidence of impaired recognition ability, apart from one positive association suggesting that greater self-reported sleep quality could predict more accurate recognition of disgust (study 1).

    Conclusions: The studies presented here involve considerably larger samples than previous studies and the results support the null hypotheses. Therefore, we suggest that the ability to accurately categorize the emotions of others is not associated with short-term sleep duration or sleep quality and is resilient to acute periods of insufficient sleep.

  • 2016. Petri Laukka (et al.). Journal of Personality and Social Psychology 111 (5), 686-705

    This study extends previous work on emotion communication across cultures with a large-scale investigation of the physical expression cues in vocal tone. In doing so, it provides the first direct test of a key proposition of dialect theory, namely that greater accuracy of detecting emotions from one’s own cultural group—known as in-group advantage—results from a match between culturally specific schemas in emotional expression style and culturally specific schemas in emotion recognition. Study 1 used stimuli from 100 professional actors from five English-speaking nations vocally conveying 11 emotional states (anger, contempt, fear, happiness, interest, lust, neutral, pride, relief, sadness, and shame) using standard-content sentences. Detailed acoustic analyses showed many similarities across groups, and yet also systematic group differences. This provides evidence for cultural accents in expressive style at the level of acoustic cues. In Study 2, listeners evaluated these expressions in a 5 × 5 design balanced across groups. Cross-cultural accuracy was greater than expected by chance. However, there was also in-group advantage, which varied across emotions. A lens model analysis of fundamental acoustic properties examined patterns in emotional expression and perception within and across groups. Acoustic cues were used relatively similarly across groups both to produce and judge emotions, and yet there were also subtle cultural differences. Speakers appear to have a culturally nuanced schema for enacting vocal tones via acoustic cues, and perceivers have a culturally nuanced schema in judging them. Consistent with dialect theory’s prediction, in-group judgments showed a greater match between these schemas used for emotional expression and perception.
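
    A schematic sketch of this kind of lens-model analysis appears after the publication list below.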

  • 2016. Florian Eyben (et al.). IEEE Transactions on Affective Computing 7 (2), 190-202

    Work on voice sciences over recent decades has led to a proliferation of acoustic parameters that are used quite selectively and are not always extracted in a similar fashion. With many independent teams working in different research areas, shared standards become an essential safeguard to ensure compliance with state-of-the-art methods, allowing appropriate comparison of results across studies and potential integration and combination of extraction and recognition systems. In this paper, we propose a basic standard acoustic parameter set for various areas of automatic voice analysis, such as paralinguistic or clinical speech analysis. In contrast to a large brute-force parameter set, we present a minimalistic set of voice parameters here. These were selected based on a) their potential to index affective physiological changes in voice production, b) their proven value in previous studies as well as their automatic extractability, and c) their theoretical significance. The set is intended to provide a common baseline for evaluation of future research and eliminate differences caused by varying parameter sets or even different implementations of the same parameters. Our implementation is publicly available with the openSMILE toolkit. Comparative evaluations of the proposed feature set and large baseline feature sets of INTERSPEECH challenges show a high performance of the proposed set in relation to its size.
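
    A minimal sketch of feature extraction with the openSMILE toolkit appears after the publication list below.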

  • 2015. Teruo Yamasaki, Keiko Yamada, Petri Laukka. Psychology of Music 43 (1), 61-74

    Questionnaire and interview studies suggest that music is valued for its role in managing the listener’s impression of the environment, but systematic investigations on the topic are scarce. We present a field experiment wherein participants were asked to rate their impression of four different environments (a quiet residential area, a suburban train ride, a busy crossroads, and a tranquil park) on bipolar adjective scales, while listening to music (which varied regarding level of perceived activation and valence) or in silence. Results showed that the evaluation of the environment generally shifted in the direction of the characteristics of the music, especially in conditions where the perceived characteristics of the music and environment were incongruent. For example, highly active music increased the activation ratings of environments which were perceived as inactive without music, whereas inactive music decreased the activation ratings of environments which were perceived as highly active without music. Also, highly positive music increased the positivity ratings of the environments. In sum, the findings suggest that music may function as a prism that modifies the impression of one’s surroundings. Different theoretical explanations of the results are discussed.

  • 2015. Joshua T. Kantrowitz (et al.). Journal of Neuroscience 35 (44), 14909-14921

    Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal (“prosodic”) features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention.

  • 2014. Petri Laukka, Daniel Neiberg, Hillary Anger Elfenbein. Emotion 14 (3), 445-449

    The possibility of cultural differences in the fundamental acoustic patterns used to express emotion through the voice is an unanswered question central to the larger debate about the universality versus cultural specificity of emotion. This study used emotionally inflected standard-content speech segments expressing 11 emotions produced by 100 professional actors from 5 English-speaking cultures. Machine learning simulations were employed to classify expressions based on their acoustic features, using conditions where training and testing were conducted on stimuli coming from either the same or different cultures. A wide range of emotions were classified with above-chance accuracy in cross-cultural conditions, suggesting vocal expressions share important characteristics across cultures. However, classification showed an in-group advantage with higher accuracy in within- versus cross-cultural conditions. This finding demonstrates cultural differences in expressive vocal style, and supports the dialect theory of emotions according to which greater recognition of expressions from in-group members results from greater familiarity with culturally specific expressive styles.
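
    A schematic sketch of this within- versus cross-cultural train/test setup appears after the publication list below.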

  • 2014. J. T. Kantrowitz (et al.). Psychological Medicine 44 (13), 2739-2748

    Background: Both language and music are thought to have evolved from a musical protolanguage that communicated social information, including emotion. Individuals with perceptual music disorders (amusia) show deficits in auditory emotion recognition (AER). Although auditory perceptual deficits have been studied in schizophrenia, their relationship with musical/protolinguistic competence has not previously been assessed.

    Method: Musical ability was assessed in 31 schizophrenia/schizo-affective patients and 44 healthy controls using the Montreal Battery for Evaluation of Amusia (MBEA). AER was assessed using a novel battery in which actors provided portrayals of five separate emotions. The Disorganization factor of the Positive and Negative Syndrome Scale (PANSS) was used as a proxy for language/thought disorder and the MATRICS Consensus Cognitive Battery (MCCB) was used to assess cognition.

    Results: Highly significant deficits were seen between patients and controls across auditory tasks (p < 0.001). Moreover, significant differences were seen in AER between the amusia and intact music-perceiving groups, which remained significant after controlling for group status and education. Correlations with AER were specific to the melody domain, and correlations between protolanguage (melody domain) and language were independent of overall cognition.

    Discussion: This is the first study to document a specific relationship between amusia, AER and thought disorder, suggesting a shared linguistic/protolinguistic impairment. Once amusia was considered, other cognitive factors were no longer significant predictors of AER, suggesting that musical ability in general and melodic discrimination ability in particular may be crucial targets for treatment development and cognitive remediation in schizophrenia.

  • 2013. Petri Laukka (et al.). Emotion 13 (3), 434-449

    We present a cross-cultural study on the performance and perception of affective expression in music. Professional bowed-string musicians from different musical traditions (Swedish folk music, Hindustani classical music, Japanese traditional music, and Western classical music) were instructed to perform short pieces of music to convey 11 emotions and related states to listeners. All musical stimuli were judged by Swedish, Indian, and Japanese participants in a balanced design, and a variety of acoustic and musical cues were extracted. Results first showed that the musicians' expressive intentions could be recognized with accuracy above chance both within and across musical cultures, but communication was, in general, more accurate for culturally familiar versus unfamiliar music, and for basic emotions versus nonbasic affective states. We further used a lens-model approach to describe the relations between the strategies that musicians use to convey various expressions and listeners' perceptions of the affective content of the music. Many acoustic and musical cues were similarly correlated with both the musicians' expressive intentions and the listeners' affective judgments across musical cultures, but the match between musicians' and listeners' uses of cues was better in within-cultural versus cross-cultural conditions. We conclude that affective expression in music may depend on a combination of universal and culture-specific factors.

  • 2013. Petri Laukka, Lina Quick. Psychology of Music 41 (2), 198-215

    Music is present in many sport and exercise situations, but empirical investigations on the motives for listening to music in sports remain scarce. In this study, Swedish elite athletes (N = 252) answered a questionnaire that focused on the emotional and motivational uses of music in sports and exercise. The questionnaire contained both quantitative items that assessed the prevalence of various uses of music, and open-ended items that targeted specific emotional episodes in relation to music in sports. Results showed that the athletes most often reported listening to music during pre-event preparations, warm-up, and training sessions; and the most common motives for listening to music were to increase pre-event activation, positive affect, motivation, performance levels and to experience flow. The athletes further reported that they mainly experienced positive affective states (e.g., happiness, alertness, confidence, relaxation) in relation to music in sports, and also reported on their beliefs about the causes of the musical emotion episodes in sports. In general, the results suggest that the athletes used music in purposeful ways in order to facilitate their training and performance.
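
Code sketches

The sketches below illustrate, in schematic form, some of the analysis techniques mentioned in the abstracts above. They are illustrative only: the data, variable names, and models are assumptions made for the sake of example, not the authors' actual implementations.

The lens-model analyses in Laukka et al. (2016) and Laukka et al. (2013) relate expressive cues both to the sender's intended expression and to listeners' judgments, and then compare the two cue-utilization patterns. A minimal sketch with simulated data, where all variables are hypothetical placeholders:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data for one acoustic cue (e.g., mean pitch) per voice clip.
    intended = (rng.random(100) < 0.5).astype(float)      # speaker intends "anger" or not
    cue = intended + rng.normal(scale=1.0, size=100)      # speakers raise the cue when expressing anger
    judged = 0.8 * cue + rng.normal(scale=0.5, size=100)  # listeners' mean anger ratings per clip

    # Encoder side: how strongly the cue tracks the intended expression.
    r_expression = np.corrcoef(cue, intended)[0, 1]

    # Decoder side: how strongly the cue tracks listeners' judgments.
    r_perception = np.corrcoef(cue, judged)[0, 1]

    # In a lens model, a close match between these two cue-utilization
    # patterns is what supports accurate communication.
    print(r_expression, r_perception)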
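
The standard acoustic parameter set proposed in Eyben et al. (2016) is distributed with the openSMILE toolkit, for which audEERING also publishes a Python wrapper. A minimal sketch of extracting the set's summary statistics (functionals) from an audio file, assuming the opensmile Python package is installed; the file name is a placeholder and the exact feature-set identifier may vary by package version:

    import opensmile

    # eGeMAPS is the extended variant of the minimalistic acoustic
    # parameter set described in Eyben et al. (2016).
    smile = opensmile.Smile(
        feature_set=opensmile.FeatureSet.eGeMAPSv02,
        feature_level=opensmile.FeatureLevel.Functionals,
    )

    # Returns a pandas DataFrame with one row of functionals per file.
    features = smile.process_file("speech_sample.wav")
    print(features.shape)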
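
The machine-learning simulations in Laukka, Neiberg, and Elfenbein (2014) train emotion classifiers on acoustic features from one culture and test them on the same or a different culture. A schematic sketch of that comparison using scikit-learn; the feature matrix, labels, and choice of classifier are assumptions, not the paper's actual setup:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def classification_accuracy(X, y, culture, train_culture, test_culture):
        """Train on clips from one culture and test on another.

        X: acoustic feature matrix (n_clips x n_features); y: emotion labels;
        culture: culture label per clip. All inputs are hypothetical placeholders.
        """
        clf = make_pipeline(StandardScaler(), SVC())
        train_mask = culture == train_culture
        test_mask = culture == test_culture
        clf.fit(X[train_mask], y[train_mask])
        return clf.score(X[test_mask], y[test_mask])

    # An in-group advantage shows up as higher within- than cross-cultural
    # accuracy, e.g. classification_accuracy(X, y, culture, "US", "AUS")
    # versus a within-culture estimate (which would need a held-out split
    # or cross-validation rather than testing on the training data).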

Show all publications by Petri Laukka at Stockholm University

Last updated: December 7, 2018
