Stockholm University

Ronald Van den Berg, Senior Lecturer, Docent

Research projects

Publications

A selection from the Stockholm University publication database

  • Further perceptions of probability: In defence of associative models

    2023. Mattias Forsgren, Peter Juslin, Ronald Van den Berg. Psychological Review

    Article

    Extensive research in the behavioral sciences has addressed people’s ability to learn stationary probabilities, which stay constant over time, but only recently have there been attempts to model the cognitive processes whereby people learn—and track—nonstationary probabilities. In this context, the old debate on whether learning occurs by the gradual formation of associations or by occasional shifts between hypotheses representing beliefs about distal states of the world has resurfaced. Gallistel et al. (2014) pitched the two theories against each other in a nonstationary probability learning task. They concluded that various qualitative patterns in their data were incompatible with trial-by-trial associative learning and could only be explained by a hypothesis-testing model. Here, we contest that claim and demonstrate that it was premature. First, we argue that their experimental paradigm consisted of two distinct tasks: probability tracking (an estimation task) and change detection (a decision-making task). Next, we present a model that uses the (associative) delta learning rule for the probability tracking task and bounded evidence accumulation for the change detection task. We find that this combination of two highly established theories accounts well for all qualitative phenomena and outperforms the alternative model proposed by Gallistel et al. (2014) in a quantitative model comparison. In the spirit of cumulative science, we conclude that current experimental data on human learning of nonstationary probabilities can be explained as a combination of associative learning and bounded evidence accumulation and does not require a new model.

    Read more about Further perceptions of probability
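
    As a reading aid for the abstract above, a minimal Python sketch of the two ingredients it combines: a delta-rule tracker for the probability estimate and a bounded evidence accumulator for change detection. The function name, the leaky form of the accumulator, and all parameter values are illustrative assumptions, not the model fitted in the paper.

    def track_probability(outcomes, alpha=0.1, decay=0.8, bound=1.5):
        """Track a possibly nonstationary Bernoulli probability.

        alpha : delta-rule learning rate (illustrative value)
        decay : leak of the evidence accumulator (illustrative value)
        bound : evidence threshold for reporting a change (illustrative value)
        """
        p_hat = 0.5                        # current probability estimate
        evidence = 0.0                     # accumulated evidence for a change
        estimates, change_points = [], []
        for t, x in enumerate(outcomes):
            p_hat += alpha * (x - p_hat)               # delta rule (associative update)
            evidence = decay * evidence + (x - p_hat)  # leaky accumulation of surprise
            if abs(evidence) > bound:                  # bound reached: report a change
                change_points.append(t)
                evidence = 0.0
            estimates.append(p_hat)
        return estimates, change_points

    # Toy sequence: the underlying probability jumps from ~0 to ~1 at trial 10.
    outcomes = [0] * 10 + [1] * 10
    estimates, change_points = track_probability(outcomes)
    print(change_points)  # one change, reported a few trials after the jump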
  • No effect of monetary reward in a visual working memory task

    2023. Ronald Van den Berg (et al.). PLOS ONE 18 (1)

    Article

    Previous work has shown that humans distribute their visual working memory (VWM) resources flexibly across items: the higher the importance of an item, the better it is remembered. A related, but much less studied question is whether people also have control over the total amount of VWM resource allocated to a task. Here, we approach this question by testing whether increasing monetary incentives results in better overall VWM performance. In three experiments, subjects performed a delayed-estimation task on the Amazon Turk platform. In the first two experiments, four groups of subjects received a bonus payment based on their performance, with the maximum bonus ranging from $0 to $10 between groups. We found no effect of the amount of bonus on intrinsic motivation or on VWM performance in either experiment. In the third experiment, reward was manipulated on a trial-by-trial basis using a within-subjects design. Again, no evidence was found that VWM performance depended on the magnitude of potential reward. These results suggest that encoding quality in visual working memory is insensitive to monetary reward, which has implications for resource-rational theories of VWM.

    Read more about No effect of monetary reward in a visual working memory task
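
    As context for the abstract above, a small Python sketch of how performance in a delayed-estimation task is commonly summarized: as the circular error between the reported and the true feature value (for example, a colour on a colour wheel). The stimulus and response values below are invented for the example and are not data from the study.

    import math

    def circular_error(reported, true):
        """Signed error in radians, wrapped to the interval (-pi, pi]."""
        err = (reported - true) % (2 * math.pi)
        return err - 2 * math.pi if err > math.pi else err

    true_colours     = [0.3, 2.1, 4.8, 5.9]  # item colours in radians on the wheel
    reported_colours = [0.5, 1.8, 4.9, 0.2]  # a participant's responses
    errors = [circular_error(r, t) for r, t in zip(reported_colours, true_colours)]
    mean_abs_error = sum(abs(e) for e in errors) / len(errors)
    print([round(e, 2) for e in errors], round(mean_abs_error, 2))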
  • On the generality and cognitive basis of base-rate neglect

    2022. Elina Stengård (et al.). Cognition 226

    Article

    Base rate neglect refers to people's apparent tendency to underweight or even ignore base rate information when estimating posterior probabilities for events, such as the probability that a person with a positive cancer-test outcome actually does have cancer. While often replicated, almost all evidence for the phenomenon comes from studies that used problems with extremely low base rates, high hit rates, and low false alarm rates. It is currently unclear whether the effect generalizes to reasoning problems outside this “corner” of the entire problem space. Another limitation of previous studies is that they have focused on describing empirical patterns of the effect at the group level and not so much on the underlying strategies and individual differences. Here, we address these two limitations by testing participants on a broader problem space and modeling their responses at a single-participant level. We find that the empirical patterns that have served as evidence for base-rate neglect generalize to a larger problem space, albeit with large individual differences in the extent to which participants “neglect” base rates. In particular, we find a bimodal distribution consisting of one group of participants who almost entirely ignore the base rate and another group who almost entirely account for it. This heterogeneity is reflected in the cognitive modeling results: participants in the former group were best captured by a linear-additive model, while participants in the latter group were best captured by a Bayesian model. We find little evidence for heuristic models. Altogether, these results suggest that the effect known as “base-rate neglect” generalizes to a large set of reasoning problems, but varies considerably across participants and may need a reinterpretation in terms of the underlying cognitive mechanisms.

    Read more about On the generality and cognitive basis of base-rate neglect
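
    To make the abstract above concrete, a short Python sketch of the classic cancer-test problem: the posterior computed with Bayes' rule next to a base-rate-neglect answer that relies on the hit rate alone. The numbers are textbook-style values chosen for illustration, not stimuli from the study.

    def posterior(base_rate, hit_rate, false_alarm_rate):
        """P(condition | positive test) by Bayes' rule."""
        p_positive = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
        return hit_rate * base_rate / p_positive

    base_rate, hit_rate, false_alarm_rate = 0.01, 0.90, 0.05
    bayes_answer = posterior(base_rate, hit_rate, false_alarm_rate)
    neglect_answer = hit_rate  # ignoring the base rate entirely
    print(round(bayes_answer, 3), neglect_answer)  # ~0.154 versus 0.9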
  • How Deep Is Your Bayesianism? Peeling the Layers of the Intuitive Bayesian

    2022. Elina Stengård, Peter Juslin, Ronald van den Berg. Decision

    Article

    Studies in perception have found that humans often behave in accordance with Bayesian principles, while studies in higher-level cognition tend to find the opposite. A key methodological difference is that perceptual studies typically focus on whether people weight sensory cues according to their precision (determined by sensory noise levels), while studies with cognitive tasks concentrate on explicit inverse inference from likelihoods to posteriors. Here, we investigate whether laypeople spontaneously engage in precision weighting in three cognitive inference tasks that require combining prior information with new data. We peel the layers of the “intuitive Bayesian” by categorizing participants into four categories: (a) No appreciation for the need to consider both prior and data; (b) Consideration of both prior and data; (c) Appreciation of the need to weight the prior and data according to their precision; (d) Ability to explicitly distinguish the inverse probabilities and perform inferences from description (rather than experience). The results suggest that with a lenient coding criterion, 58% of the participants appreciated the need to consider both the prior and data, 25% appreciated the need to weight them with their precision, but only 12% correctly solved the tasks that required understanding of inverse probabilities. Hence, while many participants weigh the data against priors, as in perceptual studies, they seem to have difficulty with “unpacking” symbols into their real-world extensions, like frequencies and sample sizes, and understanding inverse probability. Regardless of other task differences, people thus have greater difficulty with aspects of Bayesian performance typically probed in “cognitive studies.”

    Read more about How Deep Is Your Bayesianism? Peeling the Layers of the Intuitive Bayesian
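
    As an illustration of category (c) in the abstract above, a minimal Python sketch of precision weighting under a Gaussian assumption: the prior and the data are combined in proportion to their precisions (inverse variances). The numbers are invented for the sketch and do not come from the study.

    def precision_weighted_estimate(prior_mean, prior_var, data_mean, data_var):
        """Combine a Gaussian prior and Gaussian data by precision weighting."""
        w_prior = 1.0 / prior_var       # precision of the prior
        w_data = 1.0 / data_var         # precision of the data
        mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
        var = 1.0 / (w_prior + w_data)  # posterior variance
        return mean, var

    # A precise prior (variance 1) pulls the estimate more than noisy data (variance 4).
    mean, var = precision_weighted_estimate(prior_mean=10.0, prior_var=1.0,
                                            data_mean=20.0, data_var=4.0)
    print(round(mean, 1), round(var, 2))  # 12.0 0.8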

See all publications by Ronald Van den Berg at Stockholm University