About me
I am a professor of Mathematical Statistics (Statistical Science) at Stockholm University. I formally retired in October 2009, but have since been re-employed part-time to give one or two courses per year and to take part in palaeoclimate research.
My scientific interests
My interests are in statistical modelling and statistical methods, both theory and applications.
Fields of active research or particular competence are:
- Inferential principles
- Exponential families, theory and applications (writing a book)
- Climate models and palaeoclimate statistics
- Latent factor analysis (e.g. an article published 2016)
- Biostatistics, in particular for molecular biology
- Chemometrics (regression and multivariate calibration)
- Sampling survey inference (in particular model-based inference)
- Statistical methods in stereology
- Use of experimental design
- Applied statistics in statistical consulting (annual course)
Courses given recently
During autumn 2017 I am giving, as usual, my course in Statistical Consulting, involving clients and projects from the departmental consulting service.
I gave that course in previous academic years as well. In spring 2015 I was also responsible for a study group of Ph.D. students on sampling theory.
In the period 2007 to 2012 I gave two courses per year at Master's/Ph.D. level:
Statistical models, based on my own lecture notes on parametric statistical inference and exponential families, and the course in
Statistical Consulting, see above.
In 2012 and 2013 I gave part of the Linear models course.
Autumn 2004 I gave a course on Statistical theory for exponential families. Spring term 2005 I taught Linear statistical models and Statistical consulting methodology (this time in English), and led a course on Statistics for microarrays. Spring term 2006 I taught Linear statistical models and Principles of statistical inference (graduate level; based on a book manuscript by David Cox).
Some recent talks
Sept. 2015 I gave a talk at the Past Earth Network (PEN) Conference in Crewe, England, jointly with Anders Moberg (Bolin Centre): Statistical framework for evaluation of climate model simulations by use of climate proxy data.
Dec. 2015 I gave a talk for Master's students in Statistics at SU: Statistical and other models for palaeoclimate research.
June 2016 I gave a talk for students participating in the Research Academy for Young Scientists (RAYS) workshop, Strängnäs.
December 2016 I gave most of an intensive three-day course on statistical inference theory for Ph.D. students in statistics, with emphasis on exponential family models and on hypothesis testing.
17 May 2017 I gave a talk at my department: "Shaved dice" inference – two contrasting points of view of a simple situation.
23 Oct. 2017 I gave an invited talk, "Statistical inference when dimension (much) exceeds sample size", at a conference at KTH on high-dimensional data and big data.
Recent Ph.D. students under my supervision,
and their areas of research:
Marie Linder (Ph.D. dissertation 15 Jan 1999):
(Bilinear regression and second order calibration)
Anders Björkström (Licentiate exam 1998, PhD dissertation 28 Sept 2007):
(Generalized ridge regression and other regression methods for near-collinear data)
Niklas Norén (Licentiate exam 2005, PhD dissertation 7 May 2007):
(Searching in databases for information on side effects of medications; co-advisor Ralph Edwards)
Anna Stoltenberg (Licentiate exam 23 Sept. 2009):
(Statistical analysis of ordered categorical data in pharmaceutical trials; co-advisor Olivier Guilbaud, AstraZeneca)
Jelena Bojarova (Licentiate exam 2004, PhD dissertation 4 June 2010):
(Toward sequential data assimilation for NWP models using Kalman filter tools)
Ekaterina Fetisova (Licentiate exam 2015, PhD dissertation 12 Dec. 2017):
(Statistical modelling in palaeoclimatology; I was co-advisor)
Contact information
Email address: rolfs at math.su.se, or rolfsundberg1942 at telia.com
Older webpage at http://staff.math.su.se/rolfs/ including CV and Publication list with links
I can also be found on ResearchGate.
Teaching
Academic year 2017–2018 I will give the course on Statistical Consulting.
Research
Research activities 2015–2017
Palaeoclimate research: Inference about palaeoclimate simulation models by comparison with instrumental and proxy climate data. In particular, I am one of the authors of a group of three papers published in Climate of the Past, 2012 (two papers) and 2015, and first author of Part 1 (Theory):
Statistical framework for evaluation of climate model simulations by use of climate proxy data from the last millennium – Parts 1–3. Open access.
Paper with Uwe Feldmann (Saarland Univ.) on factor analysis, published June 2016 in Journal of Multivariate Analysis, vol. 148, pp. 49–59: Exploratory factor analysis – Parameter estimation and scores prediction with high-dimensional data. Open access.
Exponential families: I am writing a monograph for Cambridge University Press: Statistical modelling by exponential families. A largely overlapping manuscript exists as lecture notes for the course Statistical models, last version Nov. 2016. An accepted journal manuscript also belongs to this area: A note on "shaved dice" inference (to appear in The American Statistician, 2018).
Here is a paper where my main role was the analysis of designed experiments:
Sara Gummesson et al.: Lithic raw material economy in the Mesolithic: An experimental test of edged tool efficiency and durability in bone tool production. Published in Lithic Technology, vol. 42 (4), 2017.
Publications
A selection from Stockholm University publication database
2017. Sara Gummesson (et al.). Lithic Technology 42 (4)
The foundation of this paper is lithic economy with a focus on the actual use of different lithic raw materials for tasks at hand. Our specific focus is on the production of bone tools during the Mesolithic. The lithic and osseous assemblages from Strandvägen, Motala, in east-central Sweden provide the archaeological background for the study. Based on a series of experiments we evaluate the efficiency and durability of different tool edges of five lithic raw materials: Cambrian flint, Cretaceous flint, mylonitic quartz, quartz, and porphyry, each used to whittle bone. The results show that flint is the most efficient of the raw materials assessed. Thus, a non-local raw material offers complementary functional characteristics for bone working compared to locally available quartz and mylonitic quartz. This finding provides a new insight into lithic raw material distribution in the region, specifically for bone tool production on site.

Article: Exploratory factor analysis – Parameter estimation and scores prediction with high-dimensional data. 2016. Rolf Sundberg, Uwe Feldmann. Journal of Multivariate Analysis 148, 49–59
In an approach aiming at high-dimensional situations, we first introduce a distribution-free approach to parameter estimation in the standard random factor model, which is shown to lead to the same estimating equations as maximum likelihood estimation under normality. The derivation is considerably simpler, and works equally well in the case of more variables than observations (p > n). We next concentrate on the latter case and show results of the following type: although factor loadings and specific variances cannot be precisely estimated unless n is large, this is not needed for the factor scores to be precise; for that, only p need be large. Also, a classical fixed point iteration method can be expected to converge safely and rapidly, provided p is large. A microarray data set, with p = 2000 and n = 22, is used to illustrate this theoretical result.
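The role of large p for score precision can be illustrated with a small simulation. This is only an illustrative sketch, not code from the paper: it predicts scores from the true loadings rather than estimated ones, and all names and parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, k = 2000, 22, 2                     # many variables, few observations, k factors

# Simulate the standard random factor model: x = Lambda f + e
Lam = rng.normal(size=(p, k))             # factor loadings
psi = rng.uniform(0.5, 1.5, size=p)       # specific variances (diagonal of Psi)
F = rng.normal(size=(n, k))               # true factor scores
X = F @ Lam.T + rng.normal(size=(n, p)) * np.sqrt(psi)

# Regression-type factor scores, written via the Woodbury identity so that
# only a k x k matrix is inverted, never a p x p one:
#   f_hat = (I + Lam' Psi^{-1} Lam)^{-1} Lam' Psi^{-1} x
A = Lam.T / psi                           # k x p, equals Lam' Psi^{-1}
M = np.eye(k) + A @ Lam                   # k x k
F_hat = np.linalg.solve(M, A @ X.T).T     # n x k predicted scores

# With p this large, the predicted scores track the true ones closely,
# even though n is far too small to estimate the loadings well.
for j in range(k):
    r = np.corrcoef(F[:, j], F_hat[:, j])[0, 1]
    print(f"factor {j}: corr(true, predicted) = {r:.3f}")
```

The point of the Woodbury form is purely computational: with p = 2000 a direct inversion of the p x p covariance would be wasteful, while M is only k x k.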

2015. Anders Moberg (et al.). Climate of the Past 11 (3), 425–448
A statistical framework for evaluation of climate model simulations by comparison with climate observations from instrumental and proxy data (part 1 in this series) is improved by the relaxation of two assumptions. This allows autocorrelation in the statistical model for simulated internal climate variability and enables direct comparison of two alternative forced simulations to test whether one fits the observations significantly better than the other. The extended framework is applied to a set of simulations driven with forcings for the pre-industrial period 1000–1849 CE and 15 tree-ring-based temperature proxy series. Simulations run with only one external forcing (land use, volcanic, small-amplitude solar, or large-amplitude solar) do not significantly capture the variability in the tree-ring data, although the simulation with volcanic forcing does so for some experiment settings. When all forcings are combined (using either the small- or large-amplitude solar forcing), including also orbital, greenhouse-gas and non-volcanic aerosol forcing, and additionally used to produce small simulation ensembles starting from slightly different initial ocean conditions, the resulting simulations are highly capable of capturing some observed variability. Nevertheless, for some choices in the experiment design, they are not significantly closer to the observations than when unforced simulations are used, due to highly variable results between regions. It is also not possible to tell whether the small-amplitude or large-amplitude solar forcing causes the multiple-forcing simulations to be closer to the reconstructed temperature variability. Proxy data from more regions and of more types, or representing larger regions and complementary seasons, are apparently needed for more conclusive results from model-data comparisons in the last millennium.

2012. Alistair Hind, Anders Moberg, Rolf Sundberg. Climate of the Past 8 (4), 1355–1365
The statistical framework of Part 1 (Sundberg et al., 2012), for comparing ensemble simulation surface temperature output with temperature proxy and instrumental records, is implemented in a pseudo-proxy experiment. A set of previously published millennial forced simulations (Max Planck Institute – COSMOS), including both "low" and "high" solar radiative forcing histories together with other important forcings, was used to define "true" target temperatures as well as pseudo-proxy and pseudo-instrumental series. In a global land-only experiment, using annual mean temperatures at a 30-yr time resolution with realistic proxy noise levels, it was found that the low and high solar full-forcing simulations could be distinguished. In an additional experiment, where pseudo-proxies were created to reflect a current set of proxy locations and noise levels, the low and high solar forcing simulations could only be distinguished when the latter served as targets. To improve detectability of the low solar simulations, increasing the signal-to-noise ratio in local temperature proxies was more efficient than increasing the spatial coverage of the proxy network. The experiences gained here will be of guidance when these methods are applied to real proxy and instrumental data, for example when the aim is to distinguish which of the alternative solar forcing histories is most compatible with the observed/reconstructed climate.

2012. Rolf Sundberg, Anders Moberg, Alistair Hind. Climate of the Past 8 (4), 1339–1353
A statistical framework for comparing the output of ensemble simulations from global climate models with networks of climate proxy and instrumental records has been developed, focusing on near-surface temperatures for the last millennium. This framework includes the formulation of a joint statistical model for proxy data, instrumental data and simulation data, which is used to optimize a quadratic distance measure for ranking climate model simulations. An essential underlying assumption is that the simulations and the proxy/instrumental series have a shared component of variability that is due to temporal changes in external forcing, such as volcanic aerosol load, solar irradiance or greenhouse gas concentrations. Two statistical tests have been formulated. Firstly, a preliminary test establishes whether a significant temporal correlation exists between instrumental/proxy and simulation data. Secondly, the distance measure is expressed in the form of a test statistic of whether a forced simulation is closer to the instrumental/proxy series than unforced simulations. The proposed framework allows any number of proxy locations to be used jointly, with different seasons, record lengths and statistical precision. The goal is to objectively rank several competing climate model simulations (e.g. with alternative model parameterizations or alternative forcing histories) by means of their goodness of fit to the unobservable true past climate variations, as estimated from noisy proxy data and instrumental observations.

2010. Rolf Sundberg. Scandinavian Journal of Statistics 37 (4), 632–643
It is well known that curved exponential families can have multimodal likelihoods. We investigate the relationship between flat or multimodal likelihoods and model lack of fit, the latter measured by the score (Rao) test statistic of the curved model as embedded in the corresponding full model. When data yield a locally flat or convex likelihood (root of multiplicity > 1, terrace point, saddle point, local minimum), we provide a formula for the test statistic at such points, or a lower bound for it. The formula is related to the statistical curvature of the model, and it depends on the amount of Fisher information. We use three models as examples, including the Behrens–Fisher model, to see how a flat likelihood, etc., by itself can indicate a bad fit of the model. The results are related (dual) to classical results by Efron from 1978.
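In generic notation (my own, not quoted from the paper), the lack-of-fit measure referred to here is the Rao score statistic of the curved submodel, computed in the embedding full family:

```latex
% Score (Rao) test of a curved submodel within a full exponential family.
% U and I are the score function and Fisher information of the full model,
% and \hat\theta is the maximum likelihood estimate under the curved submodel.
W = U(\hat\theta)^{\top} \, I(\hat\theta)^{-1} \, U(\hat\theta)
```

Under the curved submodel, W is asymptotically chi-squared distributed, with degrees of freedom equal to the difference in dimension between the full and curved models.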

2010. Jelena Bojarova, Rolf Sundberg. Environmetrics 21 (6), 562–587
Statistical modelling of six time series of geological ice core chemical data from Greenland is discussed. We decompose the total variation into long timescale (trend) and short timescale variations (fluctuations around the trend), and a pure noise component. Too heavy tails of the short-term variation make a standard time-invariant linear Gaussian model inadequate. We try non-Gaussian state space models, which can be efficiently approximated by time-dependent Gaussian models. In essence, these time-dependent Gaussian models result in a local smoothing, in contrast to the global smoothing provided by the time-invariant model. To describe the mechanism of this local smoothing, we utilise the concept of a local variance function derived from a heavy-tailed density. The time-dependent error variance expresses the uncertainty about the dynamical development of the model state, and it controls the influence of observations on the estimates of the model state components. The great advantage of the derived time-dependent Gaussian model is that the Kalman filter and the Kalman smoother can be used as efficient computational tools for performing the variation decomposition. One of the main objectives of the study is to investigate how the distributional assumption on the model error component of the short timescale variation affects the decomposition.
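The filter/smoother mechanics behind such a trend decomposition can be sketched in their simplest Gaussian form. This is not the paper's non-Gaussian model, just the standard local level model with a Rauch–Tung–Striebel smoother; the toy series and all variance parameters below are invented.

```python
import numpy as np

def local_level_smoother(y, q, r, mu0=0.0, p0=1e6):
    """Kalman filter + RTS smoother for the local level model
       mu_t = mu_{t-1} + N(0, q),   y_t = mu_t + N(0, r)."""
    n = len(y)
    mu_f = np.empty(n); p_f = np.empty(n)     # filtered mean / variance
    mu_p = np.empty(n); p_p = np.empty(n)     # one-step predictions
    mu, p = mu0, p0                           # diffuse-ish initial state
    for t in range(n):
        mu_p[t], p_p[t] = mu, p + q           # predict
        k = p_p[t] / (p_p[t] + r)             # Kalman gain
        mu = mu_p[t] + k * (y[t] - mu_p[t])   # update with observation y_t
        p = (1 - k) * p_p[t]
        mu_f[t], p_f[t] = mu, p
    mu_s = mu_f.copy()                        # RTS backward smoothing pass
    for t in range(n - 2, -1, -1):
        c = p_f[t] / p_p[t + 1]
        mu_s[t] = mu_f[t] + c * (mu_s[t + 1] - mu_p[t + 1])
    return mu_s

# Toy series: a slow trend plus noise, in the spirit of a trend/fluctuation split
t = np.linspace(0, 1, 300)
rng = np.random.default_rng(1)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.4, size=t.size)
trend = local_level_smoother(y, q=1e-3, r=0.16)
print("residual sd:", np.std(y - trend))
```

The ratio q/r plays the role of a smoothing parameter: a smaller q relative to r gives a smoother trend and pushes more of the variation into the residual, which is the time-invariant "global smoothing" the abstract contrasts with the local, time-dependent variances of the paper's model.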

2008. Rolf Sundberg. Journal of Chemometrics 22, 436–440
A Plackett–Burman type dataset from a paper by Williams (1968), with 28 observations and 24 two-level factors, has become a standard dataset for illustrating construction (by halving) of supersaturated designs (SSDs) and for a corresponding data analysis. The aim here is to point out that for several reasons this is an unfortunate situation. The original paper by Williams contains several errors and misprints. Some are in the design matrix, which will here be reconstructed, but worse is an outlier in the response values, which can be observed when data are plotted against the dominating factor. In addition, the data are better analysed on log scale than on the original scale. The implications of the outlier for SSD analysis are drastic, and it will be concluded that the data should be used for this purpose only if the outlier is properly treated (omitted or modified).

2008. G. Niklas Norén (et al.). Statistics in Medicine 27 (16), 3057–3070
Interaction between drug substances may yield excessive risk of adverse drug reactions (ADRs) when two drugs are taken in combination. Collections of individual case safety reports (ICSRs) related to suspected ADR incidents in clinical practice have proven to be very useful in post-marketing surveillance for pairwise drug–ADR associations, but have yet to reach their full potential for drug–drug interaction surveillance. In this paper, we implement and evaluate a shrinkage observed-to-expected ratio for exploratory analysis of suspected drug–drug interaction in ICSR data, based on comparison with an additive risk model. We argue that the limited success of previously proposed methods for drug–drug interaction detection based on ICSR data may be due to an underlying assumption that the absence of interaction is equivalent to having multiplicative risk factors. We provide empirical examples of established drug–drug interaction highlighted with our proposed approach that go undetected with logistic regression. A database-wide screen for suspected drug–drug interaction in the entire WHO database is carried out to demonstrate the feasibility of the proposed approach. As always in the analysis of ICSRs, the clinical validity of hypotheses raised with the proposed method must be further reviewed and evaluated by subject matter experts.
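The additive-risk comparison described here can be illustrated with a toy calculation. This is only a sketch of the idea, not the paper's actual measure: the report counts are invented, and the simple additive constant used for shrinkage is an illustrative stand-in for the paper's shrinkage construction.

```python
import math

# Report counts: key = (drug1 taken, drug2 taken), value = (ADR reports, total reports).
# All numbers are invented for illustration.
counts = {
    (False, False): (40, 10000),
    (True,  False): (30,  1000),
    (False, True):  (25,  1000),
    (True,  True):  (20,   200),
}

def shrunk_log_oe(counts, alpha=0.5):
    """Observed-to-expected ratio for reports with both drugs, where the
    expected ADR risk assumes additive excess risks (no interaction):
        p11_expected = p10 + p01 - p00.
    The constant alpha shrinks the ratio towards 1 for sparse counts."""
    p = {k: a / n for k, (a, n) in counts.items()}
    p11_exp = max(p[(True, False)] + p[(False, True)] - p[(False, False)], 0.0)
    observed = counts[(True, True)][0]
    expected = p11_exp * counts[(True, True)][1]
    return math.log2((observed + alpha) / (expected + alpha))

# A clearly positive value flags more ADR reports for the drug pair than the
# additive (no-interaction) model predicts, i.e. a suspected interaction.
print("shrunk log2(O/E):", round(shrunk_log_oe(counts), 3))
```

Note the contrast with a multiplicative baseline: under these counts the combination roughly multiplies neither risk, yet it clearly exceeds the additive expectation, which is exactly the kind of signal the abstract says logistic regression (implicitly multiplicative) can miss.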

2008. Petra von Stein, Jan-Olov Persson, Rolf Sundberg. Gastroenterology 134 (7), 1869–1881