Paul De Boeck, The Ohio State University

All models are wrong, but some model violations are useful

Keynote Speaker

2021 Career Award for Lifetime Achievement

Date & Time: Thursday, July 22 at 12:00pm (Noon) EDT

Measurement models focus on what needs to be measured, and model violations are potential measurement distorters. From a different perspective, however, the violations are a potential source of information about aspects of the underlying processes that would otherwise remain unnoticed, and they may also have consequences for how the resulting measurements should be treated, even when the violations are considered minor. I will discuss three cases of measurement model violations. Two refer to residual dependencies, and the third is possibly of that type as well.

Example 1. Ability and speed are two different dimensions of test responses. A closer look shows that there are item-level dependencies between response time and accuracy, which may inform us about the response process and the unresolved issue of power versus speed in cognitive ability tests.

Example 2. Using fMRI to measure brain activation shows that brain activation and ability are related, but a closer look again shows that there are residual item-level dependencies, which can inform us about cognitive processes. I will make the more general claim that, under certain conditions, item-level dependencies can be generalized to within-person dependencies across items.

Example 3. Because most measurement models have only approximate fit, it seems reasonable to conjecture that most psychological measures are approximate and most likely confounded in variable ways. Unfortunately, this approximate nature is ignored when relationships between variables are investigated. With a small simulation study, I will show that this leads to underestimated uncertainty and to p-values that are too small. Ignoring the approximate nature of measures may also have contributed to the continuing belief that all null hypotheses are false (the so-called crud factor).

These are just three examples; there are more. Taking a closer look at model violations and their consequences, even in the presence of reasonable goodness of fit, can be useful for purposes other than improving model fit.
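To give a rough sense of the kind of issue raised in Example 3, the sketch below simulates two traits that are truly uncorrelated but whose measures each absorb a small, varying share of a common nuisance component, standing in for measures that are approximate and confounded in variable ways. This is only an illustrative toy simulation, not the speaker's actual study; the sample size, confound weights, and noise level are assumptions chosen for the illustration.

```python
# Minimal sketch (illustrative assumptions, not the speaker's simulation):
# two uncorrelated traits, each measured with a small shared confound.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2021)
n_persons, n_reps, alpha = 500, 2000, 0.05
rejections = 0

for _ in range(n_reps):
    trait_a = rng.normal(size=n_persons)    # true score behind measure x
    trait_b = rng.normal(size=n_persons)    # true score behind measure y, independent of trait_a
    nuisance = rng.normal(size=n_persons)   # shared confound, e.g. a method factor
    w = rng.uniform(0.2, 0.4)               # confounding strength varies across replications
    x = trait_a + w * nuisance + rng.normal(scale=0.5, size=n_persons)
    y = trait_b + w * nuisance + rng.normal(scale=0.5, size=n_persons)

    # Standard significance test of the correlation, treating x and y as exact measures
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((n_persons - 2) / (1 - r**2))
    p = 2 * stats.t.sf(abs(t), df=n_persons - 2)
    rejections += int(p < alpha)

# The traits are uncorrelated, yet the test rejects far more often than the
# nominal 5% level because the shared confound is ignored.
print(f"nominal alpha = {alpha:.2f}, observed rejection rate = {rejections / n_reps:.2f}")
```

Under these assumptions, the rejection rate is well above the nominal 5%, which is one way a "crud factor" pattern can emerge from confounded, approximate measures even when the substantive null hypothesis is true.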

About the Speaker

Paul De Boeck is a professor of quantitative psychology at The Ohio State University in Columbus, OH. Previously, he was affiliated with KU Leuven in Belgium and the University of Amsterdam in the Netherlands. He was trained in the mathematical psychology program at KU Leuven directed by Luc Delbeke, with a master's thesis on psychophysics and a doctoral dissertation on multidimensional scaling of personality test items and the processes underlying the item responses. Before he was appointed professor of psychological assessment at KU Leuven, his research focused on classification and on quantitative methods for idiographic approaches. He was the first editor of the Applied Research and Case Studies section of Psychometrika, president of the Psychometric Society in 2008, and co-chair of the 2014 NCME meeting. His major psychometric research interest is the understanding of test data. This explains his focus on explanatory measurement and his interest in model violations, such as differential item functioning and local dependencies, as well as in item response tree models and in the relationship between response speed and response accuracy in cognitive ability tests. He also published a review article in Psychological Bulletin on how to understand and deal with the replication crisis in psychology.