Don Rubin, Department of Statistics, Harvard University, MA, USA
Don Rubin is John L. Loeb Professor of Statistics, Harvard University, where he has been professor since 1983, and Department Chair for 13 of those years. He has been elected a Fellow/Member/Honorary Member of: the Woodrow Wilson Society, John Simon Guggenheim Memorial Foundation, Alexander von Humboldt Foundation, American Statistical Association, Institute of Mathematical Statistics, International Statistical Institute, American Association for the Advancement of Science, American Academy of Arts and Sciences, European Association of Methodology, British Academy, and the U.S. National Academy of Sciences. He has authored or coauthored over 400 publications (including ten books), holds four joint patents, and has made important contributions to statistical theory and methodology, particularly in causal inference, design and analysis of experiments and sample surveys, treatment of missing data, and Bayesian data analysis. Among his other awards and honors, Professor Rubin has received the Samuel S. Wilks Medal of the American Statistical Association, the Parzen Prize for Statistical Innovation, and the Fisher Lectureship and the George W. Snedecor Award of the Committee of Presidents of Statistical Societies. He was named Statistician of the Year by the Boston and Chicago Chapters of the American Statistical Association. He has served on the editorial boards of many journals, including the Journal of Educational Statistics, Journal of the American Statistical Association, Biometrika, Survey Methodology, and Statistica Sinica. Professor Rubin has for many years been one of the most highly cited authors in mathematics in the world (ISI Science Watch), as well as in economics (Highly Cited Economists) and other social sciences, such as psychology, with over 200,000 total citations (according to Google Scholar). For many decades he has given keynote lectures and short courses in the Americas, Europe, and Asia.
He has also received honorary doctorate degrees from Otto Friedrich University, Bamberg, Germany; the University of Ljubljana, Ljubljana, Slovenia; and Santo Tomás University, Bogotá, Colombia. He also holds several honorary professorships.
Disentangling Active Treatment Effects from Placebo Effects in Randomized Double-blind Trials
When approving pharmaceutical drugs (for example by the US FDA or the EU EMA) for general sale, it is common to rely on double-blind randomized trials, where the active drug is compared to an inactive placebo. The logic is that the seller of the drug should not get the drug approved and make money selling it if the drug has no effect beyond that of an inactive placebo. This position is predicated on an ethical argument, suggesting that even if the originator of the drug is the first to market some clever idea that capitalizes on a placebo effect, the originator should not profit from that idea. Despite this position, once a drug is approved and patients get their doctors’ prescriptions filled for this new drug (which patients have been told has been shown to work in patients with similar conditions), some patients taking the drug will experience the actual treatment effect of the active drug as well as the placebo effect. Disentangling these effects, the medical treatment effect from the psychological effect of being told the drug should help them, is a difficult but important practical problem, similar in some ways to the problem of estimating the causal effect of an active drug when there is non-compliance with assignment to take or not to take the drug. There is now a substantial literature on the problem of non-compliance, with a substantial thread following Angrist, Imbens & Rubin (1996, Journal of the American Statistical Association), which showed how the econometric idea of “instrumental variables (IV)” could be applied to address the non-compliance problem, using the “Rubin Causal Model” to formulate causal inference problems in terms of “potential outcomes”. A generalization of that approach, “Principal Stratification” (Frangakis & Rubin, 2002, Biometrics), can be used to disentangle treatment effects from placebo effects under explicitly stated assumptions.
Some basic statistical theory will be presented and then applied to some real data where placebo effects are expected to be substantial, at least for some subset of patients.
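The instrumental-variables logic for the non-compliance problem mentioned above can be illustrated with a minimal simulation (the numbers and the no-defiers/no-always-takers setup are illustrative assumptions for this sketch, not from the talk): randomized assignment serves as the instrument, and the ratio of intention-to-treat effects recovers the complier average causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Principal strata: compliers take the drug only if assigned to it;
# the rest never take it (no defiers and no always-takers assumed).
complier = rng.random(n) < 0.6
z = rng.integers(0, 2, n)        # randomized assignment (the instrument)
d = z * complier                 # actual drug intake

# Illustrative outcome: taking the active drug adds +2.0 on average.
y = 1.0 + 2.0 * d + rng.normal(0, 1, n)

itt_y = y[z == 1].mean() - y[z == 0].mean()   # assignment effect on outcome
itt_d = d[z == 1].mean() - d[z == 0].mean()   # assignment effect on intake
cace = itt_y / itt_d                          # complier average causal effect
print(round(cace, 2))   # close to the true complier effect of 2.0
```

The ratio estimator works because assignment affects the outcome only through actual intake, so the diluted intention-to-treat effect is rescaled by the compliance rate.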
Peter Bühlmann, ETH Zürich, Switzerland
Peter Bühlmann is Professor of Mathematics and Statistics, and currently Chair of the Department of Mathematics, at ETH Zurich. He studied mathematics at ETH Zurich and received his doctoral degree in 1993 from the same institution. After spending three years at UC Berkeley (1994-1997), he returned to ETH Zurich. His main research interests are in high-dimensional and computational statistics, machine learning, causal inference, and applications in the biomedical field.
Heterogeneous large-scale data: new opportunities for causal inference and prediction
Perhaps unexpectedly, heterogeneity in potentially large-scale or "big" data can be beneficially exploited for causal inference and more robust prediction. The key idea relies on invariance and stability across different heterogeneous regimes or groups. The resulting new procedures make use of regression analysis as a main tool and offer (possibly conservative) confidence guarantees. We will discuss the novel methodology as well as applications in biology and economics.
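The invariance idea can be sketched with a small simulation (the data-generating model and numbers below are invented for illustration, not from the talk): a regression that conditions only on the causal parent of the response has a coefficient that is stable across heterogeneous environments, while a regression that also includes a variable downstream of the response does not.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n, shift):
    # x1 -> y is causal with a coefficient of 1.5 that is identical in
    # every environment; x2 is a descendant of y whose strength varies.
    x1 = rng.normal(size=n)
    y = 1.5 * x1 + rng.normal(size=n)
    x2 = shift * y + rng.normal(size=n)
    return x1, x2, y

for env, shift in [("A", 0.5), ("B", 2.0)]:
    x1, x2, y = simulate(100_000, shift)
    causal = np.polyfit(x1, y, 1)[0]              # regress y on x1 only
    X = np.column_stack([np.ones_like(y), x1, x2])
    full = np.linalg.lstsq(X, y, rcond=None)[0]   # regress y on x1 and x2
    # The coefficient on x1 alone is invariant (about 1.5 in both
    # environments); the coefficient in the full regression is not.
    print(f"env {env}: x1 alone = {causal:.2f}, x1 with x2 = {full[1]:.2f}")
```

Searching for sets of predictors whose regression is invariant across regimes is what allows heterogeneity to be exploited rather than averaged away.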
Ulf Böckenholt, Kellogg School of Management, Northwestern University, IL, USA
Ulf Böckenholt is the John D. Gray Professor of Marketing at the Kellogg School of Management. He has published widely in marketing, psychology, economics, and statistics journals. His main research interest is in the development and application of statistical and psychometric methods for understanding consumer behavior and improving marketing decision-making. Areas of recent research include the modeling of response biases in self-reports, consumer choice processes for medical and financial decisions, affective and motivational influences in judgment and choice, (mis)trust-induced information-processing mechanisms, and the effects of imagery on persuasion processes. Currently, Ulf Böckenholt serves as Associate Editor of Psychometrika, Behaviormetrika, and the Journal of Educational and Behavioral Statistics, as well as on the Editorial Review Board of the Journal of Marketing Research. He is a Past Editor of Psychometrika, a Past President of the Psychometric Society, and a Fellow of the Association for Psychological Science. He received his PhD from the University of Chicago.
Item-response models for the analysis of self-reports
Many policy decisions rely on the information extracted from self-reports obtained in surveys or non-cognitive assessment tests. The importance of the self-report tool in the social sciences explains why, ever since it emerged, researchers and practitioners have advanced theories about the psychological processes involved in responding to questionnaire items. However, there is a substantial gap between cognitive theories explaining the response process and the psychometric models currently used to analyze self-report data. This talk highlights this gap by focusing on the modeling of such well-established phenomena as response styles, item-reversal effects, halo effects, and self-enhancing/-protective responding. I present item-response models that build on cognitive theories and shed light on the adaptive and goal-directed nature of the item-response process. For both estimation and validation purposes, I also discuss the use of supplemental information in the form of eye-tracking and response-time data. From a modeling perspective, the presented item-response models join a growing literature on psychometric approaches that, by explicitly accounting for different types of response processes, go beyond the classic view of random sources of measurement error.
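One well-known modeling strategy of this kind decomposes a Likert response into a tree of binary sub-processes (a midpoint node, a direction node, and an extremity node), so that response styles and content traits are separated. The sketch below is a generic illustration under assumed node difficulties and trait values, not necessarily the models of the talk.

```python
import math

def response_prob(cat, midpoint, direction, extremity):
    """Probability of a 5-point Likert response under a simple tree
    decomposition: a midpoint node, then direction, then extremity.
    Each node is a Rasch-type binary process with its own trait value."""
    def p(trait, b):                       # probability of the "1" branch
        return 1.0 / (1.0 + math.exp(-(trait - b)))

    b_mid = b_dir = b_ext = 0.0            # node difficulties (illustrative)
    if cat == 3:                           # midpoint category chosen
        return p(midpoint, b_mid)
    q = 1.0 - p(midpoint, b_mid)           # not the midpoint
    agree = p(direction, b_dir)            # direction driven by content trait
    d = agree if cat in (4, 5) else 1.0 - agree
    ext = p(extremity, b_ext)              # extreme-response-style trait
    e = ext if cat in (1, 5) else 1.0 - ext
    return q * d * e

# The probabilities over the five categories sum to one:
probs = [response_prob(c, midpoint=-1.0, direction=0.8, extremity=-0.5)
         for c in (1, 2, 3, 4, 5)]
print([round(pr, 3) for pr in probs], round(sum(probs), 6))
```

Because each node has its own latent trait, a midpoint or extremity preference can be estimated separately from the substantive agreement process.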
Willem Heiser, Faculty of Social and Behavioral Sciences and Mathematical Institute of the Faculty of Science, Leiden University, the Netherlands
Career Award for Lifetime Achievement
Willem J. Heiser (Rotterdam, 1949) is currently Professor of Data Theory at the Mathematical Institute of Leiden University, the Netherlands. He studied psychology and completed his PhD thesis, entitled “Unfolding Analysis of Proximity Data”, in 1981, also at Leiden University. He then spent a post-doc year at Bell Telephone Labs in Murray Hill, NJ, at the invitation of J. Douglas Carroll. After his return to Leiden for the rest of his career, his research focused on the analysis of multivariate categorical data using multidimensional scaling and classification techniques. As a member of the Gifi group, he contributed software to the Categories package for nonlinear multivariate data analysis, distributed by IBM-SPSS®. His academic responsibilities in Leiden included serving as vice-dean of research, head of the Department of Psychology, and member of the university integrity committee. He has held visiting professorships at universities in Rennes (France), Granada (Spain), Exeter (United Kingdom), and Naples (Italy), and was Scientific Director of the Dutch-Belgian Interuniversity Graduate School for Psychometrics and Sociometrics (IOPS). He was President of the Psychometric Society in 2003-2004, Editor of Psychometrika from 1995 to 1999, and Editor of the Journal of Classification from 2003 to 2015.
Early Psychometric Contributions to Gaussian Graphical Modelling: A Tribute to Louis Guttman
Graphical models and network analysis are increasingly being used in several areas of psychology and cognitive science, such as neuroimaging, psychopathology, and cognitive development. These models have also been brought to the attention of psychometricians: a recent Psychometrika paper speaks of network psychometrics. Present-day authors using Gaussian graphical modelling typically trace the approach, through chains of citations, back to Arthur Dempster’s 1972 Biometrics paper. Closer scrutiny shows that the key ingredient leading to Gaussian graphical models is the inverse of the correlation matrix, including its statistical interpretation in terms of the partial association between two variables given all others. It was exactly this concept that was central in Louis Guttman’s 1953 Psychometrika paper on image theory, once quite influential in psychometrics. Several elements of this story might still be relevant for us today.
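The key ingredient named above is easy to demonstrate numerically: standardizing the inverse (precision) of a correlation matrix yields the partial correlations, and a zero off-diagonal entry corresponds to conditional independence in the Gaussian graphical model. The 3-variable correlation matrix below is an assumed example.

```python
import numpy as np

# An assumed 3-variable correlation matrix; note r13 = r12 * r23,
# so variables 1 and 3 should be conditionally independent given 2.
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

P = np.linalg.inv(R)          # the precision (inverse correlation) matrix

# Partial correlation of i and j given all others:
#   r_ij.rest = -P[i, j] / sqrt(P[i, i] * P[j, j])
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)
np.fill_diagonal(partial, 1.0)
print(partial.round(3))       # entry (1, 3) is exactly zero
```

This standardized inverse is precisely the quantity whose graph of nonzero entries defines a Gaussian graphical model.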
Chun Wang, Department of Psychology, University of Minnesota, USA
Early Career Award
Chun Wang received her PhD in Quantitative Psychology from the University of Illinois at Urbana-Champaign in 2012. Upon graduation, she joined the University of Minnesota as an assistant professor of Quantitative/Psychometric Methods in the Department of Psychology. Her research is broadly situated in the field of psychological and educational measurement, with specific devotion to methodological advances that lead to better assessment with higher reliability/fidelity, fairness, and security. Her passion is the improvement of methods for measuring a wide range of psychological and educational variables, as well as developing, refining, and extending methods for analyzing multivariate data that are widely used in the behavioral sciences. The first thrust of her work is centered on resolving challenges that have emerged from the wide-ranging implementation of computerized adaptive testing (CAT) built on modern item response theory. The second line of her research agenda has focused on developing models and methods to better understand nonlinear relationships among observed and latent variables using state-of-the-art latent variable methods, including multidimensional and/or multilevel item response theory models, cognitive diagnostic models, semi-parametric models, and mixture models. She has received several scholarly awards, including the NCME Alicia Cascallar Award (2013), the NCME Jason Millman Promising Measurement Scholar Award (2014), the IACAT Early Career Researcher Award (2014), and the AERA Division D Early Career Award (2015).
Methods for Resolving Measurement Error Challenges in a Two-Stage Approach
A rapidly developing outcome-based culture among policymakers in education recognizes the need to use standardized test scores, such as item response theory (IRT) scaled θ scores, along with other outcome measures, to make high-stakes decisions. However, there are potential errors in estimating the latent θ scores (and other outcome measures), and ignoring these measurement errors will bias the subsequent statistical inferences. With today’s growing computational power, a recommended approach to the measurement error challenge is to use an integrated multilevel IRT model. Despite the statistical appeal of this one-stage approach, we argue that a “divide-and-conquer” two-stage approach has practical advantages. In the two-stage approach, an appropriate measurement model is first fitted to the data, and the resulting θ scores (or their distributions) are used in subsequent analyses. Three different methods are introduced within the two-stage framework that actively take the measurement error into consideration. They are compared with the integrated modeling approach via simulation studies. Results show that little precision is lost when the new methods are used. Practical guidelines, future studies, and potential challenges are discussed at the end. (This work is in collaboration with Dr. Gongjun Xu from the University of Michigan and graduate student Xue Zhang.)
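A minimal simulation of the underlying problem (a generic attenuation-and-correction example with invented numbers; not one of the three methods of the talk): using error-prone estimated θ scores as a predictor in a secondary regression attenuates the slope by the score reliability, a bias that a two-stage analysis must actively correct.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

theta = rng.normal(0, 1, n)                              # true latent scores
err_var = 0.25                                           # measurement error variance
theta_hat = theta + rng.normal(0, np.sqrt(err_var), n)   # estimated scores
y = 0.8 * theta + rng.normal(0, 1, n)                    # secondary outcome

naive = np.polyfit(theta_hat, y, 1)[0]   # stage 2 ignoring measurement error
reliability = 1.0 / (1.0 + err_var)      # Var(theta) / Var(theta_hat)
corrected = naive / reliability          # disattenuated slope
print(round(naive, 2), round(corrected, 2))   # ~0.64 vs ~0.80
```

The naive slope is biased toward zero by exactly the reliability factor, which is why two-stage methods that carry the error variance forward can match the integrated approach.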
Maria Bolsinova, Utrecht University, Utrecht and Cito, Arnhem, the Netherlands
Maria Bolsinova studied psychology at Moscow State University, Russia, and methodology and statistics at Leiden University, the Netherlands. In 2011, Maria started working as a PhD candidate at the Department of Methodology and Statistics of Utrecht University and at the Psychometric Research Centre at Cito, the Dutch National Institute for Educational Measurement, under the supervision of professors Gunter Maris and Herbert Hoijtink. Alongside her dissertation research, she worked as a test expert on a variety of projects at Cito, provided statistical consultation to students and researchers, and taught undergraduate, graduate, and post-graduate courses in statistics at Utrecht University. In 2014, she went on a research visit to Ohio State University, where she worked on models for response time and accuracy together with Prof. Paul De Boeck. She defended her dissertation cum laude in May 2016. At present Maria is a post-doctoral researcher in Psychological Methods at the Department of Psychology of the University of Amsterdam, working on item response theory, response time modelling, and Bayesian statistics.
Making the most out of response times: Moving beyond traditional assumptions
With the increasing popularity of computerised testing, in many applications of educational and cognitive testing not only the accuracy of the response is recorded, but the response time as well. This additional information provides a more complex picture of the response processes. The two most important reasons to consider response times are: 1) to increase the precision of ability estimation by using response times as collateral information; 2) to gain further insight into the underlying response processes. The hierarchical modelling framework for response times and accuracy (van der Linden, 2007), which has arguably become the standard approach to jointly modelling response times and accuracy in educational measurement, provides a clear structure for studying both, but is based on a set of assumptions which may not match the complex picture that arises when realistic response processes are considered. In this presentation, I will consider models that move beyond the simple-structure assumption and the assumption of conditional independence between response times and accuracy, and consider their added value from a statistical and substantive point of view.
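The lognormal response-time component of van der Linden's hierarchical framework can be sketched as follows (a simplified version with a common time-discrimination parameter and known item parameters; all numbers are illustrative): log response times are normal with mean equal to the item's time intensity minus the person's speed.

```python
import numpy as np

rng = np.random.default_rng(3)
n_items, n_persons = 40, 1000

beta = rng.normal(3.5, 0.3, n_items)   # item time intensities
alpha = 2.0                            # common time discrimination (simplified)
tau = rng.normal(0, 0.3, n_persons)    # person speed parameters

# Lognormal model: log T_ij ~ N(beta_i - tau_j, 1 / alpha^2)
log_t = beta[:, None] - tau[None, :] + rng.normal(0, 1 / alpha,
                                                  (n_items, n_persons))

# With item parameters known, a simple estimate of each person's speed
# is the average residual over items:
tau_hat = (beta[:, None] - log_t).mean(axis=0)
r = np.corrcoef(tau, tau_hat)[0, 1]
print(round(r, 2))   # speed is recovered well from 40 items
```

In the full hierarchical framework this time model is coupled with an IRT model for accuracy through correlated person and item parameters, which is where the simple-structure and conditional-independence assumptions enter.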
Anders Skrondal, Norwegian Institute of Public Health & University of Oslo, Norway and UC Berkeley, CA, USA
Anders Skrondal is a Senior Scientist at the Norwegian Institute of Public Health and Adjunct Professor at the University of Oslo and the University of California, Berkeley. He was previously a Professor of Statistics and Director of the Methodology Institute at the London School of Economics. Skrondal's research interests are in generalized latent variable modeling, which unifies item response, latent class, structural equation, multilevel, and longitudinal modeling. He has published many books, including Generalized Latent Variable Modelling, Chapman & Hall, 2004 (with Rabe-Hesketh), translated into Chinese in 2010, and The Cambridge Dictionary of Statistics, Cambridge University Press, 2010 (with Everitt). He has published papers in more than 50 different peer-reviewed journals, such as Psychometrika, Journal of Educational and Behavioral Statistics, Journal of Econometrics, Biometrika, Biometrics, and Journal of the Royal Statistical Society. Skrondal served as co-editor of Statistical Methods in Medical Research, is an elected member of the International Statistical Institute, and a former elected member of the Research Section Committee of the Royal Statistical Society. He was awarded the 1997 Psychometric Society Dissertation Prize.
The role of conditional likelihoods in latent variable modeling
When applicable, constructing a conditional likelihood is one way of handling incidental parameters (whose number increases in tandem with the number of observations) in statistical models. In measurement, the use of conditional likelihoods has a long history associated with the Rasch model. In this talk I will argue that conditional likelihoods (and their approximations) may have an even more important role to play in more general latent variable models. In particular, such “fixed-effects” approaches can allow protective estimation under common challenges such as (1) unobserved confounding, (2) heteroscedasticity, (3) endogenous sampling, (4) cluster-size dependence, and (5) missing data. I will also discuss some limitations of conditional likelihood estimation.
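For the Rasch model, the conditional likelihood is classically computed with elementary symmetric functions; the sketch below (with assumed item difficulties) shows that, conditional on the raw score, the probability of a response pattern no longer involves the person parameter.

```python
import numpy as np

def elem_sym(eps):
    """Elementary symmetric functions gamma_0..gamma_k of eps, via the
    standard recursion used in conditional Rasch estimation."""
    gamma = np.zeros(len(eps) + 1)
    gamma[0] = 1.0
    for e in eps:
        gamma[1:] = gamma[1:] + e * gamma[:-1]
    return gamma

b = np.array([-1.0, 0.0, 1.0])   # assumed item difficulties
eps = np.exp(-b)                 # item "easiness" parameters
gamma = elem_sym(eps)

x = np.array([1, 1, 0])          # a response pattern with raw score 2
r = x.sum()
# Conditional probability of this pattern given the raw score r:
# the person parameter theta has cancelled, leaving item parameters only.
p_cond = np.prod(eps ** x) / gamma[r]
print(round(p_cond, 3))          # ≈ 0.665, free of theta
```

Because the raw score is sufficient for the person parameter in the Rasch model, conditioning on it removes the incidental parameters entirely, which is the property the talk proposes to exploit more broadly.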