Roderick Little, Department of Biostatistics, University of Michigan, MI, USA
Roderick Little is a professor of biostatistics at the University of Michigan.
Some recent developments
in the analysis of data with missing values
Missing data are a common problem in psychometric research. Methods for handling this problem are briefly reviewed, including (a) pros and cons of different forms of likelihood inference, specifically maximum likelihood, Bayes and multiple imputation; (b) penalized spline of propensity models for robust estimation under the missing at random assumption, and comparisons with other doubly-robust approaches; and (c) subsample ignorable likelihood methods for regression with missing values of covariates. I will also discuss two aspects of a recent National Research Council study on the treatment of missing data in clinical trials, namely how missing data affect the choice of estimand, and sensitivity analysis for assessing departures from the assumptions of the primary analysis.
Jeroen Vermunt, Department of Methodology and Statistics, Tilburg University, The Netherlands
Jeroen K. Vermunt received his PhD degree in social sciences research methods from Tilburg University in the Netherlands in 1996. He is currently a full professor in the Department of Methodology and Statistics at Tilburg University, where he has been on the faculty since 1992. In 2005, he received the Leo Goodman early career award from the methodology section of the American Sociological Association. His research interests include latent class and finite mixture models, IRT modeling, longitudinal and event history data analysis, multilevel analysis, and generalized latent variable modeling. He is the co-developer (with Jay Magidson) of the Latent GOLD software package.
Simplifying the use of latent class analysis
by means of stepwise modeling approaches
I will give an overview of recent and ongoing work by my group on various types of stepwise modeling approaches for latent class (LC) analysis. Most of this work is part of a large research project funded by the Netherlands Science Foundation. It includes research on three very promising approaches:
1. The use of measures similar to modification indices for model fit assessment and model adjustment in simple and complex latent class analysis;
2. Bias-adjusted three-step latent class analysis for studying the relationship between class membership and external variables;
3. Divisive latent class analysis for the construction of latent class trees, yielding an approach similar to hierarchical cluster analysis.
Xiao-Li Meng, Department of Statistics, Harvard University, MA, USA
Xiao-Li Meng, Dean of the Harvard University Graduate School of Arts and Sciences (GSAS), Whipple V. N. Jones Professor and former chair of Statistics at Harvard, is well known for his depth and breadth in research, his innovation and passion in pedagogy, and his vision and effectiveness in administration, as well as for his engaging and entertaining style as a speaker and writer. Meng has received numerous awards and honors for the more than 150 publications he has authored in at least a dozen theoretical and methodological areas, as well as in areas of pedagogy and professional development; he has delivered more than 400 research presentations and public speeches on these topics, and he is the author of “The XL-Files,” a regularly appearing column in the IMS (Institute of Mathematical Statistics) Bulletin. His interests range from the theoretical foundations of statistical inference (e.g., the interplay among Bayesian, frequentist, and fiducial perspectives; quantifying ignorance via invariance principles; multi-phase and multi-resolution inferences) to statistical methods and computation (e.g., posterior predictive p-values; the EM algorithm; Markov chain Monte Carlo; bridge and path sampling) to applications in the natural, social, and medical sciences and engineering (e.g., complex statistical modeling in astronomy and astrophysics, assessing disparity in mental health services, and quantifying statistical information in genetic studies). Meng received his BS in mathematics from Fudan University in 1982 and his PhD in statistics from Harvard in 1990. He was on the faculty of the University of Chicago from 1991 to 2001 before returning to Harvard as Professor of Statistics, where he was appointed department chair in 2004 and the Whipple V. N. Jones Professor in 2007. He was appointed GSAS Dean on August 15, 2012.
Bayesian, Fiducial, and Frequentist (BFF): Best Friends Forever?
Among the paradigms of statistical inference, the Bayesian and Frequentist approaches are the most popular, with the Fiducial approach being the most controversial. However, there is essentially only one scientifically acceptable way of evaluating any inference method: show me how it performs across replications. And hence the great debate in statistics: which replications best help us predict real-world uncertainty? This unified mode of evaluation provides a prism to reveal the whole spectrum of probabilistic inference foundations. In the familiar Data-Model space, the standard Frequentist’s replications fix the Model at the unknown “true” model and let the Data replicate, whereas the Bayesian goes to the other extreme by fixing the Data at the observed values and letting the Model vary. The Frequentist thus pays the price of relevance: a method that works on average may not be relevant for the data at hand. In contrast, the Bayesian pays the price of robustness: results are sensitive to prior assumptions about how the Model varies. The Fiducial approach represents one of many possible compromises one can obtain by sliding a ruler along the relevance-robustness spectrum, but it suffers from an incoherent treatment of the Data. Realizing that the differences in inference amount to different choices of replications, and that there is no one size that fits all, the Bayesian, Fiducial, and Frequentist approaches can all thrive under one roof as BFFs (Best Friends Forever) --- only united can we combat the Big Data tsunami. (This talk is based on Liu, K. and Meng, X.-L. (2016). “There is individualized treatment. Why not individualized inference?” Annual Review of Statistics and Its Application, to appear. Available from email@example.com.)
Wim J. van der Linden, Pacific Metrics Corporation, Monterey, CA, USA
2016 Psychometric Society Career Award
Dr. van der Linden is Distinguished Scientist and Director of Research Innovation, Pacific Metrics Corporation, Monterey, CA, and Professor Emeritus of Measurement and Data Analysis, University of Twente. He received his PhD in psychometrics from the University of Amsterdam. His research interests include item response theory, adaptive testing, optimal test assembly, parameter linking, statistical detection of cheating, and response-time modeling. He is the author of Linear Models for Optimal Test Design (Springer, 2005) and the editor of a new three-volume Handbook of Item Response Theory: Models, Statistical Tools, and Applications (Chapman & Hall/CRC, 2015). He is also a co-editor of Computerized Adaptive Testing: Theory and Applications (Kluwer, 2000; with C. A. W. Glas) and its sequel Elements of Adaptive Testing (Springer, 2010; with C. A. W. Glas). Dr. van der Linden has served on the editorial boards of nearly all major test-theory journals and is co-editor of the Chapman & Hall/CRC Series on Statistics for Social and Behavioral Sciences. He is also a former President of the National Council on Measurement in Education (NCME) and the Psychometric Society, a Fellow of the Center for Advanced Study in the Behavioral Sciences, Stanford, CA, was awarded an Honorary Doctorate from Umeå University in Sweden in 2008, and is a recipient of the ATP and NCME Career Achievement Awards for his work on educational measurement.
Big Data, Small Steps
David Magis, Department of Education, University of Liège, Belgium
2016 Psychometric Society Early Career Award
David Magis is Research Associate of the “Fonds de la Recherche Scientifique – FNRS” at the Department of Education, University of Liège, Belgium. His specialization is statistical methods in psychometrics, with special interest in item response theory, differential item functioning, and computerized adaptive testing. His research interests include both theoretical and methodological development as well as open-source implementation and dissemination with the statistical software R. He is currently an associate editor of the British Journal of Mathematical and Statistical Psychology and has published numerous research papers in various psychometric journals. He is the main developer and maintainer of the packages catR and mstR, among others.
Open Source Programming: A New Hope for Psychometric Research
Current psychometric research is most often supported by computer software. New research perspectives often imply intensive simulation studies to validate the tested theories or hypotheses, and therefore require accurate, fast, and stable implementation. In this regard, open-source programming (such as in the R language) is a promising approach, allowing for flexible implementation, data generation, replication of studies, and worldwide dissemination. The purpose of this talk is to illustrate, by means of selected examples, how psychometrics and open-source programming (with special emphasis on the R language) can interact and contribute to each other. Several topics will be illustrated, among others: why open-source programming is (in my opinion) as important as psychometric research itself; why we need stable and complete implementations of psychometric and statistical routines for research purposes (e.g., CAT); how accurate implementation of IRT routines can lead to unexpected theoretical results; and why (and how) open-source software can be valued as research output. Most examples will arise from the CAT framework and the R package catR for simulating CAT patterns.