IMPS 2018 Keynote Speakers

Robert Mislevy, Educational Testing Service

Robert J. Mislevy is the Frederic M. Lord Chair in Measurement and Statistics at ETS and Emeritus Professor at the University of Maryland. His research applies developments in technology, statistics, and cognitive science to practical problems in assessment. His work includes an evidence-centered framework for assessment design, simulation-based assessment of network engineering with Cisco Systems, and a multiple-imputation approach for modeling the distributions of latent variables in educational surveys. His publications include Sociocognitive Foundations of Educational Measurement, Bayesian Psychometric Modeling, Bayesian Networks in Educational Assessment, Psychometric Considerations for Game-Based Assessment, and the chapter on cognitive psychology and educational assessment in Educational Measurement (4th Edition). He has received career contributions awards from AERA and NCME, as well as the NCME Award for Technical Contributions to Measurement (three times), served as president of the Psychometric Society, and is a member of the National Academy of Education.

An Urgent Assessment Question and a Proposed Answer, with an Eye toward Bayesian Inference

Suppose we hold a situative, sociocognitive perspective on the nature of human capabilities and how they develop. Suppose we also think that the practices, the concepts, and the modeling tools of educational measurement, which evolved under trait and behavioral perspectives, can nevertheless hold value for practical work in educational assessment. How might we conceive of educational measurement models such that the interpretations we hold, both of the models and of our practical uses of them, are consistent with the sociocognitive perspective?

I propose an articulation through an argument-structured, sociocognitively-framed, constructivist-realist, subjectivist-Bayesian variant of latent-variable measurement modeling. The presentation will parse this admittedly unwieldy phrase. I will call particular attention to the role of measurement models as Bayesian exchangeability structures, and note implications for the interpretation of latent variables, probabilities, and validity.
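To unpack one element of that phrase: an exchangeability structure, in de Finetti's sense, is the standard Bayesian warrant for latent-variable models. As a reference point (standard theory, not a claim about the talk's specific argument), with exchangeable responses x_1, ..., x_n, a latent variable θ, and mixing distribution G:

    P(x_1, \dots, x_n) = \int \prod_{i=1}^{n} P(x_i \mid \theta) \, dG(\theta)

On this reading, θ enters as a device induced by the analyst's exchangeability judgments rather than as a directly observed trait, which is one way into the interpretive questions about latent variables and probabilities raised above.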

Andrew Gelman, Columbia University

Andrew Gelman is a professor of statistics and political science and director of the Applied Statistics Center at Columbia University. He has received the Outstanding Statistical Application award from the American Statistical Association, the award for best article published in the American Political Science Review, and the Council of Presidents of Statistical Societies award for outstanding contributions by a person under the age of 40. His books include Bayesian Data Analysis (with John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Don Rubin), Teaching Statistics: A Bag of Tricks (with Deb Nolan), Data Analysis Using Regression and Multilevel/Hierarchical Models (with Jennifer Hill), Red State, Blue State, Rich State, Poor State: Why Americans Vote the Way They Do (with David Park, Boris Shor, and Jeronimo Cortina), and A Quantitative Tour of the Social Sciences (co-edited with Jeronimo Cortina). Andrew has done research on a wide range of topics, including: why it is rational to vote; why campaign polls are so variable when elections are so predictable; why redistricting is good for democracy; reversals of death sentences; police stops in New York City; the statistical challenges of estimating small effects; the probability that your vote will be decisive; seats and votes in Congress; social network structure; arsenic in Bangladesh; radon in your basement; toxicology; medical imaging; and methods in surveys, experimental design, statistical inference, computation, and graphics.

The Statistical Crisis in Science and What We Can Do About It

There is a statistical crisis in science, particularly in psychology, where many celebrated findings have failed to replicate and where careful analysis has revealed that many celebrated research projects were fatally flawed in their design, in the sense of never having sufficiently accurate data to answer the questions they were attempting to resolve. The statistical methods that revolutionized science in the 1930s-1950s no longer seem to work in the 21st century. How can this be?

It turns out that when effects are small and highly variable, the classical approach of black-box inference from randomized experiments or observational studies no longer works as advertised. We discuss the conceptual barriers that have allowed researchers to avoid confronting these issues, which arise not just in psychology but also in policy research, public health, and other fields. To do better, we recommend three steps: (a) designing studies based on a perspective of realism rather than gambling or hope, (b) higher quality data collection, and (c) data analysis that combines multiple sources of information.
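Point (a) can be made concrete with a design analysis in the spirit of Gelman and Carlin's Type S (wrong sign) and Type M (exaggeration) error calculations. Below is a minimal Python simulation sketch; the true effect and standard error are hypothetical values chosen only to illustrate the small, highly variable regime:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical design: true effect of 2 units, standard error of 8 units
    true_effect, se, n_sims = 2.0, 8.0, 100_000

    # Simulate repeated studies: each estimate is the true effect plus noise
    estimates = rng.normal(true_effect, se, n_sims)
    significant = np.abs(estimates) > 1.96 * se  # "significant" at the 5% level

    power = significant.mean()
    type_s = (estimates[significant] < 0).mean()  # wrong sign, given significance
    type_m = np.abs(estimates[significant]).mean() / true_effect  # exaggeration ratio

    print(f"power={power:.2f}, Type S={type_s:.2f}, Type M={type_m:.1f}x")

In this regime the study is nearly uninformative: significance is rare, and when it occurs the estimate has a nontrivial chance of having the wrong sign and exaggerates the true effect severalfold, which is the gambling-or-hope failure mode described above.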

David Blei, Columbia University

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference with massive data. He works on a variety of applications, such as text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Presidential Early Career Award for Scientists and Engineers (2011), ACM-Infosys Foundation Award (2013), and a Guggenheim fellowship (2017). He is a fellow of the ACM and the IMS.

Large-scale probabilistic modeling

I describe Shopper, a sequential probabilistic model of market baskets. Shopper uses interpretable components to model the forces that drive how a customer chooses products; it is designed to capture how items interact with other items. I describe an efficient inference algorithm to estimate these forces from large-scale data, and report a study of over five million transactions from a major chain grocery store. We are interested in answering counterfactual queries about changes in prices. We found that Shopper provides accurate predictions even under price interventions, and that it helps identify complementary and substitutable pairs of products.

This is joint work with Fran Ruiz (Columbia) and Susan Athey (Stanford).
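For readers who want a feel for the model class: Shopper's core idea is that items have latent attributes and that an item's appeal depends on what is already in the basket. The following is a deliberately crude caricature in Python (random made-up embeddings, no prices and no customer preferences), not the actual Shopper specification:

    import numpy as np

    rng = np.random.default_rng(1)
    n_items, k = 50, 8                     # hypothetical catalog size and embedding dim
    alpha = rng.normal(size=(n_items, k))  # latent item attributes
    rho = rng.normal(size=(n_items, k))    # interaction loadings: response to basket contents
    lam = rng.normal(size=n_items)         # baseline item popularity

    def next_item_probs(basket):
        """Softmax over candidate items, given the items already in the basket."""
        basket_mean = alpha[basket].mean(axis=0)  # summary of the current basket
        scores = lam + rho @ basket_mean          # popularity plus interaction effects
        scores[basket] = -np.inf                  # an item cannot be added twice
        exps = np.exp(scores - scores[np.isfinite(scores)].max())
        return exps / exps.sum()

    probs = next_item_probs(basket=[3, 17])
    print(probs.argmax(), probs.max())

In the actual model the embeddings are estimated from transaction data and price enters the utility, which is what makes counterfactual price queries of the kind described above possible.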

Kenneth A. Bollen, University of North Carolina, Chapel Hill, USA
Career Award for Lifetime Achievement

Kenneth A. Bollen is the Henry Rudolph Immerwahr Distinguished Professor in the Department of Psychology and Neuroscience and Department of Sociology at the University of North Carolina at Chapel Hill. He is a faculty member in the Quantitative Psychology Program in the Thurstone Psychometric Laboratory. He also is chair of the Methods Core and a Fellow of the Carolina Population Center. In addition, he chairs the Advisory Committee for the Social, Behavioral, and Economic Sciences Directorate at the National Science Foundation (NSF). Since 1980 he has been an instructor in the ICPSR Summer Program in Quantitative Methods of Social Research at the University of Michigan. His recent research focuses on creating methodological tools applicable to modeling longitudinal and time series data, causal indicator measurement models, and estimators robust to misspecifications. Population and health outcomes provide the substantive contexts for his methodological work. Bollen received his B.A. from Drew University (1973) and an M.A. and Ph.D. from Brown University (1975, 1977). He was a Research Scientist at General Motors Research Labs (1977-1982) and an assistant professor at Dartmouth College (1982-1985) prior to joining UNC-Chapel Hill (1985). The NSF, NIH, USAID, and other organizations have funded his research.

Specify globally, estimate and test locally: A MIIV approach to structural equation models

Structural equation modelers typically specify, identify, estimate, and test the whole model rather than parts of it. This global orientation makes sense when we have no structural misspecifications and when the estimator's distributional assumptions are satisfied. Yet in the real world of approximate models and failures of distributional assumptions, some advantages of a global orientation become disadvantages. System-wide estimators (e.g., maximum likelihood) can spread bias from structural errors in one part of the system to other parts less plagued by problems. Underidentification can prevent estimation. Global fit measures complicate finding the local sources of errors. The Model-Implied Instrumental Variable, Two-Stage Least Squares (MIIV-2SLS) approach proposed in Bollen (1996) specifies models globally but allows identification, estimation, and testing at the equation level. This equation-wise focus has several advantages. First, MIIV-2SLS is more robust to structural misspecifications than system-wide estimators. Second, MIIV-2SLS is an asymptotically distribution-free estimator. Third, MIIV-2SLS has overidentification tests that apply to single equations, and these help pinpoint problems. Fourth, researchers need only estimate the equations of interest, not all equations. Fifth, MIIV-2SLS applies to any identified equation even if the model as a whole is underidentified. Sixth, MIIV-2SLS is noniterative, so convergence is not an issue. This presentation will provide an overview of MIIV-2SLS in SEMs and illustrate MIIVsem, an R package that implements it.
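To make the equation-level logic concrete, here is a minimal numpy sketch of 2SLS with a Sargan overidentification test for a single overidentified equation. It illustrates the generic estimator on simulated placeholder data, not the MIIVsem implementation or the model-implied selection of instruments:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n = 1000

    # Hypothetical equation y = 0.5*x + u, with x endogenous (it shares u) and
    # two valid instruments z1, z2: overidentified, one endogenous regressor
    z = rng.normal(size=(n, 2))
    u = rng.normal(size=n)
    x = z @ np.array([1.0, 0.8]) + u + rng.normal(size=n)
    y = 0.5 * x + u

    Z = np.column_stack([np.ones(n), z])  # instruments, with intercept
    X = np.column_stack([np.ones(n), x])  # regressors, with intercept

    # Stages 1 and 2: project X onto the instrument space, regress y on the projection
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]

    # Sargan test: residuals should be uncorrelated with every instrument
    e = y - X @ beta
    g = np.linalg.lstsq(Z, e, rcond=None)[0]
    r2 = 1 - np.sum((e - Z @ g) ** 2) / np.sum((e - e.mean()) ** 2)
    sargan, df = n * r2, Z.shape[1] - X.shape[1]
    print(f"beta_hat={beta[1]:.3f}, Sargan={sargan:.2f}, p={1 - stats.chi2.cdf(sargan, df):.3f}")

A large Sargan statistic flags misspecification in this equation alone, which is the local diagnostic advantage listed as the third point above.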

Cees Glas, University of Twente, The Netherlands
Presidential Address

Cees Glas is a full professor in the Department of Research Methodology, Measurement and Data Analysis of the Faculty of Behavioural Science at the University of Twente in the Netherlands. The focus of his work is on estimating and testing latent variable models in general and item response theory (IRT) models in particular, and on the application of IRT models in educational measurement and psychological testing. He has participated in numerous research projects, including projects of the Dutch Ministry of Education (OCW), the Dutch National Institute for Educational Measurement (Cito, the Netherlands), the Law School Admission Council (USA), and the OECD international educational survey PISA. He serves as a member of the technical advisory committees of the OECD projects PISA and PIAAC, and as president of the Psychometric Society in 2018. His published articles, book chapters, and supervised theses cover such topics as testing the fit of IRT models, Bayesian estimation of multidimensional and multilevel IRT models, modeling with non-ignorable missing data, concurrent modeling of item responses and response times, concurrent modeling of item responses and textual input, and the application of computerized adaptive testing in the context of health assessment and organisational psychology.

Psychometric tools for practical problems in educational measurement

Developments in the psychometric discipline are motivated both by theoretical developments in statistics and by substantive questions emerging in the social and behavioral sciences. I will discuss a number of substantive issues arising in educational measurement and the psychometric tools to tackle them. The first line of research started about thirty-five years ago, when I was confronted with the problem of maintaining the standards of examinations in the Netherlands; applied research that still keeps me occupied today. Solutions to the problem were found in IRT in combination with marginal maximum likelihood. I will discuss a concise and versatile framework for estimation and testing that is based on Fisher's identity and Lagrange multiplier tests. I will show that this framework applies to a very broad family of IRT models and leads to effortless generalizations and pain-free derivations. The second line of research started where the possibilities of the first ended, that is, with high-dimensional models such as multidimensional IRT and multilevel IRT. Solutions were found in the framework of Bayesian statistics combined with Markov chain Monte Carlo (MCMC) computational methods. Problems motivating this research emerged, for instance, in experiments where observers rated teacher behavior on several dimensions. Such response data can be modeled with a combination of IRT and generalizability theory. Computations can be made with general-purpose samplers or with dedicated samplers using data augmentation. Model parameters can be estimated either concurrently or in a two-step procedure, where the parameters of the measurement model are estimated first and then imputed into the more comprehensive model. The latter approach combines the strong elements of the two research lines.
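For reference, Fisher's identity, the basis of the estimation framework sketched above, states that the score of the marginal likelihood equals the posterior expectation of the complete-data score. With responses x, latent variable θ, and model parameters φ:

    \frac{\partial}{\partial \phi} \log P(x \mid \phi) = \mathbb{E}_{\theta \mid x, \phi}\left[ \frac{\partial}{\partial \phi} \log P(x, \theta \mid \phi) \right]

Only the complete-data score on the right changes from model to model, which is why the same machinery extends across such a broad family of IRT models.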

Jingchen Liu, Columbia University, New York, USA
Early Career Award

Jingchen Liu is an Associate Professor in the Department of Statistics at Columbia University. He holds a Ph.D. in Statistics from Harvard University. He is the recipient of the 2013 Tweedie New Researcher Award, given by the Institute of Mathematical Statistics, and of the 2009 Best Publication in Applied Probability Award, given by the INFORMS Applied Probability Society. His research interests include statistics, psychometrics, applied probability, and Monte Carlo methods. He currently serves on the editorial boards of Psychometrika, British Journal of Mathematical and Statistical Psychology, Journal of Applied Probability/Advances in Applied Probability, Extremes, Operations Research Letters, and STAT.

An Exploration of Latent Structure in Process Data

In classic tests, item responses are often expressed as univariate categorical variables. Computer-based tests allow us to track students' entire problem-solving processes through log files. In this case, the response to each item is a time-stamped process containing students' detailed behaviors and their interactions with the simulated environment. The key questions are whether, and how much, more information is contained in the detailed response processes beyond the traditional responses (yes/no or partial credit). We also need methods to systematically extract such information from process responses, which often contain a substantial amount of noise. In this talk, I present exploratory analyses of process data in PIAAC and PISA. The results confirm that process data contain a significant amount of additional information, and they also provide guidance on efficient item design for future studies.
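As a toy illustration of the data structure involved, here is how one item's process response might sit next to its traditional scored response, with a few simple extracted features; the action labels and features are invented purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class ProcessResponse:
        """One item from a log file: the scored outcome plus the action trace."""
        correct: bool
        events: list[tuple[float, str]]  # (timestamp in seconds, action label)

    def features(r: ProcessResponse) -> dict:
        """Simple summaries of the process beyond the yes/no response."""
        times = [t for t, _ in r.events]
        return {
            "correct": r.correct,
            "n_actions": len(r.events),
            "total_time": times[-1] - times[0] if times else 0.0,
            "n_resets": sum(1 for _, a in r.events if a == "reset"),
        }

    r = ProcessResponse(correct=True,
                        events=[(0.0, "start"), (4.2, "open_menu"), (9.8, "reset"),
                                (15.1, "enter_value"), (21.4, "submit")])
    print(features(r))

Real log files are far noisier than this, which is why systematic extraction methods of the kind described above are needed.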

Sacha Epskamp, University of Amsterdam, The Netherlands
Dissertation Prize

Sacha Epskamp is an assistant professor in the Department of Psychological Methods at the University of Amsterdam and a research fellow at the Institute for Advanced Study of the University of Amsterdam. In 2017, he completed his PhD on network psychometrics: estimating network models from psychological datasets and equating these to established psychometric modeling techniques. He has implemented these methods in several software packages now routinely used in diverse fields of psychological research. He teaches multivariate statistics and data science, and his research interests involve reproducibility, complexity, time-series modeling, and dynamical systems modeling. In addition to the Psychometric Society dissertation prize, he has received several awards for his research, including the Leamer-Rosenthal Prize for Open Science (2016).

Network Psychometrics: Current State and Future Directions

The novel field of network psychometrics focuses on the estimation of network models that aim to capture interactions between observed variables. In this presentation, I will introduce the field and its main recent advances, and I will discuss future directions and challenges it has yet to face. First, I will discuss the estimation of networks from datasets ranging from independent cases (e.g., cross-sectional data) to multiple time series. Second, I will discuss the formalization of network models as formal psychometric models, which allows them to be combined with the general frameworks of structural equation modeling and item response theory. I will discuss model equivalences between network and factor models, and generalizations of network models that encompass latent variable structures. Finally, I will discuss future directions in network psychometrics, such as the handling of missing data and ordinal data, network-based adaptive assessment, and the construction of network models from theoretical knowledge.
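To give a flavor of the estimation step for independent-case data: a common approach treats the network as a regularized Gaussian graphical model whose edges are partial correlations. Below is a minimal Python sketch using scikit-learn's graphical lasso on simulated placeholder data (in R, packages such as qgraph and bootnet serve this role):

    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(3)
    # Placeholder data: 500 cases on 5 correlated variables
    cov = np.eye(5) + 0.3  # variances 1.3, covariances 0.3 (positive definite)
    data = rng.multivariate_normal(mean=np.zeros(5), cov=cov, size=500)

    # Sparse inverse covariance with a cross-validated penalty
    model = GraphicalLassoCV().fit(data)
    precision = model.precision_

    # Network edge weights: partial correlations from the precision matrix
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)
    np.fill_diagonal(partial_corr, 0.0)
    print(np.round(partial_corr, 2))

The open challenges listed above, such as missing data and ordinal variables, arise in part because this Gaussian machinery does not apply to them directly.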