Invited Speakers

Edgar Merkle, Psychological Sciences, University of Missouri, Columbia, MO, USA

Ed Merkle is an Associate Professor in the Quantitative Psychology program at the University of Missouri. His psychometric research has focused on the topics of model comparison, measurement invariance, and Bayesian methods. His research also includes the modeling and evaluation of probabilistic forecasts, most recently in the context of a geopolitical forecasting tournament funded by IARPA. Much of the methodology developed in his research is freely available via R packages, including blavaan for Bayesian SEM (with Yves Rosseel), nonnest2 for non-nested model comparison (with Dongjun You), and scoring for families of proper scoring rules.

Bayesian SEM: Some computational advances and potential pitfalls

The literature on structural equation models has been slow to incorporate the Markov chain Monte Carlo advances of the 1980s and 1990s, compared with the literature on other classes of statistical models. While a variety of methods exist (most notably, those available in Mplus) for specific types of SEMs with specific types of prior distributions, it remains difficult to estimate many models using general (non-conjugate) prior distributions and to use modern Bayesian metrics. In this talk, I will discuss and illustrate approaches to Bayesian SEM estimation that harness open-source MCMC samplers (i.e., JAGS) and software (i.e., lavaan), making it relatively easy to specify, estimate, and extend Bayesian structural equation models. I will also discuss some difficulties associated with Bayesian SEM that have received little attention in the literature.
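
As a flavor of the workflow the talk describes, here is a minimal sketch of a Bayesian SEM fit with blavaan, which combines lavaan model syntax with a JAGS back end. The political-democracy example data ship with lavaan; the prior specification shown is an illustrative assumption, not one taken from the talk.

## Minimal blavaan sketch: lavaan syntax, JAGS sampler, non-conjugate prior
library(blavaan)  # loads lavaan as well

model <- '
  # measurement model
  ind60 =~ x1 + x2 + x3
  dem60 =~ y1 + y2 + y3 + y4
  # structural model
  dem60 ~ ind60
'

# dpriors() overrides the default priors; JAGS uses a precision
# parameterization, so dnorm(0, 1e-2) is a vague normal on the loadings
fit <- bsem(model, data = PoliticalDemocracy, target = "jags",
            dp = dpriors(lambda = "dnorm(0, 1e-2)"),
            burnin = 1000, sample = 2000)

summary(fit)  # posterior summaries plus convergence diagnostics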


Ellen Hamaker, Department of Methodology and Statistics, Utrecht University, The Netherlands

Ellen Hamaker is an Associate Professor in the Department of Methodology and Statistics, Faculty of Social Sciences, Utrecht University. Her research focuses on longitudinal data analysis and covers N=1 time series analysis, dynamic multilevel modeling, and longitudinal structural equation modeling. A recurrent theme in her work is the importance of separating within-person processes from between-person differences: merely studying between-person variation (using cross-sectional or panel data) provides little, if any, insight into the dynamics of processes that take place at the level of the individual.

At the frontiers of dynamic multilevel modeling

Due to technological developments (e.g., smartphones), there has been an enormous increase in studies based on daily diaries, ecological momentary assessments, ambulatory assessments, and experience sampling methods. The intensive longitudinal data resulting from these studies provide a unique opportunity to investigate the dynamics of psychological processes as they unfold over time. This has led to a new class of statistical models, which we may call dynamic multilevel models: time series models at level 1 capture the dynamics of the within-person process, while level 2 allows for between-person differences in the parameters of these processes. In this presentation I will give an introduction to this new area, considering applications of univariate and multivariate multilevel autoregressive models (which include dynamic networks), and discuss some of the specific challenges associated with this kind of statistical modeling.
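
To make the two-level structure concrete, here is a minimal sketch of a multilevel AR(1) model fit with lme4, assuming a long-format data frame dat with columns id, obs, and y (all hypothetical names). Fully Bayesian implementations are more common for these models, but the core idea is visible here.

## Multilevel AR(1) sketch: level-1 dynamics, level-2 person differences
library(dplyr)
library(lme4)

dat <- dat %>%
  group_by(id) %>%
  arrange(obs, .by_group = TRUE) %>%
  mutate(y_c   = y - mean(y, na.rm = TRUE),  # person-mean centering separates
         y_lag = lag(y_c)) %>%               # within- from between-person variance
  ungroup()

# Random intercepts and random autoregressive slopes let the within-person
# dynamics differ across individuals
fit <- lmer(y ~ y_lag + (1 + y_lag | id), data = dat)
summary(fit)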


Carolin Strobl, Department of Psychology, University of Zurich, Switzerland

Carolin Strobl is a professor of Psychological Methods, Evaluation and Statistics at the UZH Department of Psychology, where she teaches various introductory and advanced statistics classes. She is head of the statistical consulting unit at the Department of Psychology, organizer of the Zurich R Courses continuing-education program, and a core team member of the Psychometric Computing workshop series. Her research focuses on machine learning and psychometrics, as well as the combination of the two approaches. Together with her workgroup and collaborators she has contributed to several R packages, including psychotree for recursive partitioning of psychometric models.

Model-based recursive partitioning of psychometric models: A data-driven approach for detecting heterogeneity in model parameters

Model-based recursive partitioning is a flexible framework for detecting differences in model parameters between two or more groups of subjects. Its origins lie in machine learning, where its predecessor methods, classification and regression trees, were introduced in the 1980s as a nonparametric regression technique. Today, after the statistical flaws of the early algorithms have been overcome, their extension to detecting heterogeneity in parametric models makes recursive partitioning methods a valuable addition to the statistical “toolbox” in various areas of application, including econometrics and psychometrics. This talk gives an overview of the rationale and statistical background of model-based recursive partitioning in general, and of its extensions to psychometric models for paired comparisons and to item response models in particular. In this context, the data-driven approach of model-based recursive partitioning proves particularly well suited for detecting violations of homogeneity or invariance, such as differential item functioning, where we usually have no a priori hypotheses about the underlying group structure.
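
For a concrete impression, the sketch below uses the speaker's psychotree package and its bundled SPISA general-knowledge quiz data; the analogous bttree() function covers Bradley-Terry models for paired comparisons.

## Rasch tree: partition subjects by covariates wherever the item
## parameters differ significantly, i.e., data-driven DIF detection
library(psychotree)

data("SPISA", package = "psychotree")

rt <- raschtree(spisa ~ age + gender + semester + elite + spon,
                data = SPISA)

plot(rt)      # tree with item difficulty profiles in the terminal nodes
itempar(rt)   # estimated item parameters per terminal node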


Steven P. Reise, Department of Psychology, UCLA, CA, USA

Dr. Reise received his Ph.D. in Psychometrics from the Department of Psychology at the University of Minnesota in 1990. He is presently a full professor in Quantitative Psychology at UCLA. His research primarily focuses on the application of item response theory (IRT) models to personality, psychopathology, and health-outcomes measures. Current interests center on the role of the bifactor model in defining and assessing dimensionality in an IRT context, and on new approaches to modeling psychopathology constructs that cannot be assumed to be continuous and normally distributed. Dr. Reise has a long-standing interest in outlier detection (i.e., person-fit measurement) in IRT and, more recently, in structural equation modeling (SEM). He is currently working on applications of robust SEM estimation methods in which outliers are down-weighted during the estimation of model parameters. Dr. Reise also maintains an active line of research in psychiatry, where he is part of a consortium developing new methods of defining and measuring psychopathology and understanding its genetic/biological foundations.

Is the Bifactor Model a Better Model or is it Just Better at Modeling Implausible Responses? Application of Iteratively Reweighted Least Squares to the Rosenberg Self-Esteem Scale 

Although the structure of the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965) has been exhaustively evaluated, questions regarding dimensionality and direction-of-wording effects continue to be debated. To shed new light on these issues, we ask: (1) for what percentage of individuals is a unidimensional model adequate, (2) what additional percentage of individuals can be modeled with multidimensional specifications, and (3) what percentage of individuals respond so inconsistently that they cannot be well modeled? To estimate these percentages, we applied iteratively reweighted least squares (IRLS; Yuan & Bentler, 2000) to examine the structure of the RSES in a large, publicly available dataset. Two distance measures for determining case weights were used: the first reflecting the distance between a response pattern and an estimated model, and the second reflecting a distance based on individual residuals. We found the first measure to be superior to the second for producing a robust factor pattern, and the second to be more sensitive to, and diagnostic of, improvements in model adequacy as more complex models are fit. A bifactor model provided the best overall model fit, with one general factor and two wording-related group factors. But, based on the distance values, we concluded that approximately 86% of cases were adequately modeled by a unidimensional structure, and only an additional 3% required a bifactor model. Roughly 11% of cases were judged “unmodelable” due to significant residuals in all models considered. Finally, analysis of the case weights revealed that some, but not all, of the superior fit of the bifactor model is owed to that model’s ability to better accommodate implausible and possibly invalid response patterns, and not necessarily to its better accounting for the effects of direction of wording.
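
For readers unfamiliar with the estimator, the following is a schematic IRLS loop in the spirit of Yuan and Bentler (2000), not the authors' implementation: the Huber-type weighting scheme, cutoff, and RSES item names are illustrative assumptions.

## Schematic IRLS for a factor model: iteratively down-weight distant cases
library(lavaan)

irls_cfa <- function(data, model, max_iter = 50, tol = 1e-6) {
  w <- rep(1, nrow(data))                  # start with equal case weights
  k <- sqrt(qchisq(0.95, df = ncol(data))) # Huber-type tuning constant
  for (i in seq_len(max_iter)) {
    wc <- cov.wt(data, wt = w)             # weighted mean and covariance
    d  <- sqrt(mahalanobis(data, wc$center, wc$cov))  # case distances
    w_new <- pmin(1, k / d)                # down-weight distant (implausible) cases
    if (max(abs(w_new - w)) < tol) break
    w <- w_new
  }
  wc  <- cov.wt(data, wt = w)
  fit <- cfa(model, sample.cov = wc$cov, sample.nobs = nrow(data))
  list(fit = fit, weights = w, distances = d)
}

# A unidimensional specification for ten RSES-style items (names assumed)
model <- 'esteem =~ i1 + i2 + i3 + i4 + i5 + i6 + i7 + i8 + i9 + i10'
# fit <- irls_cfa(rses_data, model)  # rses_data: hypothetical response data

Cases with small final weights are exactly the response patterns the talk flags as implausible; comparing weights across competing specifications shows how much of a model's fit advantage comes from accommodating such cases.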


Mike Cheung, Department of Psychology, National University of Singapore, Singapore

Mike W.-L. Cheung is an Associate Professor in the Department of Psychology, and an Associate Professor (by courtesy) in the Department of Management & Organisation, National University of Singapore. His research interests are in the areas of meta-analysis, structural equation modeling, and multilevel modeling. He recently published a book titled Meta-Analysis: A Structural Equation Modeling Approach (Wiley, 2015), with the associated metaSEM package in R. He is co-editing the book series SpringerBriefs in Research Synthesis and Meta-Analysis, published by Springer, as well as a special issue on “meta-analytic structural equation modeling” that will appear in Research Synthesis Methods.

Integrating meta-analysis within Structural Equation Modeling: Theories and applications

Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This presentation gives an overview of how popular meta-analytic models, such as univariate, multivariate, and three-level meta-analyses, can be integrated under the SEM framework. I will also discuss meta-analytic structural equation modeling (MASEM), which can be used to synthesize findings in SEM.
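
The metaSEM package mentioned in the speaker's bio implements these models; the calls below are illustrative and use datasets shipped with the package.

## Meta-analysis as SEM with metaSEM
library(metaSEM)

# Univariate random-effects meta-analysis (standardized mean differences)
summary(meta(y = di, v = vi, data = Becker83))

# Three-level meta-analysis: effect sizes nested within school districts
summary(meta3(y = y, v = v, cluster = District, data = Cooper03))

# MASEM, stage 1: pool correlation matrices across studies (stage 2 would
# fit a structural model to the pooled matrix via tssem2())
stage1 <- tssem1(Digman97$data, Digman97$n, method = "REM")
summary(stage1)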


Ric Luecht, School of Education, The University of North Carolina at Greensboro, NC, USA

Ric Luecht is a Professor of Educational Research Methodology at UNC-Greensboro, where he teaches graduate courses in applied statistics and advanced measurement. His research includes technology integration in assessment, advanced psychometric modeling and estimation, and the application of engineering design principles to formative assessment (i.e., assessment engineering). He has designed numerous algorithms and software programs for automated test assembly and devised a computerized adaptive multistage testing framework used by a number of large-scale testing programs. Dr. Luecht is also a technical consultant and advisor for many state departments of education and large-scale testing organizations.

Engineering Design in the Assessment World: A New Paradigm for Test Design and Development, with Psychometric Implications

Psychometric theory and modeling technologies have made enormous contributions to the measurement field. However, we still seem to lack a coherent framework for reconciling our rather sophisticated psychometric models and scaling methods with test design and development practices. The latter require serious consideration of domain-specific content and cognitive attributes in designing and writing items, and those attributes also impact the test assembly process. Assessment engineering (AE) is a relatively new framework comprising five building blocks for concretely integrating content-focused item and test development practices with psychometric modeling and analysis. The AE building blocks borrow strong design and quality-control principles from industrial engineering to provide robust and scalable assessment solutions. The ultimate goal is to create an efficient manufacturing system for producing valid, reliable, low-cost, high-quality items and performance tasks that can be adapted for use in assessment settings ranging from formative assessments in classrooms to high-stakes certification/licensure tests (e.g., potentially generating massive top-quality item banks that provide on-target measures of multiple traits or attribute classes, and that do not require pretesting). This presentation will discuss some of the critical issues leading to the development of AE, lay out the five building blocks comprising the framework, and provide some real-life examples of AE-related research to date.