Invited Speakers

Li Cai, University of California, Los Angeles, CA, USA

Li Cai is a Professor in the Advanced Quantitative Methodology program in the UCLA Graduate School of Education and Information Studies, where he also serves as Co-Director of the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), along with colleagues Eva Baker and Bob Linn. In addition, he is affiliated with the UCLA Department of Psychology in the quantitative area, where he also teaches and trains students. His research agenda involves the development, integration, and evaluation of innovative latent variable models with wide-ranging applications in educational, psychological, and health-related domains of study. A key component of this agenda is statistical computing, particularly as related to item response theory (IRT) and multilevel modeling, which has led to the development of IRTPRO (with David Thissen and Stephen du Toit) and flexMIRT.

The utility of multidimensional item response models in educational assessment and evaluation studies

This presentation demonstrates the utility of multidimensional (and multilevel) item response models in educational assessment and evaluation settings. The central theme is that the flexibility of an expanded modeling framework enables the specification of models that respond to features of the study design, rather than forcing studies to fit the molds of standard measurement model choices. The first application involves a novel approach to analyzing data from a multi-site randomized experimental study of the impact of learning games in middle schools. The second application describes a multidimensional model-based approach to evaluating the statistical properties of Student Growth Percentiles (SGPs), a widely used growth measure in educational evaluation and accountability systems in the US.
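As background, the multidimensional item response models referred to here extend unidimensional IRT by letting the response probability depend on a vector of latent traits. A minimal sketch of the multidimensional two-parameter logistic (M2PL) response function, with made-up parameter values chosen purely for illustration:

```python
import numpy as np

def m2pl_prob(theta, a, c):
    """Probability of a correct response under the multidimensional
    2PL model: P(y = 1 | theta) = logistic(a' theta + c)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) + c)))

# Hypothetical two-dimensional example: an item that loads on both
# a content-knowledge dimension and a second, study-specific dimension.
theta = np.array([0.5, -0.3])   # examinee's latent trait vector
a = np.array([1.2, 0.4])        # item slope (discrimination) vector
c = -0.25                       # item intercept

print(m2pl_prob(theta, a, c))   # ~0.56
```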


Jee-Seon Kim, University of Wisconsin-Madison, WI, USA

Jee-Seon Kim is Professor of Quantitative Methods in the Educational Psychology Department and an affiliated faculty member in the Center for Health Enhancement Systems Studies, the Interdisciplinary Training Program in the Education Sciences, and the Interdisciplinary Research Training in Speech and Language Disorders at the University of Wisconsin-Madison. Her research interests focus on multilevel models and other latent variable models, methods for modeling change, learning, and human development using longitudinal data, multiple imputation of missing data, and the implementation of experimental and quasi-experimental designs, including propensity score matching techniques for clustered data.

Causal inference with observational multilevel data: Challenges and strategies

This talk discusses issues and challenges in causal inference with observational multilevel data and presents strategies for removing selection bias and obtaining consistent estimates of treatment effects. Statistical methods are presented for multilevel propensity score analysis, in which treated and untreated units are matched on their estimated probabilities of receiving the treatment. Unlike other multilevel matching methods, the approach first identifies homogeneous classes of clusters with respect to the selection process and then matches units across clusters but within those homogeneous classes. The resulting multilevel latent-class logit model overcomes major weaknesses of existing methods and provides a flexible tool for investigating treatment effects when treatment assignment is not random. The strategy is particularly effective for handling different selection processes and/or heterogeneous treatment effects, where there may be different reasons for receiving the treatment and the treatment effects may vary across subgroups in the data.
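As a rough illustration of the matching step only (this is not the latent-class logit model itself; the class labels, which the actual approach estimates from the selection process, are simply assumed known here), a minimal Python sketch on synthetic data might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: covariates X, treatment Z, and a known
# class label standing in for the latent homogeneous classes.
n = 400
X = rng.normal(size=(n, 2))
Z = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))   # selection depends on X
cls = rng.integers(0, 2, size=n)                      # placeholder class labels

matches = []
for c in np.unique(cls):
    idx = np.where(cls == c)[0]
    Xc, Zc = X[idx], Z[idx]
    # Propensity scores estimated within the homogeneous class,
    # pooling units across clusters that share the class.
    ps = LogisticRegression().fit(Xc, Zc).predict_proba(Xc)[:, 1]
    t_pos, c_pos = np.where(Zc == 1)[0], np.where(Zc == 0)[0]
    for t in t_pos:   # 1:1 nearest-neighbor matching, with replacement
        j = c_pos[np.argmin(np.abs(ps[t] - ps[c_pos]))]
        matches.append((idx[t], idx[j]))

print(f"{len(matches)} treated units matched within classes")
```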


Elizabeth A. Stuart, Johns Hopkins University, Baltimore, MD, USA

Elizabeth A. Stuart is an Associate Professor in the Department of Mental Health and the Department of Biostatistics at the Johns Hopkins Bloomberg School of Public Health. She received her Ph.D. in statistics in 2004 from Harvard University, working under the direction of Donald Rubin, and was recently elected a Fellow of the American Statistical Association. Dr. Stuart has extensive experience in methods for estimating causal effects and dealing with the complications of missing data in experimental and non-experimental studies, particularly as applied to mental health, public policy, and education. She also has extensive experience with designing and analyzing randomized experiments, missing data methods, multilevel modeling, and Bayesian methodology. She has served as a consultant on propensity score and missing data methods for numerous researchers and organizations, and she teaches semester-long, 1-day, and 1-week short courses on the estimation of causal effects. She has received research funding for her work from the National Science Foundation and the National Institutes of Health and has served on advisory panels for the National Academy of Sciences and the US Department of Education, including serving as Chair of the Patient-Centered Outcomes Research Institute's (PCORI's) inaugural Clinical Trials Advisory Panel.

Propensity score methods in the context of covariate measurement error

Propensity score methods are commonly used to estimate causal effects in non-experimental studies. Existing propensity score methods assume that covariates are measured without error, but covariate measurement error is likely common. This talk discusses the implications of covariate measurement error for the estimation of causal effects using propensity score methods and investigates Multiple Imputation using External Calibration (MIEC) as a way to account for covariate measurement error in propensity score estimation. MIEC uses a main study sample and a calibration dataset that includes observations of the true covariate (X) as well as the version measured with error (W). MIEC creates multiple imputations of X in the main study sample, using information on the joint distribution of X, W, other covariates, and the outcome of interest from both the calibration and the main data. In simulation studies we found that MIEC estimates the treatment effect almost as well as if the true covariate X were available. We also found that the outcome must be used in the imputation process, a finding related to the idea of congeniality in the multiple imputation literature. We illustrate MIEC using an example estimating the effect of neighborhood disadvantage on the mental health of adolescents, where the method accounts for measurement error in the adolescents' report of their mothers' age when they (the adolescents) were born.
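A schematic of the imputation-then-analysis flow, under strong simplifying assumptions (a normal linear imputation model with point-estimated rather than posterior-drawn parameters, and a placeholder analysis step where the propensity score estimation and effect estimate would go), might look like this; it is a sketch of the general idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration sample: both the true covariate X and its error-prone
# version W are observed.  Main sample: only W (plus outcome Y).
n_cal, n_main = 200, 1000
X_cal = rng.normal(size=n_cal)
W_cal = X_cal + rng.normal(scale=0.5, size=n_cal)
Y_cal = 0.8 * X_cal + rng.normal(size=n_cal)

X_main = rng.normal(size=n_main)              # unobserved in practice
W_main = X_main + rng.normal(scale=0.5, size=n_main)
Y_main = 0.8 * X_main + rng.normal(size=n_main)

# Imputation model fit on the calibration data: regress X on (1, W, Y).
# Note that the outcome Y enters the imputation model, as the abstract
# emphasizes (congeniality).
D_cal = np.column_stack([np.ones(n_cal), W_cal, Y_cal])
beta, *_ = np.linalg.lstsq(D_cal, X_cal, rcond=None)
sigma = np.std(X_cal - D_cal @ beta)

# Create M imputed datasets and pool a toy analysis across them.
M, estimates = 20, []
D_main = np.column_stack([np.ones(n_main), W_main, Y_main])
for _ in range(M):
    X_imp = D_main @ beta + rng.normal(scale=sigma, size=n_main)
    # Placeholder: in MIEC this step would be the propensity score
    # analysis using the imputed X.
    estimates.append(np.corrcoef(X_imp, Y_main)[0, 1])

# Rubin's rules point estimate (variance combining omitted for brevity).
print("pooled estimate:", np.mean(estimates))
```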


Francis Tuerlinckx, KU Leuven, Belgium

Francis Tuerlinckx is Professor of Quantitative Psychology at KU Leuven (University of Leuven) in Belgium. He received his Master's degree in psychology (1996) and a Ph.D. in psychology (2000) from KU Leuven. He was a postdoctoral researcher in the Department of Statistics at Columbia University (New York). His research deals with the mathematical modeling of various aspects of human behavior. More specifically, he works on item response theory, reaction time modeling, and dynamical systems modeling of cognition and emotion. He serves as an associate editor for the Journal of Mathematical Psychology.

Bridging psychology and psychometrics: Three case studies

In this talk, I examine through three case studies how mathematical and statistical models from psychometrics (and related fields) may help to solve substantive research problems in psychology. The case studies range from semiparametric models for unraveling the structure of affect, to dynamical systems models for understanding aspects of psychopathology, to stochastic process models shedding light on elementary decision making. For each case study, I explain the main findings, the underlying models that were used, and the challenges and problems that remain to be solved.
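Of the stochastic process models for elementary decision making mentioned above, the best-known example is the two-boundary diffusion model, in which noisy evidence accumulates over time until it reaches one of two decision boundaries, jointly determining the choice and the response time. A minimal Euler simulation sketch, with all parameter values purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(rng, drift=0.3, boundary=1.0, sigma=1.0, dt=0.001):
    """One trial of a two-boundary diffusion process: evidence x starts
    at 0 and evolves as dx = drift*dt + sigma*dW until it crosses
    +boundary or -boundary.  Returns (choice, response_time)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x > 0 else 0), t

trials = [simulate_ddm(rng) for _ in range(500)]
choices, rts = zip(*trials)
print(f"P(upper boundary) = {np.mean(choices):.2f}, "
      f"mean RT = {np.mean(rts):.3f} s")
```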


Xiangdong Yang, East China Normal University, Shanghai, China

Xiangdong Yang is a professor of education and the associate dean for international affairs of the School of Education Science, East China Normal University (ECNU), Shanghai, China. He is also the associate director of the Institute of Curriculum and Instruction (ICI), a national key research center affiliated with the Ministry of Education (MOE) of China. He is currently serving as a member of the national advisory committee on curriculum and evaluation. Dr. Yang received his Ph.D. in Quantitative Psychology from the University of Kansas, USA. Before joining ECNU, he was an Assistant Professor of Inquiry Methodology in the Department of Counseling and Educational Psychology at Indiana University, Bloomington, USA. He also served as a Research Associate in the Center for Educational Testing and Evaluation (CETE) at the University of Kansas. In these roles, Dr. Yang conducted research and taught courses in the areas of psychometrics, quantitative methods, and research design in education. His research interests mainly include cognitive item design, cognitive diagnostic assessment, item response theory, and applied statistics. He has published over 40 articles, monographs, and books. His articles appear in peer-reviewed journals including Psychometrika and Educational and Psychological Measurement, as well as books such as Handbook of Statistics: Vol 25: Psychometrics, and Complementary research methods for education (3rd edition).

Toward an explanatory scale of cognitively designed algebra story problems

Cognitive item design entails a theory-driven approach to measurement (Bouwmeester & Sijtsma, 2004; Embretson, 1998, 2000; Lohman & Ippel, 1993). A theory of the to-be-measured construct is more than a simple definition or a set of abstract principles of item design. It formulates an elaborated description of the substantive nature of the construct, its structural representation in the mechanism of item solution for a given type of task, and operational models that link cognitive variables to task features and examinee responses. In addition, this approach sheds new light on fundamental issues of measurement, such as construct validity, and provides substantive bases for automatic item design, explanatory measurement scale construction, and cognitive diagnosis. In this talk, I will present results from several studies of principled task analyses and cognitive feature extraction, based upon a cognitive theory of algebra story problem solving, and show how such results can lead to the construction of an explanatory measurement scale spanning 2nd to 8th grade and a possible schema for generating algebra story problems in a principled fashion. Implications of these results for diagnostic modeling will also be discussed.
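One classical operational model of the kind the abstract describes, used here purely for illustration rather than as the model from these studies, is the linear logistic test model (LLTM; Fischer, 1973), in which an item's Rasch difficulty is decomposed into a weighted sum of the cognitive operations its solution requires. The feature matrix and weights below are invented:

```python
import numpy as np

# Hypothetical cognitive features of algebra story problems
# (rows = items, columns = operations the solution requires),
# e.g. [set up an equation, handle a fraction, two-step reasoning].
Q = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]])
eta = np.array([0.4, 0.9, 1.3])    # illustrative difficulty weights

# LLTM decomposition: item difficulty b_i = sum_k Q_ik * eta_k
b = Q @ eta

def p_correct(theta, b):
    """Rasch/LLTM probability of solving an item of difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta = 1.0                        # examinee ability on the scale
print(np.round(p_correct(theta, b), 2))   # items needing fewer operations
                                          # yield higher success probability
```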