I will discuss a line of recent work using causal models to understand algorithmic fairness. Rather than attempting to make minimal assumptions and provide robust inferences, this approach uses strong assumptions for the sake of interpretability, transparency, and falsifiability. Although the application focus is on fairness, causal models can be applied in similar ways toward achieving other values or objectives in responsible machine learning or data-driven decisions more broadly. I will conclude by discussing some of the hard challenges in fair machine learning that connect to measurement issues and problems in psychometrics.
About the speaker
Joshua Loftus is an Assistant Professor of Statistics at the London School of Economics. Joshua's research aims to improve practices in data science and machine learning to reduce the impact of bias, particularly biases associated with social harms and with threats to scientific reproducibility. He is also broadly interested in high-dimensional statistics and causal inference, and in teaching theory, applications, and best practices in ethical data science. Before joining LSE, Joshua earned his PhD in Statistics at Stanford University, was a Research Fellow at the Alan Turing Institute affiliated with the University of Cambridge, and then was an Assistant Professor at New York University from 2017 to 2020.