Research Framework
My methodological research program operates across three core dimensions:
Interpretable Causal Machine Learning: While causal machine learning offers unprecedented flexibility, it carries the inherent risk of “fooling ourselves” when searching for heterogeneous treatment effects. My research addresses this by developing a principled approach to nonparametric regression trees and similar tools—one that explicitly honors the boundary between exploratory and confirmatory research. By making these complex algorithms accessible and interpretable, I design methods that put the researcher firmly in the driver’s seat, ensuring that data-driven discovery is always balanced by strict statistical control.
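One generic way to honor the exploratory/confirmatory boundary described above is honest sample splitting: mine one half of the data freely for subgroups, then test the resulting fixed hypothesis on the untouched half. The sketch below is illustrative only (simulated data, a transformed-outcome regression tree), not the author's own method; all names are hypothetical.

```python
# Hypothetical sketch: honest sample splitting for tree-based subgroup
# discovery. Simulated randomized trial; NOT the DCC framework itself.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 4000
X = rng.uniform(-1, 1, size=(n, 2))      # covariates
T = rng.integers(0, 2, size=n)           # randomized treatment (p = 0.5)
tau = np.where(X[:, 0] > 0, 1.0, 0.0)    # true effect only when x0 > 0
Y = tau * T + rng.normal(0, 1, size=n)   # outcome

# Split once: the discovery half may be mined freely; the confirmation
# half is touched only after the subgroup hypothesis is fixed.
disc, conf = train_test_split(np.arange(n), test_size=0.5, random_state=0)

# Exploratory step: with P(T=1)=0.5, E[2(2T-1)Y | X] = tau(X), so a
# shallow regression tree on this transformed outcome hunts for
# effect heterogeneity.
proxy = 2 * (2 * T[disc] - 1) * Y[disc]
tree = DecisionTreeRegressor(max_depth=1, random_state=0).fit(X[disc], proxy)
feature = tree.tree_.feature[0]
threshold = tree.tree_.threshold[0]

# Take the child with the larger fitted effect as the candidate subgroup.
left_val, right_val = tree.tree_.value[1][0][0], tree.tree_.value[2][0][0]
if right_val > left_val:
    in_group = X[conf, feature] > threshold
else:
    in_group = X[conf, feature] <= threshold

# Confirmatory step on the held-out half: a plain difference in means
# inside the pre-registered subgroup, free of selection bias.
g = conf[in_group]
effect = Y[g][T[g] == 1].mean() - Y[g][T[g] == 0].mean()
print(f"confirmed subgroup effect: {effect:.2f}")
```

Because the subgroup was fixed before the confirmation half was opened, the final estimate supports an ordinary confidence interval, which is exactly the "strict statistical control" the exploratory search would otherwise forfeit.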
Moving Beyond the Average Effect: Rather than settling for the population Average Treatment Effect (ATE), my work focuses on uncovering treatment variation. I build tools for estimating granular, cluster-specific, and individual heterogeneous treatment effects (HTE) to accurately identify for whom and under what conditions an intervention works.
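To make the ATE-versus-HTE contrast concrete, here is a minimal simulated example using a T-learner (one outcome model per treatment arm), a standard route to individual-level effect estimates. The data and model choices are illustrative assumptions, not the methods described above.

```python
# Hedged sketch: individual HTE via a T-learner on simulated data.
# The population ATE is ~0 here, yet individual effects range from -2 to 2,
# so reporting only the average would mask all of the variation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 3000
X = rng.uniform(-1, 1, size=(n, 3))      # covariates
T = rng.integers(0, 2, size=n)           # randomized treatment
tau = 2.0 * X[:, 0]                      # effect varies with x0, mean ~0
Y = X[:, 1] + tau * T + rng.normal(0, 0.5, size=n)

# Fit one outcome model per arm, then contrast their predictions.
m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 1], Y[T == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 0], Y[T == 0])
cate = m1.predict(X) - m0.predict(X)     # individual-level estimates
ate = cate.mean()                        # near zero: the average hides who benefits

print(f"ATE: {ate:.2f}, CATE spread (sd): {cate.std():.2f}")
```

The estimated CATEs recover "for whom" the intervention works (those with large x0) even though the headline ATE is close to zero.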
Translation to Applied Evaluation: I ground my work in real-world policy evaluation and applications. I am dedicated to building accessible workflows that prevent evaluations from masking heterogeneity, ultimately aiding in the development of cost-effective, scalable educational programs.
These three pillars culminate in my dissertation work: the Discover, Characterize, Confirm (DCC) framework. The DCC is a comprehensive methodological pipeline designed to translate complex causal machine learning algorithms into epistemologically aware, visually interpretable analyses for applied researchers.
Below is an interactive visualization of the research areas in my program and how they connect to one another. Clicking a topic brings up my related work.