Counterfactual Identifiability of Bijective Causal Models


Arash Nasr-Esfahany1      Mohammad Alizadeh1      Devavrat Shah2,3     

1Computer Science and Artificial Intelligence Laboratory (MIT CSAIL)
2Institute for Data, Systems and Society (MIT IDSS)
3Laboratory for Information and Decision Systems (MIT LIDS)

Abstract


We study counterfactual identifiability in causal models with bijective generation mechanisms (BGMs), a class that generalizes several widely used causal models in the literature. We establish their counterfactual identifiability for three common causal structures with unobserved confounding, and propose a practical learning method that casts learning a BGM as structured generative modeling. Learned BGMs enable efficient counterfactual estimation and can be obtained using a variety of deep conditional generative models. We evaluate our techniques on a visual task and demonstrate their application in a real-world video streaming simulation task.
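To give a sense of why bijectivity enables counterfactual estimation, the toy sketch below (not the paper's implementation; the mechanism and all names are illustrative assumptions) uses a mechanism y = f(x, u) that is strictly monotone, hence bijective, in the exogenous noise u for each fixed x. Bijectivity lets us recover u exactly from an observation, after which Pearl's abduction-action-prediction steps yield the counterfactual.

```python
import math

def mechanism(x, u):
    # Hypothetical BGM: y = x + exp(u) is strictly increasing
    # (bijective) in u for every fixed x.
    return x + math.exp(u)

def abduct(x, y):
    # Abduction: invert the mechanism in u given the observed (x, y).
    return math.log(y - x)

def counterfactual(x_obs, y_obs, x_cf):
    # Pearl's three steps: abduct the noise, act (set x := x_cf),
    # then predict by pushing the same noise through the mechanism.
    u = abduct(x_obs, y_obs)
    return mechanism(x_cf, u)
```

For example, if we observed x = 1 and y = 1 + e (so the abducted noise is u = 1), the counterfactual outcome under x = 2 is 2 + e: the intervention shifts x while the recovered noise is held fixed.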


Paper


Counterfactual Identifiability of Bijective Causal Models
Arash Nasr-Esfahany, Mohammad Alizadeh, Devavrat Shah
Fortieth International Conference on Machine Learning (ICML '23)
[PDF]


Code


[GitHub]


Slides


[Slides]


Talk


[Video]


Supporters


This work was supported by NSF grant 1751009, a gift from the CSAIL SystemsThatLearn (STL) Initiative, and Google, Intel, and Amazon as part of the MIT Data Systems and AI (DSAIL) lab.