Abstract:
Although incorporating causal concepts into deep learning shows promise for improving explainability, fairness, and robustness, existing methods either require unrealistic assumptions or aim to recover the full latent causal model. This talk proposes an alternative: domain counterfactuals, which ask a more concrete question: “What would a sample look like if it had been generated in a different domain (or environment)?” This framing avoids the challenges of full causal recovery while still answering an important causal query.
I will first showcase the potential of domain counterfactuals for distribution shift explanations, counterfactual fairness, and domain generalization. Then, I will theoretically analyze the domain counterfactual problem for invertible causal models and prove an estimation bound that depends on the sparsity of intervention, i.e., the number of intervened causal variables. Leveraging this theory, I will introduce a practical VAE-based counterfactual estimation algorithm. Finally, I will connect this work to my broader research focus on distribution matching, highlighting its potential as a foundational tool for building trustworthy machine learning systems.
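To make the central query concrete, the following is a minimal toy sketch of a domain counterfactual under invertible mechanisms: invert the source domain's generative mechanism to recover a sample's latent factors, then regenerate the sample with the target domain's mechanism. The linear mechanisms, variable names, and the function domain_counterfactual are illustrative assumptions for this sketch only, not the speaker's VAE-based algorithm.

```python
import numpy as np

# Toy sketch of a domain counterfactual under invertible mechanisms.
# Assumption (illustrative only): each domain d generates observations as
# x = A_d @ z + b_d, where z are shared latent factors and (A_d, b_d) is an
# invertible, domain-specific mechanism.

rng = np.random.default_rng(0)
dim = 3

# Invertible mechanisms for two domains (near-identity, hence well-conditioned).
A = {d: np.eye(dim) + 0.1 * rng.standard_normal((dim, dim)) for d in (0, 1)}
b = {d: rng.standard_normal(dim) for d in (0, 1)}

def generate(z, d):
    """Generate an observation in domain d from latent factors z."""
    return A[d] @ z + b[d]

def domain_counterfactual(x, d_src, d_tgt):
    """What would x (observed in d_src) have looked like in d_tgt?
    Invert the source mechanism to recover the latent factors,
    then push them through the target domain's mechanism."""
    z = np.linalg.solve(A[d_src], x - b[d_src])   # abduction via invertibility
    return generate(z, d_tgt)                     # regeneration in the new domain

# Example: a sample from domain 0 and its counterfactual in domain 1.
z_true = rng.standard_normal(dim)
x0 = generate(z_true, 0)
x0_in_1 = domain_counterfactual(x0, d_src=0, d_tgt=1)
print(x0, x0_in_1)
```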
Bio:
Prof. David I. Inouye is an assistant professor in the Elmore Family School of Electrical and Computer Engineering at Purdue University. His lab focuses on trustworthy machine learning, e.g., causality-inspired ML, explaining distribution shifts, distribution matching for trustworthy ML, fault-tolerant distributed inference, and federated domain generalization. His research is funded by ARL, ONR, and NSF. Previously, he was a postdoc at Carnegie Mellon University working with Prof. Pradeep Ravikumar. He completed his PhD in Computer Science at The University of Texas at Austin in 2017, advised by Prof. Inderjit Dhillon and Prof. Pradeep Ravikumar, and was awarded the NSF Graduate Research Fellowship (NSF GRFP).
September 4, 2024
12:30 pm (1h)
Discovery Building, Orchard View Room
David I. Inouye, Purdue University