Evaluating Interventional Reasoning Capabilities of Large Language Models


Preprint


Tejas Kasetty, Divyat Mahajan, G. K. Dziugaite, Alexandre Drouin, Dhanya Sridhar
arXiv.org, 2024

Cite

APA
Kasetty, T., Mahajan, D., Dziugaite, G. K., Drouin, A., & Sridhar, D. (2024). Evaluating Interventional Reasoning Capabilities of Large Language Models. ArXiv.org.


Chicago/Turabian
Kasetty, Tejas, Divyat Mahajan, G. K. Dziugaite, Alexandre Drouin, and Dhanya Sridhar. “Evaluating Interventional Reasoning Capabilities of Large Language Models.” arXiv.org (2024).


MLA
Kasetty, Tejas, et al. “Evaluating Interventional Reasoning Capabilities of Large Language Models.” ArXiv.org, 2024.


BibTeX

@article{kasetty2024a,
  title = {Evaluating Interventional Reasoning Capabilities of Large Language Models},
  year = {2024},
  journal = {arXiv.org},
  author = {Kasetty, Tejas and Mahajan, Divyat and Dziugaite, G. K. and Drouin, Alexandre and Sridhar, Dhanya}
}

Abstract

Numerous decision-making tasks require estimating causal effects under interventions on different parts of a system. As practitioners consider using large language models (LLMs) to automate decisions, studying their causal reasoning capabilities becomes crucial. A recent line of work evaluates LLMs' ability to retrieve commonsense causal facts, but these evaluations do not sufficiently assess how LLMs reason about interventions. Motivated by the role that interventions play in causal inference, in this paper, we conduct empirical analyses to evaluate whether LLMs can accurately update their knowledge of a data-generating process in response to an intervention. We create benchmarks that span diverse causal graphs (e.g., confounding, mediation) and variable types, and enable a study of intervention-based reasoning. These benchmarks allow us to isolate the ability of LLMs to accurately predict changes resulting from interventions, distinct from their ability to memorize facts or find other shortcuts. Our analysis of four LLMs highlights that while GPT-4 models show promising accuracy at predicting the intervention effects, they remain sensitive to distracting factors in the prompts.
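
To illustrate the kind of intervention-based reasoning the benchmarks probe, the following minimal Python sketch (illustrative only, not the paper's benchmark code; the variable names, graph, and functional forms are assumptions) simulates a confounded data-generating process Z -> X, Z -> Y, X -> Y and contrasts the observational difference in Y across values of X with the interventional difference under do(X = x):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def sample(do_x=None):
    """Draw from the SCM Z -> X, Z -> Y, X -> Y; do_x cuts the Z -> X edge."""
    z = rng.normal(size=n)                           # confounder
    x = (z + rng.normal(size=n) > 0).astype(float)   # binary treatment influenced by Z
    if do_x is not None:
        x = np.full(n, float(do_x))                  # intervention do(X = do_x)
    y = 2.0 * x + 1.5 * z + rng.normal(size=n)       # outcome depends on X and Z
    return x, y

# Observational contrast: comparing Y given X=1 vs X=0 also picks up Z's effect.
x_obs, y_obs = sample()
naive = y_obs[x_obs == 1].mean() - y_obs[x_obs == 0].mean()

# Interventional contrast: do(X=1) vs do(X=0) isolates X's causal effect on Y.
_, y_do1 = sample(do_x=1)
_, y_do0 = sample(do_x=0)
causal = y_do1.mean() - y_do0.mean()

print(f"observational difference: {naive:.2f}")    # inflated by confounding
print(f"interventional difference: {causal:.2f}")  # close to the true effect, 2.0

Under confounding, the naive observational contrast overstates the effect of X on Y, whereas the interventional contrast recovers the true coefficient (2.0 in this sketch); the benchmarks ask, in effect, whether an LLM can correctly update its description of such a data-generating process when told that one of its variables has been intervened on.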