Does learning the right latent variables necessarily improve in-context learning?


Preprint


Sarthak Mittal, Eric Elmoznino, Léo Gagnon, Sangnie Bhardwaj, Dhanya Sridhar, Guillaume Lajoie
arXiv.org, 2024

Cite

APA
Mittal, S., Elmoznino, E., Gagnon, L., Bhardwaj, S., Sridhar, D., & Lajoie, G. (2024). Does learning the right latent variables necessarily improve in-context learning? ArXiv.org.


Chicago/Turabian
Mittal, Sarthak, Eric Elmoznino, Léo Gagnon, Sangnie Bhardwaj, Dhanya Sridhar, and Guillaume Lajoie. “Does Learning the Right Latent Variables Necessarily Improve in-Context Learning?” arXiv.org (2024).


MLA
Mittal, Sarthak, et al. “Does Learning the Right Latent Variables Necessarily Improve in-Context Learning?” ArXiv.org, 2024.


BibTeX

@article{sarthak2024a,
  title = {Does learning the right latent variables necessarily improve in-context learning?},
  year = {2024},
  journal = {arXiv.org},
  author = {Mittal, Sarthak and Elmoznino, Eric and Gagnon, L\'eo and Bhardwaj, Sangnie and Sridhar, Dhanya and Lajoie, Guillaume}
}

Abstract

Large autoregressive models like Transformers can solve tasks through in-context learning (ICL) without learning new weights, suggesting avenues for efficiently solving new tasks. For many tasks, e.g., linear regression, the data factorizes: examples are independent given a task latent that generates the data, e.g., linear coefficients. While an optimal predictor leverages this factorization by inferring task latents, it is unclear if Transformers implicitly do so or if they instead exploit heuristics and statistical shortcuts enabled by attention layers. Both scenarios have inspired active ongoing work. In this paper, we systematically investigate the effect of explicitly inferring task latents. We minimally modify the Transformer architecture with a bottleneck designed to prevent shortcuts in favor of more structured solutions, and then compare performance against standard Transformers across various ICL tasks. Contrary to intuition and some recent works, we find little discernible difference between the two; biasing towards task-relevant latent variables does not lead to better out-of-distribution performance, in general. Curiously, we find that while the bottleneck effectively learns to extract latent task variables from context, downstream processing struggles to utilize them for robust prediction. Our study highlights the intrinsic limitations of Transformers in achieving structured ICL solutions that generalize, and shows that while inferring the right latents aids interpretability, it is not sufficient to alleviate this problem.
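To make the setup concrete, below is a minimal NumPy sketch (illustrative only, not the authors' code) of the linear-regression ICL setting the abstract describes: context examples are independent given a latent coefficient vector, and an explicit-latent predictor first infers that latent from the context before answering the query. The function names and parameters (`d`, `n_context`, `noise_std`, `ridge`) are assumptions chosen for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_icl_task(d=8, n_context=32, noise_std=0.1):
    """Sample one in-context linear-regression task.

    A task latent w (the linear coefficients) generates every example;
    given w, the (x, y) pairs are independent of each other.
    """
    w = rng.normal(size=d)                               # task latent
    X = rng.normal(size=(n_context, d))                  # context inputs
    y = X @ w + noise_std * rng.normal(size=n_context)   # context targets
    x_query = rng.normal(size=d)                         # query input
    y_query = x_query @ w                                # noiseless query target
    return X, y, x_query, y_query, w

def explicit_latent_predictor(X, y, x_query, ridge=1e-3):
    """Infer the task latent from the context (ridge regression here),
    then predict the query from that latent alone -- the structured,
    infer-then-predict route, as opposed to mapping context and query
    to an answer directly via shortcuts.
    """
    d = X.shape[1]
    w_hat = np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ y)
    return x_query @ w_hat, w_hat

X, y, x_query, y_query, w = sample_icl_task()
y_hat, w_hat = explicit_latent_predictor(X, y, x_query)
print(f"latent recovery error : {np.linalg.norm(w - w_hat):.3f}")
print(f"query prediction error: {abs(y_query - y_hat):.3f}")
```

The bottlenecked Transformer studied in the paper is meant to learn this infer-then-predict structure end to end, whereas a standard Transformer is free to reach its prediction through attention-based shortcuts.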