Triangulating LLM Progress through Benchmarks, Games, and Cognitive Tests

Momentè, Filippo and Suglia, Alessandro and Giulianelli, Mario and Ferrari, Ambra and Koller, Alexander and Lemon, Oliver and Schlangen, David and Fernández, Raquel and Bernardi, Raffaella

We examine three evaluation paradigms: standard benchmarks (e.g., MMLU and BBH), interactive games (e.g., Signalling Games and Taboo), and cognitive tests (e.g., for working memory or theory of mind). First, we investigate which of the first two, benchmarks or games, is more effective at discriminating LLMs of varying quality. Then, inspired by human cognitive assessments, we compile a suite of targeted tests that measure cognitive abilities deemed essential for effective language use, and we investigate how these correlate with model performance on benchmarks and in games. Our analyses reveal that interactive games are superior to standard benchmarks at discriminating models. Causal and logical reasoning correlate with both static and interactive evaluations, whereas differences emerge for core executive functions and social/emotional skills, which correlate more strongly with games. We advocate for the development of new interactive benchmarks and of targeted cognitive tasks that are inspired by the assessment of human abilities but designed specifically for LLMs.
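
To make the abstract's two analyses concrete, here is a minimal Python sketch (not the authors' code; all model names and scores below are hypothetical placeholders) of how one might quantify a paradigm's power to discriminate models and correlate cognitive-test scores with benchmark and game performance:

# Sketch of the two analyses described in the abstract:
# (1) how well a paradigm separates models of different quality, and
# (2) how cognitive-test scores correlate with benchmark/game scores.
# All model names and scores are hypothetical, for illustration only.
import numpy as np
from scipy.stats import spearmanr

models = ["model-a", "model-b", "model-c", "model-d", "model-e"]

# Hypothetical per-model mean scores on each paradigm (0-1 scale).
benchmark = np.array([0.62, 0.64, 0.66, 0.70, 0.71])   # e.g., MMLU-style
game      = np.array([0.30, 0.45, 0.55, 0.72, 0.85])   # e.g., Taboo-style
cognitive = np.array([0.40, 0.50, 0.58, 0.74, 0.80])   # e.g., working memory

def spread(scores: np.ndarray) -> float:
    """Crude discrimination proxy: the range of scores across models.
    A paradigm that assigns nearly identical scores to every model
    cannot separate strong models from weak ones."""
    return float(scores.max() - scores.min())

print(f"benchmark spread: {spread(benchmark):.2f}")  # narrow -> weak separation
print(f"game spread:      {spread(game):.2f}")       # wide -> strong separation

# Rank correlation between cognitive-test scores and each paradigm.
for name, scores in [("benchmark", benchmark), ("game", game)]:
    rho, p = spearmanr(cognitive, scores)
    print(f"cognitive vs {name}: Spearman rho={rho:.2f} (p={p:.3f})")

The score range is only one possible discrimination proxy; a fuller analysis would test which pairs of models a paradigm separates with statistical significance.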

In Findings of the Association for Computational Linguistics: EMNLP 2025, 2025
[PDF]
@inproceedings{Momente-2025,
  title = {Triangulating {LLM} Progress through Benchmarks, Games, and Cognitive Tests},
  author = {Moment{\`e}, Filippo and Suglia, Alessandro and Giulianelli, Mario and Ferrari, Ambra and Koller, Alexander and Lemon, Oliver and Schlangen, David and Fern{\'a}ndez, Raquel and Bernardi, Raffaella},
  editor = {Christodoulopoulos, Christos and Chakraborty, Tanmoy and Rose, Carolyn and Peng, Violet},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2025},
  month = nov,
  year = {2025},
  address = {Suzhou, China},
  publisher = {Association for Computational Linguistics},
  url = {https://aclanthology.org/2025.findings-emnlp.1092/},
  doi = {10.18653/v1/2025.findings-emnlp.1092},
  pages = {20051--20072},
  isbn = {979-8-89176-335-7}
}