clembench: Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents

Chalamalasetti, Kranti and Götze, Jana and Hakimov, Sherzod and Madureira, Brielen and Sadler, Philipp and Schlangen, David

Recent work has proposed a methodology for the systematic evaluation of “Situated Language Understanding Agents” — agents that operate in rich linguistic and non-linguistic contexts — through testing them in carefully constructed interactive settings. Other recent work has argued that Large Language Models (LLMs), if suitably set up, can be understood as (simulators of) such agents. A connection suggests itself, which this paper explores: Can LLMs be evaluated meaningfully by exposing them to constrained game-like settings that are built to challenge specific capabilities? As a proof of concept, this paper investigates five interaction settings, showing that current chat-optimised LLMs are, to an extent, capable of following game-play instructions. Both this capability and the quality of the game play, measured by how well the objectives of the different games are met, follow the development cycle, with newer models generally performing better. The metrics even for the comparatively simple example games are far from being saturated, suggesting that the proposed instrument will retain diagnostic value.

In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023
[PDF]
@inproceedings{Chalamalasetti-2023,
  title = {clembench: Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents},
  author = {Chalamalasetti, Kranti and G{\"o}tze, Jana and Hakimov, Sherzod and Madureira, Brielen and Sadler, Philipp and Schlangen, David},
  editor = {Bouamor, Houda and Pino, Juan and Bali, Kalika},
  booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing},
  month = dec,
  year = {2023},
  address = {Singapore},
  publisher = {Association for Computational Linguistics},
  url = {https://aclanthology.org/2023.emnlp-main.689},
  doi = {10.18653/v1/2023.emnlp-main.689},
  pages = {11174--11219}
}