Using Game Play to Investigate Multimodal and Conversational Grounding in Large Multimodal Models
Hakimov, Sherzod and Abdullayeva, Yerkezhan and Koshti, Kushal and Schmidt, Antonia and Weiser, Yan and Beyer, Anne and Schlangen, David
While the situation has improved for text-only models, it again seems to be the case currently that multimodal (text and image) models develop faster than ways to evaluate them. In this paper, we bring a recently developed evaluation paradigm from text models to multimodal models, namely evaluation through the goal-oriented game (self) play, complementing reference-based and preference-based evaluation. Specifically, we define games that challenge a model's capability to represent a situation from visual information and align such representations through dialogue. We find that the largest closed models perform rather well on the games that we define, while even the best open-weight models struggle with them. On further analysis, we find that the exceptional deep captioning capabilities of the largest models drive some of the performance. There is still room to grow for both kinds of models, ensuring the continued relevance of the benchmark.
In Proceedings of the 31st International Conference on Computational Linguistics, 2025. [PDF]
@inproceedings{Hakimov-2025,
  title = {Using Game Play to Investigate Multimodal and Conversational Grounding in Large Multimodal Models},
  author = {Hakimov, Sherzod and Abdullayeva, Yerkezhan and Koshti, Kushal and Schmidt, Antonia and Weiser, Yan and Beyer, Anne and Schlangen, David},
  editor = {Rambow, Owen and Wanner, Leo and Apidianaki, Marianna and Al-Khalifa, Hend and Eugenio, Barbara Di and Schockaert, Steven},
  booktitle = {Proceedings of the 31st International Conference on Computational Linguistics},
  month = jan,
  year = {2025},
  address = {Abu Dhabi, UAE},
  publisher = {Association for Computational Linguistics},
  url = {https://aclanthology.org/2025.coling-main.381/},
  pages = {5686--5718}
}