Learning Communication Policies for Different Follower Behaviors in a Collaborative Reference Game
Sadler, Philipp and Hakimov, Sherzod and Schlangen, David
In this work, we evaluate the adaptability of neural agents to assumed partner behaviors in a collaborative reference game. In this game, success is achieved when a knowledgeable guide can verbally lead a follower to the selection of a specific puzzle piece among several distractors. We frame this language grounding and coordination task as a reinforcement learning problem and measure to what extent a common reinforcement learning algorithm (PPO) is able to produce neural agents (the guides) that perform well with various heuristic follower behaviors varying along the dimensions of confidence and autonomy. We experiment with a learning signal that, in addition to the goal condition, also respects an assumed communicative effort. Our results indicate that this novel ingredient leads to communicative strategies that are less verbose (staying silent in some of the steps) and that, in this respect, the guide's strategies indeed adapt to the partner's level of confidence and autonomy.
In Proceedings of the 4th Workshop on Spatial Language Understanding and Grounded Communication for Robotics (SpLU-RoboNLP 2024), 2024.
@inproceedings{Sadler-2024,
  title     = {Learning Communication Policies for Different Follower Behaviors in a Collaborative Reference Game},
  author    = {Sadler, Philipp and Hakimov, Sherzod and Schlangen, David},
  editor    = {Kordjamshidi, Parisa and Wang, Xin Eric and Zhang, Yue and Ma, Ziqiao and Inan, Mert},
  booktitle = {Proceedings of the 4th Workshop on Spatial Language Understanding and Grounded Communication for Robotics (SpLU-RoboNLP 2024)},
  month     = aug,
  year      = {2024},
  address   = {Bangkok, Thailand},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2024.splurobonlp-1.2},
  pages     = {17--29}
}