Abstract
During conversation, speakers collaborate on spontaneous referring expressions, which they can then re-use in subsequent conversation with the same partner. Understanding such referring expressions is an important ability for an embodied agent so that it can carry out tasks in the real world. This requires integrating and understanding language, vision, and conversational interaction. We study the capabilities of seven state-of-the-art Large Vision Language Models (LVLMs) as overhearers of a corpus of spontaneous conversations between pairs of human discourse participants engaged in a collaborative object-matching task. We find that such a task remains challenging for current LVLMs, which fail to show a consistent performance improvement as they overhear more conversations from the same discourse participants repeating the same task for multiple rounds. We release our corpus and code for reproducibility and to facilitate future research.
Main Results
The main result is that overhearer matching remains challenging for current LVLMs. Accuracy trends across rounds show that state-of-the-art models still fail to achieve robust, consistent improvements as they overhear repeated interactions.
Average accuracy of various LVLMs in the overhearer task over rounds. Shaded areas and error bars denote 95% confidence intervals.
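For readers who want to reproduce plots of this kind from the released corpus and model outputs, the sketch below shows one way to compute per-round mean accuracy with 95% confidence intervals. It is an illustrative assumption, not the paper's evaluation code: the column names ("model", "round", "run", "correct") and the normal-approximation interval over per-run accuracies are ours.

# A minimal sketch (not the paper's evaluation code) of per-round accuracy
# with 95% confidence intervals. Column names and the normal-approximation
# interval over per-run accuracies are assumptions.
import pandas as pd

def round_accuracy_with_ci(results: pd.DataFrame) -> pd.DataFrame:
    # One accuracy value per (model, round, run): fraction of objects
    # the overhearer matched correctly in that run.
    per_run = (
        results.groupby(["model", "round", "run"])["correct"]
        .mean()
        .rename("accuracy")
        .reset_index()
    )
    # Mean accuracy and 95% CI half-width across runs for each (model, round).
    summary = (
        per_run.groupby(["model", "round"])["accuracy"]
        .agg(mean_accuracy="mean", sem="sem")
        .reset_index()
    )
    summary["ci95"] = 1.96 * summary["sem"]
    return summary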
Robustness analysis further shows substantial variance across human pairs and object orderings. Even top-performing models show unstable outcomes, highlighting sensitivity to interaction context and indicating limited reliability in overhearing settings.
Round 1 accuracy boxplots of the two best-performing LVLMs across 10 human pairs. Each boxplot summarizes 30 runs with different object orderings.
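The robustness protocol behind this figure can be sketched as follows, under our own assumptions rather than as the paper's actual code: evaluate_fn stands in for whatever call scores a model's round-1 matches for one human pair, and the candidate objects are presented in a freshly shuffled order on each of the 30 runs.

# Hypothetical sketch of the robustness analysis: for each human pair,
# score round-1 overhearer accuracy under many random candidate-object
# orderings. evaluate_fn(pair_id, object_order) is a placeholder for the
# actual model evaluation and must be supplied by the caller.
import random
from collections import defaultdict
from typing import Callable, Dict, List, Sequence

def round1_robustness(
    evaluate_fn: Callable[[str, List[str]], float],
    pair_ids: Sequence[str],
    objects: Sequence[str],
    n_runs: int = 30,
    seed: int = 0,
) -> Dict[str, List[float]]:
    rng = random.Random(seed)
    accuracies: Dict[str, List[float]] = defaultdict(list)
    for pair_id in pair_ids:
        for _ in range(n_runs):
            order = list(objects)
            rng.shuffle(order)  # new object presentation order for this run
            accuracies[pair_id].append(evaluate_fn(pair_id, order))
    return accuracies  # each value is the per-pair distribution behind one boxplot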
Conclusion
Our findings demonstrate that modern LVLMs still struggle to resolve referring expressions to real-world objects produced during spontaneous conversation, a task that humans excel at when they can ground meanings together. Overhearers, whether human or LVLM, perform more poorly in a matching task than human addressees, even when they are privy to every word of a conversation. LVLMs in the overhearer role, even state-of-the-art models, fail to exploit the dynamic nature of conversation and do not improve over repeated referring, unlike human overhearers. These limitations constrain the practical utility of LVLMs as embodied agents, while also highlighting clear directions for future improvement. Given that our primary goal is to benchmark current LVLM capabilities in this novel overhearing setting, providing mechanistic insights or finding pathways to solutions is beyond the scope of our paper and left to future studies. We release our corpus for reproducibility and to support continued research in this area.
BibTeX
@inproceedings{wang-etal-2025-lvlms,
title = "{LVLM}s are Bad at Overhearing Human Referential Communication",
author = "Wang, Zhengxiang and
Li, Weiling and
Kaliosis, Panagiotis and
Rambow, Owen and
Brennan, Susan",
editor = "Christodoulopoulos, Christos and
Chakraborty, Tanmoy and
Rose, Carolyn and
Peng, Violet",
booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2025",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.emnlp-main.849/",
doi = "10.18653/v1/2025.emnlp-main.849",
pages = "16758--16782",
ISBN = "979-8-89176-332-6"
}