L2C: Describing Visual Differences Needs Semantic Understanding of Individuals (preprint)

Authors: An Yan, Xin Wang, Tsu-Jui Fu, William Yang Wang
Year: 2021
Record date: 2026-03-22
DOI: 10.18653/v1/2021.eacl-main.196 (https://doi.org/10.18653/v1/2021.eacl-main.196)
Handle: https://andeanlibrary.org/handle/123456789/83609
Citations: 1
Language: en

Abstract: Recent advances in language and vision push forward the research from captioning a single image to describing visual differences between image pairs. Suppose there are two images, $I_1$ and $I_2$, and the task is to generate a description $W_{1,2}$ comparing them. Existing methods directly model the $I_1, I_2 \rightarrow W_{1,2}$ mapping without a semantic understanding of the individual images. In this paper, we introduce a Learning-to-Compare (L2C) model, which learns to understand the semantic structures of the two images and compare them while learning to describe each one. We demonstrate that L2C benefits from the comparison between explicit semantic representations and single-image captions, and generalizes better to new test image pairs. It outperforms the baseline on both automatic and human evaluation on the Birds-to-Words dataset.

Keywords: Closed captioning; Computer science; Task (project management); Image (mathematics); Baseline (sea); Natural language processing; Artificial intelligence; Machine learning; Information retrieval
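The abstract gives no implementation details, but the multi-task idea (describe each image individually while learning to compare the pair) can be illustrated with a minimal PyTorch-style sketch. All module names (encoder, single_decoder, compare_decoder) and the loss weight alpha below are hypothetical assumptions, not the authors' actual architecture.

```python
import torch.nn as nn

class L2CSketch(nn.Module):
    """Minimal sketch of the L2C idea: jointly train a difference-captioning
    task and an auxiliary single-image captioning task. Each decoder is
    assumed to return a cross-entropy caption loss given features and a
    reference caption; this is a simplification for illustration."""

    def __init__(self, encoder, single_decoder, compare_decoder, alpha=0.5):
        super().__init__()
        self.encoder = encoder                    # image -> semantic features
        self.single_decoder = single_decoder      # features -> single-image caption loss
        self.compare_decoder = compare_decoder    # feature pair -> difference-caption loss
        self.alpha = alpha                        # hypothetical auxiliary-loss weight

    def forward(self, img1, img2, cap1, cap2, diff_cap):
        f1, f2 = self.encoder(img1), self.encoder(img2)
        # Auxiliary task: learn to describe each image on its own,
        # encouraging semantic understanding of individuals.
        loss_single = self.single_decoder(f1, cap1) + self.single_decoder(f2, cap2)
        # Main task: compare the two semantic representations to
        # generate the difference description W_{1,2}.
        loss_compare = self.compare_decoder(f1, f2, diff_cap)
        return loss_compare + self.alpha * loss_single
```

Under this reading, the single-image captioning loss acts as an auxiliary signal that regularizes the encoder toward per-image semantics, which the abstract credits for better generalization to unseen test pairs.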