Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives

dc.contributor.author: Kristen Grauman
dc.contributor.author: Andrew Westbury
dc.contributor.author: Lorenzo Torresani
dc.contributor.author: Kris Kitani
dc.contributor.author: Jitendra Malik
dc.contributor.author: Triantafyllos Afouras
dc.contributor.author: Kumar Ashutosh
dc.contributor.author: Vijay Baiyya
dc.contributor.author: Siddhant Bansal
dc.contributor.author: Bikram Boote
dc.coverage.spatial: Bolivia
dc.date.accessioned: 2026-03-22T13:56:02Z
dc.date.available: 2026-03-22T13:56:02Z
dc.date.issued: 2024
dc.description: Citations: 65
dc.description.abstract: We present Ego-Exo4D, a diverse, large-scale multimodal multiview video dataset and benchmark challenge. Ego-Exo4D centers around simultaneously captured egocentric and exocentric video of skilled human activities (e.g., sports, music, dance, bike repair). 740 participants from 13 cities worldwide performed these activities in 123 different natural scene contexts, yielding long-form captures from 1 to 42 minutes each and 1,286 hours of video combined. The multimodal nature of the dataset is unprecedented: the video is accompanied by multichannel audio, eye gaze, 3D point clouds, camera poses, IMU, and multiple paired language descriptions, including a novel "expert commentary" done by coaches and teachers and tailored to the skilled-activity domain. To push the frontier of first-person video understanding of skilled human activity, we also present a suite of benchmark tasks and their annotations, including fine-grained activity understanding, proficiency estimation, cross-view translation, and 3D hand/body pose. All resources are open sourced to fuel new research in the community.
dc.identifier.doi: 10.1109/cvpr52733.2024.01834
dc.identifier.uri: https://doi.org/10.1109/cvpr52733.2024.01834
dc.identifier.uri: https://andeanlibrary.org/handle/123456789/43570
dc.language.iso: en
dc.source: University of Bristol
dc.subject: Id, ego and super-ego
dc.subject: Psychology
dc.subject: Computer science
dc.title: Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives
dc.type: article