Browsing by Author "Bernard Ghanem"
Now showing 1 - 3 of 3
Item: APES: Audiovisual Person Search in Untrimmed Video (2021)
Juan León Alcázar; Long Mai; Federico Perazzi; Joon-Young Lee; Pablo Arbeláez; Bernard Ghanem; Fabian Caba Heilbron
Humans are arguably one of the most important subjects in video streams; many real-world applications, such as video summarization or video editing workflows, often require the automatic search and retrieval of a person of interest. Despite tremendous efforts in the person re-identification and retrieval domains, few works have developed audiovisual search strategies. In this paper, we present the Audiovisual Person Search dataset (APES), a new dataset composed of untrimmed videos whose audio (voices) and visual (faces) streams are densely annotated. APES contains over 1.9K identities labeled along 36 hours of video, making it the largest dataset available for untrimmed audiovisual person search. A key property of APES is that it includes dense temporal annotations that link faces to speech segments of the same identity. To showcase the potential of our new dataset, we propose an audiovisual baseline and benchmark for person retrieval. Our study shows that modeling audiovisual cues benefits the recognition of people's identities. To enable reproducibility and promote future research, the dataset annotations and baseline code are available at: https://github.com/fuankarion/audiovisual-person-search

Item: Gabor Layers Enhance Network Robustness (Springer Science+Business Media, 2020)
Juan C. Pérez; Motasem Alfarra; Guillaume Jeanneret; Adel Bibi; Ali Thabet; Bernard Ghanem; Pablo Arbeláez

Item: StyleAvatar: Stylizing Animatable Head Avatars (2024)
Juan C. Pérez; Thu Nguyen-Phuoc; Chen Cao; Artsiom Sanakoyeu; Tomas Simon; Pablo Arbeláez; Bernard Ghanem; Ali Thabet; Albert Pumarola
AR/VR applications promise to provide people with a genuine feeling of mutual presence when communicating via their personalized avatars.
While realistic avatars are essential in various social settings, the vast possibilities of a virtual world can also generate interest in using stylized avatars for other purposes. We introduce StyleAvatar, the first method for semantic stylization of animatable head avatars. StyleAvatar stylizes the avatar representation directly, rather than stylizing its renders. Specifically, given a model that generates the avatar, StyleAvatar first disentangles geometry and texture manipulations, and then stylizes the avatar by fine-tuning a subset of the model’s weights. Our method has multiple virtues: it can describe styles using images or text, preserves the avatar’s animation capacity, provides control over identity preservation, and disentangles texture and geometry modifications. Experiments show that our approach works consistently across skin tones, challenging hairstyles, extreme views, and diverse facial expressions.