Cross-View Gait Recognition Based on U-Net

dc.contributor.authorIsrael Raul Tinini Alvarez
dc.contributor.authorGuillermo Sahonero-Alvarez
dc.coverage.spatialBolivia
dc.date.accessioned2026-03-22T14:56:09Z
dc.date.available2026-03-22T14:56:09Z
dc.date.issued2020
dc.descriptionCitations: 5
dc.description.abstractGait-based recognition systems allow automatic recognition of subjects by the way they walk. However, the performance of these systems is often degraded by covariate factors such as walking direction, appearance changes, and occlusions, among others. Of these, change in appearance has been shown to be the most influential covariate, drastically degrading recognition performance. Consequently, inspired by the success of GANs in image translation tasks, we propose a gait recognition method that uses a conditional generative model to produce view-invariant features. The proposed method is evaluated on one of the largest datasets available under variations of view, clothing, and carrying conditions: the CASIA gait database B. Experimental results show that the proposed method outperforms state-of-the-art methods, especially on carrying-bag and wearing-coat sequences. The full implementation and trained networks are available at https://gitlab.com/IsRaTiAl/gait.
dc.identifier.doi10.1109/ijcnn48605.2020.9207501
dc.identifier.urihttps://doi.org/10.1109/ijcnn48605.2020.9207501
dc.identifier.urihttps://andeanlibrary.org/handle/123456789/49416
dc.language.isoen
dc.sourceUniversidad Católica Boliviana San Pablo
dc.subjectComputer science
dc.subjectGait
dc.subjectArtificial intelligence
dc.subjectView invariance
dc.subjectPattern recognition
dc.subjectComputer vision
dc.subjectImage translation
dc.titleCross-View Gait Recognition Based on U-Net
dc.typearticle