Title:
Fisher Motion Descriptor for Multiview Gait Recognition.
Source:
International Journal of Pattern Recognition & Artificial Intelligence; Jan 2017, Vol. 31 Issue 1, p1, 40p
Database:
Complementary Index

Further Information

The goal of this paper is to identify individuals by analyzing their gait. Instead of using binary silhouettes as input data (as done in many previous works), we propose and evaluate the use of motion descriptors based on densely sampled short-term trajectories. We take advantage of state-of-the-art people detectors to define custom spatial configurations of the descriptors around the target person, obtaining a rich representation of the gait motion. The local motion features (described by the Divergence-Curl-Shear descriptor [M. Jain, H. Jegou and P. Bouthemy, Better exploiting motion for better action recognition, in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR) (2013), pp. 2555-2562]) extracted on the different spatial areas of the person are combined into a single high-level gait descriptor by using the Fisher Vector encoding [F. Perronnin, J. Sánchez and T. Mensink, Improving the Fisher kernel for large-scale image classification, in Proc. European Conf. Computer Vision (ECCV) (2010), pp. 143-156]. The proposed approach, coined Pyramidal Fisher Motion, is experimentally validated on the 'CASIA' dataset (parts B and C) [S. Yu, D. Tan and T. Tan, A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition, in Proc. Int. Conf. Pattern Recognition, Vol. 4 (2006), pp. 441-444], the 'TUM GAID' dataset [M. Hofmann, J. Geiger, S. Bachmann, B. Schuller and G. Rigoll, The TUM Gait from Audio, Image and Depth (GAID) database: Multimodal recognition of subjects and traits, J. Vis. Commun. Image Represent. 25(1) (2014) 195-206], the 'CMU MoBo' dataset [R. Gross and J. Shi, The CMU Motion of Body (MoBo) database, Technical Report CMU-RI-TR-01-18, Robotics Institute (2001)], and the recent 'AVA Multiview Gait' dataset [D. López-Fernández, F. Madrid-Cuevas, A. Carmona-Poyato, M. Marín-Jiménez and R. Muñoz-Salinas, The AVA multi-view dataset for gait recognition, in Activity Monitoring by Multiple Distributed Sensing, Lecture Notes in Computer Science (Springer, 2014), pp. 26-39]. The results show that this new approach achieves state-of-the-art performance on the problem of gait recognition, allowing it to recognize walking people from diverse viewpoints in single- and multiple-camera setups, wearing different clothes, carrying bags, walking at diverse speeds, and not limited to straight walking paths. [ABSTRACT FROM AUTHOR]
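The core encoding step the abstract cites is the (improved) Fisher Vector of Perronnin et al.: local descriptors are pooled into one fixed-length vector via the gradient of a diagonal-covariance GMM's log-likelihood, followed by power and L2 normalization. Below is a minimal sketch of that encoding, not the authors' Pyramidal Fisher Motion implementation; the GMM parameters (`weights`, `means`, `sigmas`) are assumed to have been fitted beforehand on training descriptors, and the function name is ours.

```python
import numpy as np

def fisher_vector(X, weights, means, sigmas):
    """Encode T local descriptors X (T x D) against a K-component
    diagonal-covariance GMM into a 2*K*D improved Fisher Vector."""
    T, D = X.shape
    K = len(weights)

    # Log-likelihood of each descriptor under each Gaussian component.
    log_p = np.zeros((T, K))
    for k in range(K):
        diff = (X - means[k]) / sigmas[k]
        log_p[:, k] = (np.log(weights[k])
                       - 0.5 * np.sum(diff ** 2, axis=1)
                       - np.sum(np.log(sigmas[k]))
                       - 0.5 * D * np.log(2 * np.pi))

    # Soft-assignment posteriors gamma[t, k], computed stably.
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)

    # Gradients w.r.t. component means and standard deviations.
    parts = []
    for k in range(K):
        diff = (X - means[k]) / sigmas[k]            # T x D
        g = gamma[:, k:k + 1]                        # T x 1
        G_mu = (g * diff).sum(axis=0) / (T * np.sqrt(weights[k]))
        G_sig = (g * (diff ** 2 - 1)).sum(axis=0) / (T * np.sqrt(2 * weights[k]))
        parts.extend([G_mu, G_sig])
    fv = np.concatenate(parts)

    # Power (signed square root) and L2 normalization ("improved" FV).
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    norm = np.linalg.norm(fv)
    return fv / norm if norm > 0 else fv
```

In the paper's pipeline, one such vector would be computed per spatial cell around the detected person and the cells concatenated into the pyramidal descriptor; the sketch above covers only the per-cell encoding.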

Copyright of International Journal of Pattern Recognition & Artificial Intelligence is the property of World Scientific Publishing Company and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)