• DocumentCode
    425397
  • Title
    Nonlinear Generative Models for Dynamic Shape and Dynamic Appearance
  • Author
    Elgammal, Ahmed

  • Author_Institution
    Rutgers University, Piscataway, NJ
  • fYear
    2004
  • fDate
    27 June - 02 July 2004
  • Firstpage
    182
  • Lastpage
    182
  • Abstract
    Our objective is to learn representations for the shape and the appearance of moving (dynamic) objects that support tasks such as synthesis, pose recovery, reconstruction, and tracking. In this paper we introduce a framework that aims to learn landmark-free, correspondence-free global representations of dynamic appearance manifolds. We use nonlinear dimensionality reduction to achieve an embedding of the global deformation manifold that preserves the geometric structure of the manifold. Given such an embedding, a nonlinear mapping is learned from the embedding space into the visual input space. Therefore, any visual input is represented by a linear combination of nonlinear basis functions centered along the manifold in the embedding space. We also show how an approximate solution for the inverse mapping can be obtained in closed form, which facilitates recovery of the intrinsic body configuration. We use the framework to learn the gait manifold as an example of a dynamic shape manifold, as well as to learn the manifolds of some simple gestures and facial expressions as examples of dynamic appearance manifolds.
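    The pipeline the abstract outlines — embed the deformation manifold, learn a mapping from the embedding into the visual input space as a linear combination of nonlinear basis functions, then invert it approximately to recover the body configuration — can be sketched on toy data. Everything below is an illustrative assumption rather than the paper's actual setup: the embedding is taken to be a unit circle parameterized by a phase (standing in for a learned gait-cycle embedding), the "visual inputs" are synthetic 5-D vectors, the Gaussian RBF center count and kernel width are arbitrary, and a dense search over candidate phases stands in for the paper's closed-form inverse solution.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T = 100
    t = np.linspace(0, 2 * np.pi, T, endpoint=False)        # intrinsic phase
    X = np.column_stack([np.cos(t), np.sin(t)])             # embedding coords (unit circle)

    # Toy "visual inputs": smooth nonlinear functions of the phase plus noise
    Y = np.column_stack([np.cos(t), np.sin(2 * t), np.cos(3 * t),
                         np.sin(t) ** 2, np.cos(t) * np.sin(t)])
    Y += 0.01 * rng.standard_normal(Y.shape)

    # Gaussian RBF centers placed along the embedded manifold
    centers = X[::10]                                       # 10 centers on the circle
    sigma = 0.5

    def psi(x):
        """Gaussian RBF features of an embedding point x."""
        d2 = ((centers - x) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * sigma ** 2))

    # Forward mapping: each input is a linear combination of the RBFs,
    # with coefficients B fit by linear least squares
    Psi = np.array([psi(x) for x in X])                     # (T, n_centers)
    B, *_ = np.linalg.lstsq(Psi, Y, rcond=None)

    Y_hat = Psi @ B
    err = np.abs(Y_hat - Y).max()                           # reconstruction error

    # Approximate inverse mapping: recover the phase of a query input by
    # scoring candidate embedding points (dense search, not closed form)
    y_query = Y[37]
    cand = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
    Xc = np.column_stack([np.cos(cand), np.sin(cand)])
    scores = [np.linalg.norm(psi(x) @ B - y_query) for x in Xc]
    t_recovered = cand[int(np.argmin(scores))]
    ```

    The least-squares fit recovers the mapping well because the toy targets are low-order harmonics of the phase, which lie close to the span of the Gaussian RBF translates; the recovered phase then lands near the true phase of the query frame.
    
    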
  • Keywords
    Computer science; Face recognition; Humans; Image reconstruction; Legged locomotion; Lighting; Object recognition; Principal component analysis; Shape; Tensile stress;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW '04)
  • Type
    conf
  • DOI
    10.1109/CVPR.2004.133
  • Filename
    1384982