Facial feature tracking is widely used in face recognition, gesture recognition, expression analysis, and related applications. The Active Appearance Model (AAM) is one of the most powerful methods for object feature localization. Nevertheless, AAM still suffers from several drawbacks, such as sensitivity to changes in the view angle. We present a method that addresses this problem using depth data acquired from a Kinect sensor: the depth data provides head pose information, while the RGB data is used for AAM matching. We build an approximate 3D facial grid model and use it, together with the head pose information, to initialize the AAM in subsequent frames. To avoid local extrema, we divide the model into several parts according to pose and match the facial features against the closest model. Experimental results show improved AAM performance under head rotation.
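As a rough illustration of the pose-based model selection described above, the sketch below (not the authors' code; the pose bins, model objects, and function names are assumptions for illustration) shows one way to pick the pose-specific AAM model closest to a head pose estimated from depth data, which would then be used to initialize the AAM search on the RGB frame.

```python
# Hypothetical sketch: choosing the pose-specific AAM model nearest to the
# current head pose, following the idea of dividing the facial model into
# several parts by pose and matching against the closest one.
import numpy as np

# Assumed pose bins (yaw, pitch in degrees); the actual partitioning used in
# the paper is not specified here.
POSE_BINS = {
    "frontal": (0.0, 0.0),
    "left":    (-30.0, 0.0),
    "right":   (30.0, 0.0),
    "up":      (0.0, 20.0),
    "down":    (0.0, -20.0),
}

def closest_pose_model(head_pose, models):
    """Return the AAM model whose pose bin is nearest to the estimated head pose.

    head_pose: (yaw, pitch) in degrees, e.g. estimated from Kinect depth data.
    models:    dict mapping bin name -> fitted AAM model object (assumed).
    """
    yaw, pitch = head_pose
    best = min(POSE_BINS, key=lambda k: np.hypot(POSE_BINS[k][0] - yaw,
                                                 POSE_BINS[k][1] - pitch))
    return models[best]

# Illustrative usage: pick the model for a head turned ~25 degrees to the right,
# then initialize the AAM search on the current RGB frame with it.
# model = closest_pose_model((25.0, 3.0), trained_models)
# landmarks = model.fit(rgb_frame, initial_shape_from_3d_grid_and_pose)
```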
