Facial features segmentation, analysis and recognition of facial expressions by the Transferable Belief Model
The aim of this work is the analysis and classification of facial expressions. Experiments in psychology show that humans are able to recognize emotions from the temporal evolution of a few characteristic fiducial points. We therefore first propose an automatic system for the extraction of the permanent facial features (eyes, eyebrows and lips). In this work we focus on the segmentation of the eyes and eyebrows; the segmentation of the lip contours is based on previous work developed in the laboratory. The proposed algorithm for eye and eyebrow contour segmentation consists of three steps: first, parametric models are defined to fit the contour of each feature as accurately as possible; then, a set of characteristic points is detected to initialize the selected models on the face; finally, the initial models are fitted by taking the luminance gradient information into account.

The segmentation of the eye, eyebrow and lip contours leads to what we call skeletons of expressions. To measure the deformation of the characteristic features, five characteristic distances are defined on these skeletons. Based on the states of these distances, a set of logical rules is defined for each of the considered expressions: Smile, Surprise, Disgust, Anger, Fear, Sadness and Neutral. These rules are compatible with the MPEG-4 standard, which describes the deformations undergone by each facial feature during the production of the six universal facial expressions. However, human behavior is not binary: a pure expression is rarely produced. To model the doubt between several expressions as well as unknown expressions, the Transferable Belief Model is used as a fusion process for facial expression classification.
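The rule-based stage above can be sketched as follows. This is a minimal illustration only: the distance names (D1..D5), the threshold, the state labels and the rule table are hypothetical assumptions for the sake of the example, not the exact rules of this work.

```python
# Hypothetical sketch of distance-to-state mapping and rule matching.
# D1..D5, the threshold value and the rule table are illustrative
# assumptions, not the actual rules defined in this work.

def distance_state(current, neutral, threshold=0.15):
    """Classify a characteristic distance relative to its value on the
    neutral face as 'C+' (increased), 'C-' (decreased) or 'S' (stable)."""
    ratio = (current - neutral) / neutral
    if ratio > threshold:
        return "C+"
    if ratio < -threshold:
        return "C-"
    return "S"

# Illustrative rule table: each expression is described by the expected
# state of each distance (MPEG-4-style deformations, much simplified).
RULES = {
    "Smile":    {"D1": "S",  "D2": "C-", "D3": "C+", "D4": "C+", "D5": "C-"},
    "Surprise": {"D1": "C+", "D2": "C+", "D3": "C-", "D4": "C+", "D5": "S"},
    "Neutral":  {"D1": "S",  "D2": "S",  "D3": "S",  "D4": "S",  "D5": "S"},
}

def match(states):
    """Return all expressions whose rule agrees with every observed state
    (several matches model the doubt between expressions)."""
    return [e for e, rule in RULES.items()
            if all(states[d] == s for d, s in rule.items())]
```

Returning every matching expression, rather than forcing a single winner, is what leaves room for the doubt that the Transferable Belief Model then handles.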
The classification system takes into account the evolution of the facial feature deformations over time. As a step towards an audio-visual system for the classification of emotional expressions, a preliminary study on vocal expressions is also proposed.
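The fusion step of the Transferable Belief Model rests on the conjunctive combination of basic belief assignments (Smets' unnormalized Dempster rule), which keeps mass on the empty set as a measure of conflict and allows mass on subsets of expressions to model doubt. The sketch below assumes illustrative mass values; the actual assignments in this work are derived from the states of the five characteristic distances.

```python
# Minimal sketch of the TBM conjunctive combination rule. The frame of
# discernment and the mass values below are illustrative assumptions.
from itertools import product

def conjunctive(m1, m2):
    """Combine two basic belief assignments, given as dicts mapping
    frozensets of expressions to masses. Unlike Dempster's normalized
    rule, mass assigned to the empty set (conflict) is kept."""
    out = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        out[inter] = out.get(inter, 0.0) + ma * mb
    return out

# Evidence from one source: mostly Smile, with doubt Smile-or-Disgust.
m1 = {frozenset({"Smile"}): 0.6,
      frozenset({"Smile", "Disgust"}): 0.4}
# Evidence from another source: hesitates between larger subsets.
m2 = {frozenset({"Smile", "Surprise"}): 0.7,
      frozenset({"Smile", "Disgust", "Surprise"}): 0.3}

m = conjunctive(m1, m2)
# Mass concentrates on {Smile}, with residual doubt on {Smile, Disgust}.
```

Because intersections can yield non-singleton sets, the combined assignment naturally expresses doubt between several expressions, and any mass reaching the empty set would signal an unknown or conflicting expression.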
