Facial Expression Recognition using Local Gravitational Force Descriptor based Deep Convolution Neural Network
Abstract
An image is worth a thousand words; hence, a face
image conveys extensive details about a person's identity, gender,
age, and emotional states of mind. Facial expressions play an
important role in community-based interactions and are often
used in the behavioral analysis of emotions. Automatic recognition
of facial expressions from a face image is a challenging
task in the computer vision community and admits a large set of
applications, such as driver safety, human–computer interactions,
health care, behavioral science, video conferencing, cognitive
science, and others. In this work, a deep-learning-based scheme
is proposed for identifying the facial expression of a person. The
proposed method consists of two parts. The first part extracts local features from face images using a local gravitational force descriptor, while, in the second part, the descriptor is fed into a novel deep convolution neural network (DCNN) model. The
proposed DCNN has two branches. The first branch explores geometric features, such as edges, curves, and lines, whereas holistic
features are extracted by the second branch. Finally, a score-level fusion technique is adopted to compute the final classification score. The proposed method, along with 25 state-of-the-art
methods, is evaluated on five available benchmark databases