Deep Margin Sensitive Representation Learning for Cross Domain Facial Expression Recognition

Abstract:

Cross-domain Facial Expression Recognition (FER) aims to safely transfer knowledge learned from labeled source data to unlabeled target data, which is challenging due to the subtle differences between expressions and the large discrepancy between domains. Existing methods mainly focus on reducing the domain shift to obtain transferable features but fail to learn discriminative representations for recognizing facial expressions, which may result in negative transfer under cross-domain settings. To this end, we propose a novel Deep Margin-Sensitive Representation Learning (DMSRL) framework, which extracts multi-level discriminative features during semantic-aware domain adaptation. Specifically, we design a semantic metric learning module based on the category priors of the source data and the generated pseudo labels of the target data, which facilitates discriminative intra-domain representation learning and transferable inter-domain knowledge discovery by enlarging the category margin. Moreover, we develop a mutual information minimization module that simultaneously distills the domain-invariant components and eliminates the domain-sensitive ones, which benefits discriminative transferable feature learning by generating accurate pseudo target labels. Furthermore, instead of utilizing only global features, we formulate a multi-level feature extraction module that also captures local features, which contain the detailed information needed to distinguish the subtle changes among different expressions. These modules are jointly optimized in our DMSRL in an end-to-end manner to ensure the positive transfer of source knowledge. Extensive experimental results on seven databases demonstrate that our DMSRL achieves superior performance against state-of-the-art baselines.
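To make the idea of "enlarging the category margin" concrete, the following is a minimal PyTorch sketch of a generic margin-based semantic metric loss over source labels and target pseudo labels. It is an illustrative assumption, not the authors' implementation: the function name, the contrastive formulation, and the `margin` value are all hypothetical choices used only to show how same-expression pairs can be pulled together while different-expression pairs are pushed at least a margin apart.

```python
# Minimal sketch (not the authors' code) of a margin-based semantic metric loss.
# Assumes `features` are embedding vectors and `labels` mixes ground-truth source
# labels with pseudo labels assigned to target samples.
import torch
import torch.nn.functional as F


def margin_sensitive_metric_loss(features: torch.Tensor,
                                 labels: torch.Tensor,
                                 margin: float = 0.5) -> torch.Tensor:
    """Pull same-expression pairs together and push different-expression pairs
    apart by at least `margin` (a generic contrastive formulation used here
    only to illustrate enlarging the category margin)."""
    features = F.normalize(features, dim=1)
    dist = torch.cdist(features, features)                 # pairwise Euclidean distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    eye = torch.eye(len(labels), device=features.device)
    pos_mask = same - eye                                   # same class, excluding self-pairs
    neg_mask = 1.0 - same                                    # different class
    pos_loss = (pos_mask * dist.pow(2)).sum() / pos_mask.sum().clamp(min=1)
    neg_loss = (neg_mask * F.relu(margin - dist).pow(2)).sum() / neg_mask.sum().clamp(min=1)
    return pos_loss + neg_loss


# Toy usage: 8 embeddings of dimension 16 over 7 expression classes.
if __name__ == "__main__":
    feats = torch.randn(8, 16)
    labs = torch.randint(0, 7, (8,))
    print(margin_sensitive_metric_loss(feats, labs))
```

In the full framework this term would be combined with the classification loss, the mutual information minimization objective, and the multi-level feature extraction described above; the exact weighting and formulation are given in the paper itself.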