Real Time Facial Expression Recognition Based on Edge Computing

Abstract:

In recent years, many large-scale information systems in the Internet of Things (IoT), such as smart cities, smart medical systems, and industrial Internet systems, can be modeled as interdependent sensor networks. Successfully applying edge computing to the IoT makes algorithms faster and more convenient, lowers overall costs, enables better business practices, and enhances sustainability. Facial action unit (AU) detection recognizes facial expressions by analyzing cues from the movement of specific atomic muscles in local facial regions. From the detected facial feature points, we can compute AU values and then apply classification algorithms for emotion recognition. By running optimized, custom algorithms directly on the raw image data from each camera, edge devices can transmit the detected emotions to the end user more easily than the raw data. However, the substantial network overhead of transferring facial action unit feature data makes it challenging to deploy a real-time facial expression recognition system in a distributed manner in production. We therefore designed a lightweight, edge computing-based distributed system on Raspberry Pi tailored to this need, and we optimized both data transfer and component deployment. At the network edge, front-end and back-end processing are separated to reduce round-trip delay, completing complex computing tasks while providing highly reliable, large-scale connection services. For IoT and smart city applications and services, the result is a smart sensing system that can be deployed anywhere with a network connection.
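The pipeline described above (compute geometric AU-style values from detected facial feature points, then feed them to a classifier for emotion recognition) could be prototyped roughly as follows. This is a minimal sketch, not the paper's implementation: the five-point landmark layout, the two geometric features standing in for AU values, the synthetic training data, and the linear SVM are all illustrative assumptions.

```python
# Sketch of an AU-style pipeline: geometric cues from facial landmarks,
# then a classifier mapping those cues to emotion labels.
# All landmark indices, features, and data here are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

def au_features(landmarks):
    """Compute simple AU-style geometric cues from (x, y) landmark points.

    Assumes a hypothetical 5-point layout:
    0: left eyebrow, 1: right eyebrow, 2: nose tip,
    3: left mouth corner, 4: right mouth corner.
    """
    # Brow-raise proxy: total eyebrow-to-nose distance (AU1/AU2-like cue).
    brow_raise = (np.linalg.norm(landmarks[0] - landmarks[2])
                  + np.linalg.norm(landmarks[1] - landmarks[2]))
    # Smile proxy: distance between mouth corners (AU12-like cue).
    mouth_width = np.linalg.norm(landmarks[3] - landmarks[4])
    return np.array([brow_raise, mouth_width])

# Tiny synthetic training set standing in for labelled AU measurements.
rng = np.random.default_rng(0)
neutral = rng.normal([2.8, 1.0], 0.05, size=(20, 2))   # narrower mouth
smiling = rng.normal([2.8, 1.8], 0.05, size=(20, 2))   # wider mouth
X = np.vstack([neutral, smiling])
y = np.array([0] * 20 + [1] * 20)  # 0 = neutral, 1 = happy

clf = SVC(kernel="linear").fit(X, y)

# A new face whose mouth-corner distance suggests a smile.
sample_landmarks = np.array([[0.0, 2.0], [2.0, 2.0], [1.0, 1.0],
                             [0.1, 0.0], [1.9, 0.0]])
sample = au_features(sample_landmarks).reshape(1, -1)
print(clf.predict(sample)[0])  # → 1 (happy)
```

In the deployed system, a landmark detector running on each Raspberry Pi would supply the points, and only the low-dimensional feature vector or the predicted label would cross the network, which is the data-transfer saving the abstract argues for.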