Vision Based Hand Gesture Recognition Using 3D Shape Context
Abstract:

Hand gesture recognition plays an important role in robot vision and makes human-robot interaction more flexible and convenient. Among hand gesture features, shape is a meaningful and informative cue, and an effective representation of hand shape is critical for recognition. In this paper, we propose a novel method to capture the shape information of 3D hand gestures. Hand shapes are segmented from depth images captured by a Kinect sensor against cluttered backgrounds. Given the surface of the hand shape, we construct vectors and build histograms based on the division of these vectors. A hand gesture is then represented by a 3D Shape Context descriptor that carries rich 3D information. Finally, the Dynamic Time Warping algorithm is used for hand gesture recognition. Extensive experiments on two benchmark datasets show that the proposed method outperforms recent related methods.
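The matching step named in the abstract is standard Dynamic Time Warping. As a rough sketch (not the paper's implementation), the classic DTW recurrence can be applied to two sequences of per-frame descriptors; the feature dimension and the Euclidean local distance here are illustrative assumptions, since the paper would use its 3D Shape Context histograms:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two feature sequences.

    a, b: arrays of shape (length, dim), e.g. one descriptor per frame.
    (The descriptor contents are hypothetical; the paper's method would
    supply 3D Shape Context histograms here.)
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

A gesture would then be classified, for instance, by nearest neighbor: compute `dtw_distance` against each labeled template sequence and pick the label with the smallest warped distance. DTW's appeal here is that it absorbs differences in gesture speed by allowing frames to repeat or be skipped along the alignment path.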