ChiNet: Deep Recurrent Convolutional Learning for Multimodal Spacecraft Pose Estimation

Abstract:

This article presents a deep learning pipeline that estimates the relative pose of a spacecraft by incorporating temporal information from a rendezvous sequence. It leverages the strength of long short-term memory (LSTM) units in modeling sequential data to process features extracted by a convolutional neural network (CNN) backbone. Three distinct training strategies, which follow a coarse-to-fine funneled approach, are combined to facilitate feature learning and improve end-to-end pose estimation by regression. The capability of CNNs to autonomously learn feature representations from images is exploited to fuse thermal infrared data with electro-optical red–green–blue (RGB) inputs, thereby mitigating the artifacts that arise when imaging space objects at visible wavelengths. Each contribution of the proposed framework, dubbed ChiNet, is demonstrated on a synthetic dataset, and the complete pipeline is validated on experimental data.
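The architecture described above can be illustrated with a minimal, untrained sketch: per-frame RGB and thermal features (a stand-in random projection replaces the CNN backbone), concatenated to fuse the two modalities, fed through a single LSTM cell across the sequence, and regressed to a 7-dimensional pose (3 translation components plus a unit quaternion). All dimensions, weight initializations, and helper names here are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: per-modality feature width and LSTM hidden size.
FEAT, HID = 64, 32

# Stand-in "backbone" weights: random projections of flattened 8x8 frames.
Wrgb = rng.standard_normal((FEAT, 8 * 8 * 3)) * 0.01  # RGB branch
Wtir = rng.standard_normal((FEAT, 8 * 8)) * 0.01      # thermal branch

# LSTM parameters (four gates stacked row-wise) and pose regression head.
W = rng.standard_normal((4 * HID, 2 * FEAT)) * 0.01
U = rng.standard_normal((4 * HID, HID)) * 0.01
b = np.zeros(4 * HID)
Whead = rng.standard_normal((7, HID)) * 0.01  # 3 translation + 4 quaternion


def extract_features(rgb, tir):
    """Fuse the two modalities by concatenating per-branch feature vectors."""
    return np.concatenate([Wrgb @ rgb.ravel(), Wtir @ tir.ravel()])


def lstm_step(x, h, c):
    """One standard LSTM step: input, forget, output, and candidate gates."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o, g = np.split(W @ x + U @ h + b, 4)
    c = sig(f) * c + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c


def estimate_pose_sequence(frames):
    """Carry LSTM state across a rendezvous sequence; regress a pose per frame."""
    h, c = np.zeros(HID), np.zeros(HID)
    poses = []
    for rgb, tir in frames:
        h, c = lstm_step(extract_features(rgb, tir), h, c)
        p = Whead @ h
        q = p[3:] / np.linalg.norm(p[3:])  # normalize the rotation part
        poses.append(np.concatenate([p[:3], q]))
    return poses


# Toy sequence of five (RGB, thermal) frame pairs.
frames = [(rng.random((8, 8, 3)), rng.random((8, 8))) for _ in range(5)]
poses = estimate_pose_sequence(frames)
print(len(poses), poses[0].shape)
```

Because the hidden state persists between frames, each pose estimate conditions on the whole sequence so far, which is the temporal-information aspect the abstract highlights.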