Enhanced Visible Light Localization Based on Machine Learning and Optimized Fingerprinting in Wireless Sensor Networks

Abstract:

This article presents a robust visible light localization (VLL) technique for wireless sensor networks, with 2-D indoor positioning (IP) capabilities, based on embedded machine learning (ML) running on low-cost, low-power microcontrollers. The implemented VLL technique uses four optical sources (i.e., LEDs) modulated at different frequencies. In particular, the received signal strengths (RSSs) of the optical signals are evaluated by a microcontroller on board the sensor nodes via the fast Fourier transform (FFT). The RSSs are fed to four embedded ML regressors that estimate the target position within the workspace. The four neural networks (NNs), one for each possible triplet of LEDs, are trained by exploiting a novel technique to generate the training datasets. This method, called optimized fingerprinting (OF), allows arbitrarily large datasets to be created from only a few measurements in the field, avoiding time-consuming experimental data collection. The NNs are devised to be accurate yet lightweight, facilitating their implementation and execution on the microcontroller. Furthermore, because four NNs are present, four position estimates are obtained. This redundancy is exploited to detect and effectively manage total or partial shading of one light source and, by averaging the four positions, to enhance the positioning accuracy under normal operating conditions (i.e., no obstacles). Tests performed in a 1×1 m workspace show an overall mean accuracy of about 1 cm, with a standard deviation below one centimeter and a maximum error of around 3 cm.
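The following Python sketch illustrates the processing pipeline outlined above: FFT-based RSS extraction at the four modulation frequencies, one position estimate per LED triplet, and averaging of the four estimates. The sampling rate, modulation frequencies, and the dummy_regressor placeholder are illustrative assumptions only, not values or models from the article; in the described system each triplet would instead be handled by its own trained lightweight NN running on the microcontroller.

# Minimal sketch (not the authors' implementation) of FFT-based RSS extraction
# and triplet-wise position averaging. Sampling rate, modulation frequencies,
# and dummy_regressor are hypothetical placeholders for illustration.
from itertools import combinations
import numpy as np

FS = 10_000                           # assumed ADC sampling rate [Hz]
LED_FREQS = [1000, 1500, 2000, 2500]  # assumed LED modulation frequencies [Hz]

def extract_rss(samples: np.ndarray) -> np.ndarray:
    """Estimate each LED's RSS as the FFT magnitude at its modulation frequency."""
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)
    return np.array([spectrum[np.argmin(np.abs(freqs - f))] for f in LED_FREQS])

def dummy_regressor(rss_triplet: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for one trained NN mapping a 3-RSS input to an (x, y) estimate."""
    return np.array([rss_triplet.mean(), rss_triplet.max()])  # placeholder mapping

def estimate_position(samples: np.ndarray) -> np.ndarray:
    rss = extract_rss(samples)
    # One estimate per LED triplet (4 choose 3 = 4); average them under normal conditions.
    estimates = [dummy_regressor(rss[list(idx)]) for idx in combinations(range(4), 3)]
    return np.mean(estimates, axis=0)

if __name__ == "__main__":
    t = np.arange(2048) / FS
    # Synthetic photodiode signal: sum of the four modulated tones plus noise.
    signal = sum(a * np.sin(2 * np.pi * f * t)
                 for a, f in zip([0.8, 0.6, 0.4, 0.2], LED_FREQS))
    signal += 0.01 * np.random.randn(t.size)
    print("RSS per LED:", extract_rss(signal))
    print("Averaged (x, y) estimate:", estimate_position(signal))

In the article's scheme, a large spread among the four triplet estimates would additionally signal partial or total shading of one LED, so the affected triplet(s) could be discarded before averaging.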