Abstract:
Traffic object detection and recognition systems play an essential role in Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles (AVs). In this research, we focus on four important classes of traffic objects: traffic signs, road vehicles, pedestrians, and traffic lights. We first review the major traditional machine learning and deep learning methods used in the literature to detect and recognize these objects. We then propose a vision-based framework that detects and recognizes traffic objects both inside and outside the driver's attentional visual area. The approach uses the 3D absolute coordinates of the driver's gaze point, obtained through the combined, cross-calibrated use of a front-view stereo imaging system and a non-contact 3D gaze tracker. A combination of multi-scale HOG-SVM and Faster R-CNN-based models is used in the detection stage, and the recognition stage verifies the sets of generated hypotheses with a ResNet-101 network. We applied our approach to real data collected during drives in an urban environment with the RoadLAB instrumented vehicle. Our framework achieved 91% correct object detections and produced promising results in the recognition stage.