DocumentCode :
3606177
Title :
Saliency Map Generation by the Convolutional Neural Network for Real-Time Traffic Light Detection Using Template Matching
Author :
John, Vijay ; Yoneda, Keisuke ; Liu, Zheng ; Mita, Seiichi
Author_Institution :
Intell. Inf. Process. Lab., Toyota Technol. Inst., Nagoya, Japan
Volume :
1
Issue :
3
fYear :
2015
Firstpage :
159
Lastpage :
173
Abstract :
A critical issue in autonomous vehicle navigation and advanced driver assistance systems (ADAS) is the accurate real-time detection of traffic lights. Typically, vision-based sensors are used to detect the traffic light. However, the detection of traffic lights using computer vision, image processing, and learning algorithms is not trivial. The challenges include appearance variations, illumination variations, and reduced appearance information in low-illumination conditions. To address these challenges, we present a visual camera-based real-time traffic light detection algorithm, in which we identify the spatially constrained region-of-interest in the image containing the traffic light. Given the identified region-of-interest, we achieve high traffic light detection accuracy with few false positives, even in adverse environments. To perform robust traffic light detection in varying conditions with few false positives, the proposed algorithm consists of two steps: an offline saliency map generation and a real-time traffic light detection. In the offline step, a convolutional neural network, i.e., a deep learning framework, detects and recognizes the traffic lights in the image using region-of-interest information provided by an onboard GPS sensor. The detected traffic light information is then used to generate the saliency maps with a modified multidimensional density-based spatial clustering of applications with noise (M-DBSCAN) algorithm. The generated saliency maps are indexed using the vehicle GPS information. In the real-time step, traffic lights are detected by retrieving the relevant saliency maps and performing template matching using colour information. The proposed algorithm is validated with datasets acquired in varying conditions and in different countries, e.g., the USA, Japan, and France. The experimental results report high detection accuracy with negligible false positives under varied illumination conditions.
More importantly, an average computational time of 10 ms/frame is achieved. A detailed parameter analysis is conducted, and the observations are summarized and reported in this paper.
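To make the offline step concrete, the sketch below clusters detected traffic-light centres with plain 2-D DBSCAN and turns each cluster into a bounding-box region-of-interest, as a saliency map would constrain the real-time search. This is an illustrative sketch only: the paper uses a modified multidimensional variant (M-DBSCAN) whose details are not given in the abstract, and the function names `dbscan` and `saliency_rois` are assumptions, not the authors' code.

```python
from collections import defaultdict
import math

def dbscan(points, eps, min_pts):
    """Plain 2-D DBSCAN. Returns a label per point: -1 = noise, >=0 = cluster id."""
    labels = [None] * len(points)

    def neighbors(i):
        # Indices of all points within eps of point i (including i itself).
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisional noise; may become a border point
            continue
        labels[i] = cid             # i is a core point: start a new cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:                # expand the cluster from its core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid     # noise reached from a core point -> border
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:  # j is also a core point: keep expanding
                seeds.extend(jn)
        cid += 1
    return labels

def saliency_rois(detections, eps=15.0, min_pts=3):
    """Cluster CNN-detected traffic-light centres (pixel coords) into ROI boxes.

    Each returned tuple is (x_min, y_min, x_max, y_max); in the paper's
    pipeline such ROIs would be indexed by vehicle GPS position.
    """
    labels = dbscan(detections, eps, min_pts)
    clusters = defaultdict(list)
    for p, l in zip(detections, labels):
        if l >= 0:                  # drop noise detections
            clusters[l].append(p)
    return [(min(x for x, _ in c), min(y for _, y in c),
             max(x for x, _ in c), max(y for _, y in c))
            for c in clusters.values()]
```

At run time, template matching over colour channels would then be restricted to the retrieved ROIs instead of the full frame, which is what makes the reported 10 ms/frame average plausible.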
Keywords :
image colour analysis; image denoising; image matching; lighting; neural nets; object detection; road traffic; traffic engineering computing; ADAS; Global Positioning System; M-DBSCAN algorithm; advanced driver assistance systems; autonomous vehicle navigation; colour information; computer vision; convolutional neural network; illumination condition; image processing; learning algorithms; multidimensional density-based spatial clustering of applications with noise algorithm; real-time traffic light detection; region-of-interest identification; saliency map generation; template matching; vehicle GPS information; vision-based sensors; Accuracy; Clustering algorithms; Image color analysis; Imaging; Lighting; Real-time systems; Vehicles; Convolutional Neural Network; DBSCAN; Saliency Maps; Traffic Light Detection
fLanguage :
English
Journal_Title :
IEEE Transactions on Computational Imaging
Publisher :
IEEE
ISSN :
2333-9403
Type :
jour
DOI :
10.1109/TCI.2015.2480006
Filename :
7272062