Title :
Portable Camera-Based Assistive Text and Product Label Reading From Hand-Held Objects for Blind Persons
Author :
Chucai Yi; YingLi Tian; Aries Arditi
Author_Institution :
Grad. Center, City Univ. of New York, New York, NY, USA
Abstract :
We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging from hand-held objects in their daily lives. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we first propose an efficient and effective motion-based method to define a region of interest (ROI) in the video by asking the user to shake the object. This method extracts the moving object region using mixture-of-Gaussians-based background subtraction. In the extracted ROI, text localization and recognition are conducted to acquire text information. To automatically localize text regions within the object ROI, we propose a novel text localization algorithm that learns gradient features of stroke orientations and distributions of edge pixels in an Adaboost model. Text characters in the localized text regions are then binarized and recognized by off-the-shelf optical character recognition software. The recognized text codes are output to blind users as speech. Performance of the proposed text localization algorithm is quantitatively evaluated on the ICDAR-2003 and ICDAR-2011 Robust Reading Datasets. Experimental results demonstrate that our algorithm achieves state-of-the-art performance. The proof-of-concept prototype is also tested on a dataset collected with ten blind persons to evaluate the effectiveness of the system's hardware. We explore user interface issues and assess the robustness of the algorithm in extracting and reading text from different objects with complex backgrounds.
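The motion-based ROI step in the abstract relies on mixture-of-Gaussians (MoG) background subtraction: pixels whose intensity departs from their learned background mixture while the user shakes the object are marked as foreground. As an illustration of that general technique only (not the authors' implementation; all parameter values here are illustrative assumptions), a minimal single-pixel MoG model in the style of Stauffer and Grimson might look like:

```python
import math

class PixelMoG:
    """Simplified mixture-of-Gaussians background model for one grayscale
    pixel. K, alpha, the match threshold, and the background ratio T are
    illustrative assumptions, not values taken from the paper."""

    def __init__(self, k=3, alpha=0.05, match_sigmas=2.5, t=0.7):
        self.k = k                        # max number of Gaussian components
        self.alpha = alpha                # learning rate
        self.match_sigmas = match_sigmas  # match window in std deviations
        self.t = t                        # weight fraction treated as background
        self.gaussians = []               # each component: [weight, mean, variance]

    def update(self, x):
        """Feed one intensity sample; return True if x is foreground."""
        matched = None
        for g in self.gaussians:
            if abs(x - g[1]) <= self.match_sigmas * math.sqrt(g[2]):
                matched = g
                break
        if matched is None:
            # No component explains x: replace the weakest one (or grow).
            if len(self.gaussians) >= self.k:
                self.gaussians.sort(key=lambda g: g[0])
                self.gaussians.pop(0)
            self.gaussians.append([self.alpha, float(x), 100.0])
        else:
            # Pull the matched component toward the new sample.
            matched[1] += self.alpha * (x - matched[1])
            matched[2] += self.alpha * ((x - matched[1]) ** 2 - matched[2])
            matched[2] = max(matched[2], 1.0)
        # Decay all weights, reward the matched component, renormalize.
        for g in self.gaussians:
            g[0] = (1 - self.alpha) * g[0] + (self.alpha if g is matched else 0.0)
        total = sum(g[0] for g in self.gaussians)
        for g in self.gaussians:
            g[0] /= total
        # Background = highest-ranked components covering weight fraction t.
        ranked = sorted(self.gaussians, key=lambda g: -g[0] / math.sqrt(g[2]))
        acc, background = 0.0, []
        for g in ranked:
            background.append(g)
            acc += g[0]
            if acc > self.t:
                break
        if matched is None:
            return True  # unexplained sample: foreground
        return matched not in background
```

In a full system one such model (or a vectorized equivalent) runs per pixel, and the foreground mask accumulated while the object is shaken defines the ROI passed to text localization.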
Keywords :
Gaussian processes; edge detection; feature extraction; handicapped aids; human computer interaction; image motion analysis; learning (artificial intelligence); optical character recognition; text detection; user interfaces; video cameras; video signal processing; Adaboost model; ICDAR-2003 robust reading datasets; ICDAR-2011 robust reading datasets; ROI; blind persons; camera-based assistive text reading framework; edge pixel distributions; gradient feature learning; handheld objects; mixture-of-Gaussians-based background subtraction method; motion-based method; moving object region extraction; off-the-shelf optical character recognition software; portable camera-based assistive text; product label reading; product packaging; recognized text codes; region of interest; stroke orientations; text characters; text information; text labels; text localization algorithm; text recognition; user interface; Assistive devices; blindness; distribution of edge pixels; hand-held objects; optical character recognition (OCR); stroke orientation; text reading; text region localization;
Journal_Title :
Mechatronics, IEEE/ASME Transactions on
DOI :
10.1109/TMECH.2013.2261083