1. Introduction
Text embedded in still or moving images, also known as visual text, carries valuable information and is exploited in many content-based image and video analysis tasks. Such text finds application in content-based image search, video information retrieval, and mobile text analysis and recognition (Zhong, Zhang & Jain, 2000; Li, Doermann & Kia, 2000; Weinman, Miller & Hanson, 2009; Yin, Hao, Sun & Naoi, 2011; Shivakumara, Trung & Tan, 2011). Text detection and recognition can play a vital role in the day-to-day life of humans and may in future become part of many computer applications: they support language translation, navigation, and text conversion systems; systems that count the characters and words in a document; identification of the type of script, which aids palaeography studies; licence and container plate recognition; indexing of documents with a high level of visual content through text-based search engines; and segmentation of textual regions from other regions, which yields higher compression rates and better image quality in object-based coding schemes such as MPEG-4.
However, owing to complex backgrounds and variations in font, size, colour, style (Ksouri & Hidri, 2015) and orientation, text in images has to be robustly detected before being recognized and retrieved. Extracting and recognizing such text from visual clues is one of the most difficult and important tasks in the computer vision community and an active area of recent research. Current text detection methods can roughly be categorised into three groups: sliding-window-based methods (Chen & Yuille, 2004; Lee, Lee, Yuille & Koch, 2011; K. Kim, Jung & J. Kim, 2003), connected-component-based methods (Epshtein, Ofek & Wexler, 2010; Yi & Tian, 2011, 2012, 2013; Thillou & Gosselin, 2007) and hybrid methods (Pan, Hou & Liu, 2011). Sliding-window-based methods are also known as region-based methods.
These methods slide a window over the image to search for possible text regions and then use machine learning techniques to classify each window as text or non-text. They are slow because the image has to be processed at multiple scales. Connected-component-based methods extract character candidates from images by connected component analysis. They follow a bottom-up approach, grouping small components into successively larger ones until all regions in the image are identified. A geometrical analysis is then needed to merge the text components according to their spatial arrangement, filter out non-text components, and mark the boundaries of the text regions. The hybrid method presented by Pan, Hou and Liu (2011) exploits a region detector to detect text candidates and extracts connected components as character candidates by local binarization. Non-characters are eliminated with a Conditional Random Field model (Lafferty, McCallum & Pereira, 2001), and the remaining characters are finally grouped into text (Viola & Jones, 2001).
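The multi-scale sliding-window scheme described above can be sketched as follows. This is a minimal illustration, not any of the cited systems: the `classify` argument stands in for a trained text/non-text classifier (the cited methods use learned classifiers), and the crude pixel-skipping downsampler is an assumption made to keep the sketch self-contained.

```python
def sliding_window_detect(image, classify, win=(8, 8), step=4, scales=(1.0, 0.5)):
    """Scan `image` (a 2D list of pixel intensities) at several scales and
    return windows that `classify` flags as text.

    `classify` is a placeholder for a trained text/non-text classifier;
    it receives a win-sized patch and returns True for text.
    """
    detections = []
    for scale in scales:
        # Downsample by simple pixel skipping (real systems smooth first).
        stride = int(round(1 / scale))
        scaled = [row[::stride] for row in image[::stride]]
        h = len(scaled)
        w = len(scaled[0]) if scaled else 0
        wh, ww = win
        for y in range(0, h - wh + 1, step):
            for x in range(0, w - ww + 1, step):
                patch = [row[x:x + ww] for row in scaled[y:y + wh]]
                if classify(patch):
                    # Map the window back to original-image coordinates.
                    detections.append((int(y / scale), int(x / scale),
                                       int(wh / scale), int(ww / scale)))
    return detections
```

Scanning every position at every scale is what makes these methods slow: the number of classifier evaluations grows with the image area times the number of scales, which is why the text above notes their poor speed.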
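The bottom-up connected-component pipeline can likewise be sketched in two stages: label the components of a binarized image, then apply a geometric filter to discard non-text candidates. The BFS labeling and the particular size/aspect-ratio thresholds below are illustrative assumptions, standing in for the geometrical analysis the cited methods perform.

```python
from collections import deque

def connected_components(binary):
    """4-connected component labeling of a binary image (2D list of 0/1)
    via breadth-first search; returns one bounding box per component as
    (ymin, xmin, ymax, xmax)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                seen[sy][sx] = True
                queue = deque([(sy, sx)])
                ymin = ymax = sy
                xmin = xmax = sx
                while queue:
                    y, x = queue.popleft()
                    ymin, ymax = min(ymin, y), max(ymax, y)
                    xmin, xmax = min(xmin, x), max(xmax, x)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((ymin, xmin, ymax, xmax))
    return boxes

def filter_text_candidates(boxes, min_size=2, max_aspect=5.0):
    """Crude geometric filter: drop tiny specks and extremely elongated
    components — a stand-in for the geometrical analysis described above."""
    kept = []
    for ymin, xmin, ymax, xmax in boxes:
        height, width = ymax - ymin + 1, xmax - xmin + 1
        if min(height, width) >= min_size and \
                max(height, width) / min(height, width) <= max_aspect:
            kept.append((ymin, xmin, ymax, xmax))
    return kept
```

In a full system a further grouping step would merge the surviving character boxes into words and lines based on their spatial arrangement; the filter here only shows the candidate-elimination idea.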