Dimension Reduction Using Image Transform for Content-Based Feature Extraction

Sourav De (Cooch Behar Government Engineering College, India), Madhumita Singha (Xavier Institute of Social Service, India), Komal Kumari (Xavier Institute of Social Service, India), Ritika Selot (Xavier Institute of Social Service, India) and Akshat Gupta (Xavier Institute of Social Service, India)
Copyright: © 2018 |Pages: 15
DOI: 10.4018/978-1-5225-5775-3.ch002


Technological advances in machine learning have enabled the classification of images in gigantic datasets. Classification with content-based image feature extraction categorizes images by their visual content, in contrast to conventional text-based annotation. This chapter presents a feature extraction technique based on the application of an image transform. The method extracts meaningful features while reducing the feature dimension. A technique known as the fractional coefficient of transforms is adopted to achieve this dimension reduction. Two color spaces, RGB and YUV, are considered, and their classification metrics are compared to identify the best possible reduced feature dimension. Finally, the results are compared with state-of-the-art techniques and reveal improved performance for the proposed feature extraction technique.
Chapter Preview


A vast growth in image datasets has followed the easy availability of high-end image-capturing devices, and with it the need for efficient feature extraction to promptly identify database images has risen steeply (Thepade, 2017). Hence, computational capabilities must grow by leaps and bounds to save time and resources and to speed up applications. Image identification has been governed predominantly by text-based annotation for assorted applications across diverse domains (Das, 2017a). In that setting, classification results are computed from a text representation of the image content, which often depends on the vocabulary of the data entry operator and frequently turns out to be inconsistent. Business functions are also adversely affected by inappropriate classification results, with corresponding effects on revenue. Content-based image classification has emerged as a fruitful alternative: it uses the content of the image itself as features, in contrast to the text-based annotation process (Das, 2017b). A feature may represent a single attribute of the image or a composition of several attributes (Lee, 2013; Shaikh, 2013; Liu, 2013; Yanli, 2012; Thepade, 2015). Applying an image transform reduces the image to a compact form and thus shrinks the feature vector of the image; the reduced feature vector in turn diminishes the computational overhead of the classification process (Jing, 2004). Content-based image classification has assorted applications in criminology, computer-aided diagnosis, the military, GIS, and various other fields. It has proved productive in engineering and architectural projects, and it aids the advertising and publishing sector, where journalists keep records of various events and advertisements.
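The dimension reduction described above can be sketched in a few lines. The idea is that a 2-D transform such as the DCT concentrates most of the image energy in its low-frequency (upper-left) coefficients, so keeping only a fractional block of coefficients yields a much shorter feature vector. The sketch below is illustrative rather than the chapter's exact procedure; the function names and the default fraction of 0.25 are assumptions, and a plain orthonormal DCT-II is used.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def fractional_dct_features(channel: np.ndarray, fraction: float = 0.25) -> np.ndarray:
    """Apply a 2-D DCT to one image channel and keep only the upper-left
    (low-frequency) fractional block of coefficients as the feature vector."""
    n, m = channel.shape
    coeffs = dct_matrix(n) @ channel @ dct_matrix(m).T  # 2-D DCT-II
    kn, km = max(1, int(n * fraction)), max(1, int(m * fraction))
    return coeffs[:kn, :km].ravel()                     # fractional coefficients
```

With fraction 0.25, a 256 x 256 channel yields a 64 x 64 coefficient block, i.e. 4,096 features instead of 65,536 -- a 16-fold reduction in feature dimension, which is the kind of saving the chapter exploits.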
In archaeology and historical research, maintaining such records assists researchers, since the final image obtained after excavation can be matched against recorded references (Das, 2015a). A custom-built image feature extraction process may involve the complete image surface. This chapter presents a technique that maneuvers only a small part of the actual image and works on it to make the match more accurate. The image is separated into its constituent primary color components: red (R), green (G), and blue (B). The Discrete Cosine Transform (DCT) is applied to each component for feature extraction (Kekre, 2010; Das, 2015b). The test images are augmented by a technique named odd image formation, after which the DCT is applied and partial transform coefficients are extracted as image features (Thepade, 2013). This preprocessing step increases the efficiency of feature extraction and helps achieve elevated accuracy in the classification outputs. The authors therefore attempt to identify the essential feature components that contribute to classification accuracy by eliminating features that do not improve the classification results. Classification of images is thus performed on the basis of image content represented by features extracted from partial-energy coefficients. The classification results are compared with those of benchmark content-based feature extraction techniques, and the proposed technique outperforms them on the classification metrics.
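The color-space handling referred to above can be sketched as follows. The RGB-to-YUV conversion uses the standard BT.601 analog weights; the odd-image step is shown as the anti-symmetric part of a channel under a flip about both axes, which is one plausible reading of the odd/even decomposition in the cited work (Thepade, 2013) -- the exact definition there may differ, so treat `odd_image` as an assumption.

```python
import numpy as np

# Standard BT.601 RGB -> YUV conversion matrix (analog YUV weights).
RGB_TO_YUV = np.array([[ 0.299,  0.587,  0.114],
                       [-0.147, -0.289,  0.436],
                       [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to the YUV color space."""
    return image @ RGB_TO_YUV.T

def odd_image(channel: np.ndarray) -> np.ndarray:
    """Anti-symmetric ('odd') part of a channel with respect to a flip
    about both axes; the symmetric ('even') part is (c + flip(c)) / 2."""
    return (channel - np.flip(channel)) / 2.0
```

After conversion, the same transform-based feature extraction can be run per channel in either color space (R, G, B or Y, U, V), which is how the two spaces are compared in the chapter.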
