Recognition of 3D Objects from 2D Views Features

R. Khadim, R. El Ayachi, Mohamed Fakir
Copyright: © 2015 | Pages: 9
DOI: 10.4018/JECO.2015040105

Abstract

This paper focuses on the recognition of 3D objects using 2D attributes. In order to increase the recognition rate, it presents a hybridization of three approaches to calculate the attributes of a color image, combining Zernike moments, Gist descriptors, and a color descriptor (statistical moments). In the classification phase, three methods are adopted: Neural Network (NN), Support Vector Machine (SVM), and k-nearest neighbor (KNN). The COIL-100 database is used in the experimental results.

1. Introduction

The goal of a recognition system is to assign the appropriate reference objects of a database to the query object. In this work, the adopted system uses two steps after the acquisition phase of the query object: extraction and classification.

A central problem of the extraction step is the choice of an appropriate method to calculate the object primitives. The computed attributes must be invariant to certain transformations of the object (rotation, scale change and translation). Several 2D methods are used in this phase, such as Zernike moments, Hu moments, Gist descriptors and the color descriptor (statistical moments). The originality of this work is the proposal of a new approach that combines three of these methods (Zernike moments, Gist descriptors and the color descriptor).

The robustness of the recognition system depends on the results obtained in the classification phase. This last step assigns the appropriate reference objects from the database to the query object. There are several classification methods, but we choose three of them for two reasons: they are fast to implement and efficient in pattern recognition. The chosen methods are Neural Networks (NN), Support Vector Machines (SVM) and k-nearest neighbor (KNN).

A multilayer neural network consists of an input layer including a set of input nodes; such networks have been used in several papers: Paméla Daum et al. (2012) adopted neural networks for object recognition, and Y. Cao et al. (2011) used them for image annotation. In this paper, we use a multilayer neural network with supervised training which consists of the following (a sketch is given after the list):

  • Input layer: M input cells (M is the number of elements in the descriptor vector)

  • Hidden layer: L neurons (L = 50, an arbitrarily chosen value)

  • Output layer: N neurons (N is the number of classes)

  • Transfer function: sigmoid function
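
A minimal sketch of such a network is given below, using scikit-learn's MLPClassifier as a stand-in for the authors' implementation (the paper does not name a library). The feature matrix and class labels are random placeholders; M = 100 attributes and N = 10 classes are assumed values.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data standing in for the descriptor vectors of the extraction
# step: M = 100 attributes per object and N = 10 classes are assumed values.
rng = np.random.default_rng(0)
X_train = rng.random((200, 100))         # one descriptor vector per training object
y_train = rng.integers(0, 10, size=200)  # class label of each training object

net = MLPClassifier(
    hidden_layer_sizes=(50,),  # hidden layer: L = 50 neurons
    activation='logistic',     # sigmoid transfer function
    max_iter=1000,
)
net.fit(X_train, y_train)                # supervised training
print(net.predict(X_train[:5]))          # predicted classes for a few samples
```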

SVM is a classification method based on finding a hyperplane that separates a data set into two classes (K.-B. Duan et al., 2003). Several methods have been proposed to construct a multi-class classifier by combining one-against-one or one-against-all binary classifiers. The data sets can be linearly or nonlinearly separable; the nonlinearly separable cases require a kernel function in order to map the data into a space where they become linearly separable. In our case, the one-against-one scheme is used, with a Gaussian kernel function.
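
A comparable sketch of the SVM classifier, again assuming scikit-learn and the same placeholder data; SVC combines one-against-one binary classifiers for the multi-class case, and kernel='rbf' corresponds to the Gaussian kernel mentioned above.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder descriptor vectors and labels (assumed shapes, as before).
rng = np.random.default_rng(0)
X_train = rng.random((200, 100))
y_train = rng.integers(0, 10, size=200)

# SVC handles the multi-class case by combining one-against-one binary
# classifiers; kernel='rbf' is the Gaussian kernel.
svm = SVC(kernel='rbf', gamma='scale', decision_function_shape='ovo')
svm.fit(X_train, y_train)
print(svm.predict(X_train[:5]))
```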

The principle of the nearest neighbor classifier (M. Oujaoura et al., 2012; Oren Boiman et al., 2008) is to compare the feature vector of the input object with the feature vectors stored in the database (reference classes). The appropriate class is found by measuring the distance between the feature vector of the input object and the feature vectors of the images in the reference database. Several distances can be used to measure similarity; in this paper, the Euclidean distance is used.
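
The nearest neighbor step can be sketched in the same way; here k = 1 (plain nearest neighbor) is an assumption, and the reference database is again a random placeholder.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder reference database: one feature vector per stored image.
rng = np.random.default_rng(0)
X_reference = rng.random((200, 100))         # feature vectors of the reference images
y_reference = rng.integers(0, 10, size=200)  # their class labels

# k = 1 is assumed here; metric='euclidean' is the distance used in the paper.
knn = KNeighborsClassifier(n_neighbors=1, metric='euclidean')
knn.fit(X_reference, y_reference)
print(knn.predict(X_reference[:5]))          # class of the closest reference vector
```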

2. Feature Extraction

After the acquisition step, feature extraction is applied to compute the attributes (features) of the object. It transforms the object into a vector which stores the characteristics of the object. This transformation reduces the dimensionality, the storage memory and the computing time. The object features must be invariant to rotation, translation and scale change. Zernike moments (Chao Kan et al., 2002), Hu moments (R. El Ayachi et al., 2012), Gist descriptors (M. Douze et al., 2009) and the color descriptor (statistical moments) (R. Venkata et al., 2012; Parag Dhonde et al., 2015; A. Eleyan et al., 2011) are used in this work.
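
The hybridization itself amounts to concatenating the individual descriptors into a single vector. The sketch below assumes three hypothetical extractor functions standing in for the cited Zernike, Gist and color-moment methods.

```python
import numpy as np

def hybrid_descriptor(image, zernike_fn, gist_fn, color_fn):
    """Concatenate three per-image descriptors into one feature vector.

    zernike_fn, gist_fn and color_fn are hypothetical placeholders for the
    Zernike-moment, Gist and color-moment extractors cited above; each one
    takes an image and returns a 1-D array of attributes.
    """
    return np.concatenate([
        np.asarray(zernike_fn(image), dtype=float),
        np.asarray(gist_fn(image), dtype=float),
        np.asarray(color_fn(image), dtype=float),
    ])
```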

2.1. Color Descriptor

Color is one of the earliest used visual descriptors. There are several approaches to extracting color information from a color image; among these descriptors are the statistical moments.

The histogram method stores the full color distribution, which costs time and memory. To solve this problem, instead of computing the full distribution, we can calculate only the dominant color characteristics, such as the expectation and the variance (see the sketch after the equations).

For each component (RGB):

  • Expectation is defined by:

    $E_i = \frac{1}{N}\sum_{j=1}^{N} p_{ij}$
    (1)

  • The variance is calculated as follows:

    $\sigma_i^2 = \frac{1}{N}\sum_{j=1}^{N} \left( p_{ij} - E_i \right)^2$
    (2)

where $p_{ij}$ is the value of pixel $j$ in color component $i \in \{R, G, B\}$ and $N$ is the number of pixels in the image.
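
A short sketch of equations (1) and (2), assuming the image is an H × W × 3 RGB array:

```python
import numpy as np

def color_moments(image):
    """Expectation (Eq. 1) and variance (Eq. 2) of each RGB component.

    `image` is assumed to be an (H, W, 3) array; the result is the
    6-element vector [E_R, E_G, E_B, Var_R, Var_G, Var_B].
    """
    pixels = image.reshape(-1, 3).astype(float)             # N pixels x 3 components
    expectation = pixels.mean(axis=0)                       # Eq. (1)
    variance = ((pixels - expectation) ** 2).mean(axis=0)   # Eq. (2)
    return np.concatenate([expectation, variance])
```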
