Moving Object Classification in a Video Sequence Using Invariant Feature Extraction


S. Vasavi, Reshma Shaik, Sahithi Yarlagadda
DOI: 10.4018/978-1-5225-2848-7.ch012

Abstract

Object recognition and classification (human beings, animals, buildings, vehicles) have become important in surveillance video recorded at prominent areas such as airports, banks, and military installations. Outdoor environments are more challenging for moving object classification because occlusions and the large distance between the camera and the moving objects leave only incomplete appearance details. As such, there is a need to monitor and classify moving objects while accounting for the challenges of real-time video. Training classifiers with feature-based approaches is easier and faster than with pixel-based approaches in object classification. Extracting a suitable set of features from the object of interest is the most important step for classification. Textural, color, and structural features can be chosen for classifying the object, but in real-time video the object's pose is not always the same. Zernike moments have been shown to be rotation invariant and robust to noise owing to their orthogonality property.

1. Introduction

Object recognition and classification (human beings, animals, buildings, vehicles) have become important in surveillance video recorded at prominent areas such as airports, banks, and military installations. Outdoor environments are more challenging for moving object classification because occlusions and the large distance between the camera and the moving objects leave only incomplete appearance details. As such, there is a need to monitor and classify moving objects while accounting for the challenges of real-time video. Training classifiers with feature-based approaches is easier and faster than with pixel-based approaches in object classification. Extracting a suitable set of features from the object of interest is the most important step for classification. Textural, color, and structural features can be chosen for classifying the object, but in real-time video the object's pose is not always the same. Zernike moments have been shown to be rotation invariant and robust to noise owing to their orthogonality property.
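As background for the invariance claim above, the Zernike moment of order n with repetition m (n ≥ 0, |m| ≤ n, n − |m| even) of an image f(x, y) mapped onto the unit disk is defined, in the standard formulation, as

Z_{nm} = \frac{n+1}{\pi} \sum_{x} \sum_{y} f(x,y)\, V_{nm}^{*}(\rho,\theta), \qquad \rho \le 1,

where the orthogonal Zernike basis function is V_{nm}(\rho,\theta) = R_{nm}(\rho)\, e^{jm\theta} with radial polynomial

R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^{s}\,(n-s)!}{s!\,\left(\frac{n+|m|}{2}-s\right)!\,\left(\frac{n-|m|}{2}-s\right)!}\, \rho^{\,n-2s}.

If the image is rotated by an angle \alpha, the moments become Z'_{nm} = Z_{nm}\, e^{-jm\alpha}, so the magnitudes |Z_{nm}| are unchanged; these magnitudes are the rotation-invariant features referred to throughout the chapter. (This is the textbook definition, given here for reference; the chapter's own notation may differ.)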

1.1. Motivation

Visual surveillance and monitoring of moving objects are required in many prominent public areas such as banks, railway stations, airports, and buses, as well as in military applications. The data collected from these surveillance cameras have to be monitored either manually or by intelligent systems. Having human operators monitor the feeds for long durations is infeasible due to monotony and fatigue. As a result, recorded videos are typically inspected only after a suspicious event is reported, but this approach only helps with recovery and does not prevent unwanted events. "Intelligent" video surveillance systems can identify various events and notify the concerned personnel when an unwanted event is detected. Such a system therefore requires algorithms that are fast, robust, and reliable during the detection, tracking, and classification phases. This can be achieved by implementing a fast and efficient technique to classify the objects present in the video in real time.

1.2. Problem Statement

Basic video analysis operations such as object detection, classification, and tracking require scanning the entire video. Because this is time-consuming, a method is needed to detect and classify the objects present in the frames extracted from a real-time video. Moments are used to classify the objects present in a frame. Various moment descriptors such as moment invariants, geometric moments, rotational moments, orthogonal moments, and complex moments have been extensively employed as pattern features in scene recognition, registration, object matching, and data compression. Zernike moments have been shown to be superior to the others in terms of insensitivity to image noise, information content, and ability to provide a faithful image representation. In this chapter, moving object classification based on Zernike moments is performed. Zernike moments are invariant to rotation only, so translation and scaling invariance must be achieved before the feature set is extracted. The Zernike moment method also has lower computational complexity than the geometric moment method. Zernike moments provide a feature extraction method by which global features of an image, such as amplitude and angle, can be extracted.
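To make the preprocessing order concrete, the sketch below normalises a detected object's binary mask for translation and scale and only then computes the rotation-invariant Zernike magnitudes. It is a minimal illustration that assumes the OpenCV, NumPy, and mahotas libraries (mahotas provides a zernike_moments routine); the function and parameter names are this example's own and are not the chapter's stated implementation.

import cv2
import numpy as np
import mahotas.features

def zernike_descriptor(mask, radius=64, degree=8):
    """Rotation-invariant Zernike magnitudes for a binary object mask."""
    mask = (mask > 0).astype(np.uint8)
    # Translation normalisation: crop the object's bounding box so the
    # descriptor does not depend on where the object sits in the frame.
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Scale normalisation: resize the crop so it fits inside a disk of the
    # chosen radius, giving approximate scale invariance.
    side = 2 * radius
    scale = (side - 2) / max(crop.shape)
    crop = cv2.resize(crop, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_NEAREST)
    # Paste the normalised object into the centre of a square canvas.
    canvas = np.zeros((side, side), dtype=np.uint8)
    y0 = (side - crop.shape[0]) // 2
    x0 = (side - crop.shape[1]) // 2
    canvas[y0:y0 + crop.shape[0], x0:x0 + crop.shape[1]] = crop
    # mahotas centres the Zernike basis on the image centre of mass and
    # returns the magnitudes |Z_nm| up to the given order.
    return mahotas.features.zernike_moments(canvas, radius, degree=degree)

The resulting feature vector can then be passed to any conventional classifier (for example, an SVM or a k-nearest-neighbour model) trained on the object categories of interest.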
