Reducing False Alarms in Vision-Based Fire Detection

Neethidevan Veerapathiran (Mepco Schlenk Engineering College, India) and Anand S. (Mepco Schlenk Engineering College, India)
Copyright: © 2017 |Pages: 28
DOI: 10.4018/978-1-5225-1022-2.ch012


Computer vision techniques are now widely used to detect fire. Verifying that a region detected as fire is actually fire remains challenging, mainly because the color of fire can range from red and yellow to almost white; a fire region therefore cannot be detected from a single feature such as color, and several features must be taken into account. Early warning and instantaneous response are the key ideas for preventing losses to the environment as well as human casualties. Conventional fire detection systems use physical sensors to detect fire: chemical properties of particles in the air are acquired by sensors and used to raise an alarm. However, such sensors can also cause false alarms. To reduce the false alarms of conventional fire detection systems, a vision-based fire detection system can be used. This chapter discusses the fundamentals of video, various issues in processing video signals, and various algorithms for video processing using vision techniques.
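As a minimal sketch of the color test mentioned above, the following uses a widely cited rule-of-thumb for flame-colored pixels (R > G > B, with R above a threshold). The threshold value is illustrative, not taken from this chapter, and on its own such a rule still produces false alarms — which is why further features are needed.

```python
import numpy as np

R_THRESHOLD = 180  # assumed minimum red intensity; an illustrative value

def fire_mask(rgb_image):
    """Boolean mask of pixels whose color matches the flame rule R > G > B."""
    r = rgb_image[..., 0].astype(int)
    g = rgb_image[..., 1].astype(int)
    b = rgb_image[..., 2].astype(int)
    return (r > R_THRESHOLD) & (r > g) & (g > b)

# Toy 1x2 image: one orange (flame-like) pixel, one blue pixel.
img = np.array([[[230, 120, 30], [30, 60, 200]]], dtype=np.uint8)
print(fire_mask(img))  # [[ True False]]
```

A sunset or an orange shirt also satisfies this rule, which illustrates why motion and other cues are combined with color in practice.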
Chapter Preview


What Is Video?

Video combines a sequence of images to form a moving picture (Abidha T E, Paul P Matha). A video signal is transmitted to a screen along with the order in which the captured frames should be shown, and it usually has an audio component that corresponds with the pictures being shown on the screen.

Video is a series of still images displayed many times per second to give the illusion of motion. A frame is a single still picture that fills the display screen.

The frame rate of a video is the number of full frames displayed per second, measured in fps (frames per second). Although analog video is not progressive, the same units are useful for comparison. Interlaced video also has a field rate, which is always twice the frame rate. Film has a frame rate of 24 fps, PAL uses 25 fps, and NTSC uses 30/1.001 fps. The odd NTSC number results from changes made to the original 30 fps black-and-white signal so that color could be added; it is commonly quoted as 29.97 fps, or occasionally rounded to 30 fps. NTSC and PAL signals are sometimes identified by a combination of their frame rate and scan type (interlaced or progressive), or their field rate and type: 25i or 50i refers to PAL, while 30i or 60i is generally used for NTSC, usually in reference to a digital signal with NTSC or PAL characteristics. Film can be referred to as 24p.
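The rates above can be checked with a short calculation; exact rational arithmetic makes the 30/1.001 NTSC rate and the field-rate relationship explicit:

```python
from fractions import Fraction

# Standard frame rates. The exact NTSC rate is 30/1.001 = 30000/1001 fps,
# usually quoted as 29.97.
FRAME_RATES = {
    "film": Fraction(24),
    "PAL": Fraction(25),
    "NTSC": Fraction(30000, 1001),
}

def field_rate(frame_rate):
    """For interlaced video, the field rate is twice the frame rate."""
    return 2 * frame_rate

ntsc = FRAME_RATES["NTSC"]
print(f"NTSC frame rate: {float(ntsc):.5f} fps")               # 29.97003 fps
print(f"NTSC field rate: {float(field_rate(ntsc)):.5f} f/s")   # 59.94006 f/s
print(f"PAL field rate:  {float(field_rate(FRAME_RATES['PAL']))} f/s")
```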

Video Surveillance System

The main components of an automatic video surveillance system are shown in Figure 1. Video cameras are connected to a video processing unit (either a general-purpose PC or dedicated hardware) that extracts high-level information, for instance to identify alert situations. This processing unit may be connected through a network to a control and visualization center that manages, for example, alerts. Another important component is a video database and retrieval tool in which selected video segments, video objects, and related content can be stored and queried (Aishy Amer, Concordia University, Montréal, Québec, Canada; Carlo Regazzoni, University of Genoa).

Figure 1.

A generic block diagram of a video surveillance system

A video sequence is typically composed of many video shots. To facilitate the processing of video objects, a video sequence is first segmented into shots. A shot is a (finite) sequence of frames recorded contiguously from the same camera, usually without a change of viewpoint. In video surveillance, a shot represents an action or event, continuous in time and space, driven by moving objects (e.g., an object stopping at a restricted site). Video processing for surveillance applications aims at describing the data in successive frames of a video in terms of what is in the real scene, where it is located, when it occurred, and what its features are. This is the basic step towards an automated full understanding of the semantic content of the input video. Semantically (perceptually) significant content, such as object activity, is generally related to moving objects. This reflects the human visual system (HVS), which is strongly attracted to moving objects that create luminance change.
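Shot segmentation can be sketched with a simple frame-differencing approach; the version below, which assumes frames are given as NumPy grayscale arrays, flags a cut wherever the gray-level histogram jumps between consecutive frames. The bin count and threshold are illustrative choices, not from the chapter.

```python
import numpy as np

def histogram_difference(frame_a, frame_b, bins=64):
    """L1 distance between normalized gray-level histograms of two frames."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return np.abs(ha - hb).sum()

def detect_shot_cuts(frames, threshold=0.5):
    """Return the frame indices where a new shot is judged to begin."""
    cuts = []
    for i in range(1, len(frames)):
        if histogram_difference(frames[i - 1], frames[i]) > threshold:
            cuts.append(i)
    return cuts

# Toy sequence: two dark frames, then two bright frames -> one cut at index 2.
dark = np.full((48, 64), 20, dtype=np.uint8)
bright = np.full((48, 64), 230, dtype=np.uint8)
print(detect_shot_cuts([dark, dark, bright, bright]))  # [2]
```

Histogram comparison ignores object motion within a shot, so it is robust to moving objects but can miss gradual transitions such as fades, which need more elaborate detectors.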

A generic block diagram of a video object processing module for video surveillance is shown in Figure 1. As can be seen, several video processing steps are required to extract video objects and related high-level features: preprocessing (e.g., noise estimation or reduction), temporal segmentation (e.g., shot detection), video analysis (e.g., motion estimation, object segmentation, and tracking), object classification, video interpretation (i.e., extraction of high-level context-independent information), and video understanding (i.e., extraction of context-dependent semantic information). In a multi-level system such as the one in Figure 2 (or its sublevels, such as video analysis with motion estimation, object segmentation, and tracking), the system architecture should be modular, with special consideration given to the processing inaccuracies and errors of preceding levels. Processing errors need to be corrected or compensated at a higher level, where more (higher-level) information is available. Higher processing levels can provide useful and more reliable information for the detection and correction of processing errors. Results of lower-level processing are integrated to support higher-level processing, and higher levels may support lower levels through memory-based feedback.

Figure 2.

Multi-level system
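The six processing levels described above can be sketched as a modular pipeline in which each stage consumes the previous stage's output. The stage names follow the text; the stage bodies here are placeholders that merely record which level the data passed through, standing in for the real processing at each level.

```python
def make_pipeline(*stages):
    """Chain stages so each level consumes the previous level's output."""
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

def stage(name):
    """Placeholder level: tag the data with the level it passed through."""
    return lambda data: data + [name]

pipeline = make_pipeline(
    stage("preprocessing"),           # e.g., noise estimation or reduction
    stage("temporal_segmentation"),   # e.g., shot detection
    stage("video_analysis"),          # motion estimation, segmentation, tracking
    stage("object_classification"),
    stage("video_interpretation"),    # context-independent information
    stage("video_understanding"),     # context-dependent semantic information
)

print(pipeline([]))  # runs the data through all six levels, in order
```

Keeping each level behind its own function makes the architecture modular, so a higher level can be extended to detect and compensate for errors of the preceding levels, as the text recommends.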
