A Biologically Inspired Saliency Priority Extraction Using Bayesian Framework

Jila Hosseinkhani, Chris Joslin
DOI: 10.4018/IJMDEM.2019040101

Abstract

In this article, the authors apply saliency detection to the video streaming problem so that regions of video frames can be transmitted in a ranked manner according to their importance. They designed an empirically based study of bottom-up features to obtain a ranking system that expresses saliency priority, and they introduce a gradual saliency detection model using a Bayesian framework for static scenes under conditions free of cognitive bias. To extract color saliency, they use a new feature contrast in the Lab color space together with a k-nearest neighbor search based on a k-d tree to assign a ranking to different colors according to their empirical study. To find salient textured regions, they employ contrast-based Gabor energy features and add a new feature, an intensity variance map. The individual feature maps are merged, and the resulting saliency maps are classified with a Naive Bayesian network to prioritize saliency across a frame. The main goal of this work is to assign a saliency priority to the entirety of a video frame rather than simply extracting a single salient area, as is widely done.
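
As a rough illustration of the pipeline described above (a minimal sketch, not the authors' implementation), the following Python fragment shows how a per-pixel color priority could be assigned via a k-d tree nearest-neighbor lookup in Lab space, and how several feature maps could be fused with a Gaussian Naive Bayes classifier. The reference palette, its priority scores, and the training labels are placeholder assumptions.

import numpy as np
from scipy.spatial import cKDTree
from skimage.color import rgb2lab
from sklearn.naive_bayes import GaussianNB

def color_priority_map(frame_rgb, palette_lab, palette_priority):
    """Assign each pixel the saliency priority of its nearest reference color in Lab space."""
    lab = rgb2lab(frame_rgb)                        # H x W x 3, perceptually uniform color space
    tree = cKDTree(palette_lab)                     # k-d tree over the empirically ranked colors
    _, idx = tree.query(lab.reshape(-1, 3), k=1)    # nearest reference color per pixel
    return np.asarray(palette_priority)[idx].reshape(lab.shape[:2])

def fuse_feature_maps(feature_maps, labels):
    """Fuse per-pixel feature maps (color, Gabor energy, intensity variance, ...)
    into saliency classes with a Gaussian Naive Bayes model."""
    X = np.stack([m.ravel() for m in feature_maps], axis=1)   # pixels x features
    clf = GaussianNB().fit(X, labels.ravel())                  # labels: per-pixel saliency classes
    return clf.predict(X).reshape(feature_maps[0].shape)

Here the classifier is trained and applied on the same frame purely for brevity; in practice the Naive Bayes model would be fit on labeled training data and then used to prioritize saliency in new frames.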

Introduction

During the last few years, the demand for video streaming has increased dramatically with the expansion of multimedia communications. To handle the resulting congestion caused by the enormous volume of data over networks, greater bandwidth and more reliable communication are required. This has led to a considerable amount of research in video streaming, video compression, Quality of Service (QoS), and real-time traffic support. To address the video streaming problem from a network traffic perspective, we utilize a semantic video analysis mechanism known as saliency detection and introduce the concept of gradual saliency. Regions of interest, or salient areas, in an image or video play an important role in the semantic analysis of visual data. Saliency detection is widely exploited in applications such as content-based image/video retrieval, scene understanding, video surveillance, video summarization, event detection, and image/video compression. The visual attention system is the process of selecting the significant and interesting areas from the visual information that humans receive in daily life.

We introduce the gradual saliency concept to distinguish different classes of saliency within a video frame. In this way, the most important and informative regions, i.e., the salient regions, are extracted to reduce the volume of video data in transmission. Our main goal is to provide guidance for the encoder in deciding which information should be dropped and which information should form the different video coding layers.

The Human Visual System (HVS) can process this information rapidly and focus on the distinct parts of a scene. Studies indicate that the factors affecting visual attention and eye movements fall into bottom-up and top-down categories (Healey et al., 2012; Duncan et al., 2012). Bottom-up factors capture unconscious attention very quickly and have a strong impact on the human visual selection system. Top-down factors, on the other hand, capture attention much more slowly and require prior knowledge about the scene. Saliency detection models, or Visual Attention Models (VAMs), employ bottom-up and/or top-down factors to search for the salient parts of visual data. Bottom-up models use low-level features such as the color, texture, size, contrast, brightness, position, motion, orientation, and shape of objects (Duncan et al., 2012), whereas top-down models exploit high-level, context-dependent features such as faces, humans, animals, vehicles, text, etc. Both bottom-up and top-down factors can be combined in a VAM, but because of the added complexity and time constraints, few such hybrid approaches have been proposed. For a real-time video streaming application, a fast and simple saliency detection method is essential for effectively controlling network traffic. We therefore avoid any cognitive bias in designing our model in order to speed up saliency detection, since accounting for cognitive bias requires machine-learning-based algorithms that are time consuming.
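
As a concrete example of such bottom-up cues, the following hedged sketch computes two of the low-level maps referred to in this work, a Gabor energy map for texture and a local intensity-variance map; the filter frequencies, orientations, and window size are illustrative choices, not values taken from the paper.

import numpy as np
from scipy.ndimage import uniform_filter
from skimage.color import rgb2gray
from skimage.filters import gabor

def gabor_energy_map(frame_rgb, frequencies=(0.1, 0.25), n_orient=4):
    """Sum the Gabor response magnitudes over several frequencies and orientations."""
    gray = rgb2gray(frame_rgb)
    energy = np.zeros_like(gray)
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            real, imag = gabor(gray, frequency=f, theta=theta)
            energy += np.hypot(real, imag)          # magnitude of the complex Gabor response
    return energy / (energy.max() + 1e-12)          # normalize to [0, 1]

def intensity_variance_map(frame_rgb, window=9):
    """Local intensity variance, E[x^2] - E[x]^2, over a sliding window."""
    gray = rgb2gray(frame_rgb)
    mean = uniform_filter(gray, window)
    mean_sq = uniform_filter(gray ** 2, window)
    return np.clip(mean_sq - mean ** 2, 0.0, None)

Textured regions produce strong, spatially varying Gabor responses and high local variance, so both maps peak where texture and brightness contrast, the kinds of bottom-up cues listed above, are strongest.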
