Neural Semantic Video Analysis

Hamid Mohammadi, Tahereh Firoozi, Mark Gierl
Copyright © 2025 | Pages: 15
DOI: 10.4018/978-1-6684-7366-5.ch068

Abstract

Videos are a rich form of data intended for capturing, storing, and communicating information. The availability of inexpensive and accessible video-capturing sensors in smartphones, handheld cameras, and consumer security cameras has exponentially increased global video footage generation over the past decade. Since video is a popular form of widely consumed and produced data, it is essential to develop automated systems to analyze and identify relevant information within this large body of video material. This chapter demonstrates how the emergence of neural networks, including convolutional neural networks (CNNs) and transformers, has revolutionized semantic video analysis. Through convolutional filters, CNNs capture spatial patterns at the pixel level. More recently, the learning capability of CNN-based models has been exceeded by self-attention-based models. Both CNN-based and transformer-based semantic video analysis models take advantage of transfer learning, self-supervised learning, and related techniques to compensate for the lack of large, supervised video datasets.

Method

The core promise of machine learning is the automation and scaling of human intelligence. Neural networks, as the most powerful machine learning tools available, are used to provide viable solutions to the challenges of semantic video analysis (SVA). By learning spatiotemporal patterns from sample videos, neural SVA models can recognize complex spatial and temporal structure even in long, noisy videos. Modern neural SVA architectures are capable of human-level performance in this regard. Furthermore, these models can be computed efficiently on modern graphics processing units (GPUs), so scalability is limited only by the available GPU hardware. Hence, neural networks are the preferred solution for SVA due to their accuracy and scalability.
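To make the idea of learning spatiotemporal patterns concrete, the following is a minimal sketch of a video classifier built from 3D convolutions, which jointly process the spatial and temporal dimensions of a clip. It is illustrative only: the chapter does not prescribe a framework, so PyTorch is assumed, and the architecture, layer sizes, and names (TinyVideoNet, num_classes) are hypothetical.

```python
import torch
import torch.nn as nn

class TinyVideoNet(nn.Module):
    """Minimal spatiotemporal classifier: 3D convolutions learn
    joint spatial (H, W) and temporal (T) patterns from video clips."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 3 RGB channels, T frames, H, W)
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),          # halve T, H, and W
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global spatiotemporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip):
        x = self.features(clip)
        return self.classifier(x.flatten(1))

# A batch of two 16-frame RGB clips at 112x112 resolution.
model = TinyVideoNet(num_classes=10)
clip = torch.randn(2, 3, 16, 112, 112)
logits = model(clip)  # shape: (2, 10)
# Moving the model and clip to a GPU (e.g., model.to("cuda"))
# is what gives neural SVA its scalability in practice.
```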

For machine learning models to function properly, it is critical to understand how images are represented and stored on computers. RGB, short for red, green, and blue, is the most common format for representing images on computers. Typically, an RGB image is stored as a 2D grid of pixel values (or a 3D array, if the three color channels are treated as a separate dimension). Each image is composed of pixels arranged in rows (across its height) and columns (across its width). Each pixel, addressed by its X (horizontal) and Y (vertical) position, holds three values indicating the intensities of red, green, and blue. Different combinations of red, green, and blue values produce a variety of colors. Essentially, videos are extensions of images: a video can be regarded as a succession of images (frames) over time. This adds a temporal dimension, Z, which indicates the frame in which a pixel appears. Accordingly, RGB values are assigned to each pixel at a unique position in space and time given by its X, Y, and Z coordinates.
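As a concrete illustration of this indexing scheme, the sketch below stores a short video as a NumPy array; the shapes, frame rate, and variable names are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# A synthetic 4-second video: 30 frames/s, 240x320 pixels, RGB.
num_frames, height, width = 120, 240, 320
video = np.zeros((num_frames, height, width, 3), dtype=np.uint8)

# A single image (frame) is a 2D grid of pixels with 3 channel values.
frame = video[0]                   # shape: (240, 320, 3)

# Reading one pixel: Z selects the frame (time), Y the row, X the column.
z, y, x = 10, 50, 200
red, green, blue = video[z, y, x]  # three intensities in 0..255

# Writing a pure-red pixel at that space-time coordinate.
video[z, y, x] = (255, 0, 0)
```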
