1. Introduction
The main process in video compression is motion estimation, which falls into two types: local and global motion (Adolph & Buschmann, 1991; Dufaux & Moscheni, 1995). Local motion describes the motion induced by the movement of objects in the scene. Global motion describes the motion caused by camera movements such as panning, tilting, rotation, and zooming. In this paper, we focus on global motion. Global motion estimation (GME) uses a parametric motion model to describe and estimate the motion over the whole frame and to generate the motion vector (Dufaux & Moscheni, 1995). GME has been added to the recent MPEG-4 standard for video compression (Li et al., 2001). GME is considered a main process in the field of object-based video applications such as video object segmentation, scene construction, and video coding.
In recent years, parallel computations have been widely used in many areas because they have been successful in achieving high computing performance. In video encoding, the idea of data partitioning is to divide the frames into a number of data blocks and then map these blocks onto the corresponding processors, which perform their computations in parallel. Parallel implementation of the video encoding process increases performance for real-time multimedia applications. Examples of parallel programming models include open multi-processing (OpenMP) and the message passing interface (MPI). These models are also used to parallelize sequential processes to enhance performance while maintaining the same functionality. OpenMP is an open-source model for shared-memory, multi-platform parallel programming in C, C++, and Fortran. For multicore architectures on shared memory, OpenMP is more suitable than other parallel programming models (OpenMP Architecture Review Board, 2013).
There have been many research efforts to parallelize different aspects of modern video codecs (a codec performs coding/decoding). For example, He et al. (1998) presented a scheme to parallelize the encoding process in which each video object plane (VOP) was assigned to one group of workstations; the relationships between VOPs were synchronized using a Petri net model, and the earliest deadline first (EDF) scheduling algorithm was used to allocate the objects in a video session to workstations. Gunawan and Tong (2002) used a cluster-computing monitoring resource and the MPI parallel programming model to improve the execution time of motion estimation. Rodriguez et al. (2004) presented an evaluation of several parallel implementations of an MPEG-4 encoder over clusters of workstations, using parallel data distribution methods. Wu and Megson (2006) proposed a parallel linear hash table motion estimation algorithm (LHMEA) that divided each reference frame into equally sized regions, which were processed in parallel.