Introduction
Video compression deals with compression mechanisms for a series of image frames. In coding, the correlation between adjacent frames can be exploited, and this relationship used in developing the compression mechanism. Generally, adjacent frames do not differ much: the main difference lies in the displacement of objects in a given frame with respect to the previous frame. Both grey-scale and color videos (i.e., sequences of grey-scale or color frames) are candidates for video compression, and different color spaces can be used for the video frames from a processing point of view (Gonzalez et al., 2005; Moltredo et al., 1997; Zhang et al., 1995; Danciu et al., 1998; Hurtgen et al., 1994).
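The observation that adjacent frames differ only where objects move can be illustrated with a small sketch. The frames and the moving "object" below are synthetic assumptions, not data from the article; the point is simply that the frame difference is sparse:

```python
import numpy as np

# Two synthetic 8x8 grey-scale frames: the second shifts a bright
# 2x2 "object" one pixel to the right, mimicking object motion.
frame1 = np.zeros((8, 8), dtype=np.int32)
frame1[3:5, 3:5] = 200
frame2 = np.zeros((8, 8), dtype=np.int32)
frame2[3:5, 4:6] = 200

# The difference image is zero almost everywhere: only the pixels
# the object vacated and the pixels it entered change.
diff = frame2 - frame1
changed = np.count_nonzero(diff)
print(changed, "of", diff.size, "pixels changed")  # → 4 of 64 pixels changed
```

This sparsity is exactly what inter-frame coding exploits: encoding the small difference (or the displacement that explains it) takes far fewer bits than encoding the whole frame again.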
Image sequences contain two types of redundancy: spatial and temporal. Spatial redundancy is the redundancy among neighboring pixels within a frame. The coding technique that reduces spatial redundancy is called intra-frame coding; it is similar to still-image coding, i.e., coding within a frame to exploit the redundancy of the image itself. Temporal redundancy is the redundancy between two consecutive frames in a sequence. The coding technique that reduces temporal redundancy is called inter-frame coding. Block matching motion estimation (Zhu et al., 2009, 2009a; Zhu et al., 2010; Zhu et al., 2000; Cheung et al., 2000, 2002; Po et al., 1996; Liu et al., 1996; Ce Zhu et al., 2002; Li Hong-ye et al., 2009; Belloulata et al., 2011; Acharjee et al., 2012, 2013, 2014; Kamble et al., 2016, 2017) plays an important role in inter-frame coding by reducing the temporal redundancy present in the series of image frames.
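As a concrete illustration of block matching motion estimation, the sketch below implements an exhaustive (full) search with the sum of absolute differences (SAD) criterion, one of the standard matching criteria in the cited literature. The function name, frame contents, block size and search range are illustrative assumptions, not taken from the article:

```python
import numpy as np

def full_search(ref, cur, block=4, search=2):
    """Exhaustive block-matching motion estimation with the SAD
    criterion. Each block of the current frame is compared against
    every candidate block of the reference frame within +/-search
    pixels; the displacement (dy, dx) of the best match is its
    motion vector. (Illustrative sketch, not the article's code.)"""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cur_blk = cur[by:by+block, bx:bx+block]
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate falls outside the frame
                    sad = np.abs(cur_blk - ref[y:y+block, x:x+block]).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors

# Synthetic 12x12 frames: a bright 3x3 object moves one pixel right.
ref = np.zeros((12, 12), dtype=np.int32)
ref[5:8, 5:8] = 100
cur = np.zeros((12, 12), dtype=np.int32)
cur[5:8, 6:9] = 100

# The block at (4, 4) matches the reference one pixel to its left.
print(full_search(ref, cur)[(4, 4)])  # → (0, -1)
```

The exhaustive search is optimal under the chosen criterion but costly, which is why the fast search strategies surveyed in the cited works reduce the number of candidate positions examined.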