Object Association through Multiple Camera Collaboration for Large-Scale Surveillance System

Shung Han Cho (Stony Brook University-SUNY, USA), Kyung Hoon Kim (Stony Brook University-SUNY, USA), Yunyoung Nam (Stony Brook University-SUNY, USA) and Sangjin Hong (Stony Brook University-SUNY, USA)
DOI: 10.4018/978-1-61350-153-5.ch009

In this chapter, we present an object association method based on multiple-camera collaboration for a large-scale surveillance system. Object association is achieved by locally generating homographic lines on targets in the collaborating cameras. To maintain association when densely populated objects leave insufficient separation between homographic lines, homographic points are generated in 3-D using estimated heights. Target heights are estimated by linear least squares using the normal equations. Association is confirmed by finding the pairs of correspondences that minimize the distance between them. The proposed method is verified with real video sequences. The simulation results demonstrate that the method is robust against false association because it considers all possible pairings of occluded targets.
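The height estimation step above relies on linear least squares solved via the normal equations. The following is a minimal sketch of that numerical step only; the design matrix and observations below are a hypothetical line-fitting example, not the chapter's actual camera geometry.

```python
import numpy as np

def solve_normal_equations(A, b):
    """Solve the linear least-squares problem min ||Ax - b||^2
    via the normal equations: A^T A x = A^T b."""
    AtA = A.T @ A
    Atb = A.T @ b
    return np.linalg.solve(AtA, Atb)

# Hypothetical example: fit y = c0 + c1 * x to noisy samples.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.9])
A = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]
coeffs = solve_normal_equations(A, y)      # [intercept, slope]
```

In the chapter's setting, the unknowns would instead be target heights and the rows of the design matrix would come from the camera projection constraints; the solve itself is identical.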
Chapter Preview


Recently, many researchers have shown great interest in multiple-camera surveillance systems (Cai, 1996; Calderara, 2005; Chang, 2000; Dockstader, 2001; Hu, 2004; Tsutsui, 2001; Utsumi, 1998). Various issues have been addressed, such as optimal camera placement (Chakrabarty, 2002; Horster, 2006), calibration of multiple cameras (Lee, 2000; Senior, 2005), finding object correspondences (Black, 2006; Caspi, 2006; Kang, 2003; Kelly, 1995; Kyong, 2008; Li, 2002), camera handoff (Chen, 2008), and tracking (Velipasalar, 2005; Tsutsui, 2001; Utsumi, 1998). In a large-scale surveillance system, multiple cameras change their views flexibly to maximize coverage, and their overlapping views produce redundant information. To reduce inconsistent and uncertain information, the cameras must exchange target information for collaboration. Moreover, a large-scale system may process its many cameras at slow rates, which can degrade the performance of object tracking and association; processing delays caused by data traffic overload may cause both to fail. In particular, when a surveillance application must run over wireless sensor networks, minimizing data traffic across multiple cameras is a critical issue. Therefore, an efficient object association method is required for multiple-camera collaboration.

Various methods have been investigated for multiple object association. They can be classified into feature-based and geometry-based approaches. Features used in feature-based approaches include colors, histograms, heights, and gaits. However, a system cannot obtain distinguishable features for all objects or extract them accurately, and the performance of feature-based approaches is severely degraded by objects with similar features (Krumm, 2000; Mittal, 2003; Orwell, 1999). Geometry-based approaches often utilize virtual lines such as homographic lines or epipolar lines. Khan (2003) proposes a method to find object correspondences using the boundaries of fields of view (FOVs), which can be constructed automatically on other cameras from object movements. Black (2006) uses epipolar lines for object association with known fundamental matrices. Hu (2006) utilizes the principal axes of targets: the extracted principal axis of a target is transformed onto other cameras with a known homography matrix to find the correspondence. However, these approaches require training to construct the FOV boundaries, or known references to find the fundamental or homography matrices. Although such preprocessing guarantees accurate transformation, it restricts flexible camera movements in an autonomous surveillance system. Jaynes (2004) uses a network synchronization signal to calibrate multiple cameras from planar motion trajectories; the synchronization signal reduces the number of trajectories to be compared by applying a time constraint at the cameras' observation points. Fleuret (2008) also uses synchronized video streams to track people with multiple cameras by back-projecting objects detected in the images into a discretized occupancy map. This method ignores the synchronization issues that can frequently occur in wireless sensor networks.
Kayumbi (2008) maps object trajectories with homography estimation that accounts for lens distortion. Sundaresan (2009) presents a method to track the articulated motion of humans using image sequences obtained from multiple cameras; however, it focuses on tracking detailed human motion rather than on multiple object tracking for a surveillance system.

To support flexible camera movements, homographic lines are generated locally on targets in each camera and projected onto the other cameras (Kyong, 2008). Object association is established when a projected homographic line intersects its corresponding target. Because no reference ground plane is used for transforming and projecting the homographic lines, a common ground plane need not be visible to all cameras. However, the association performance is limited by insufficient separation between homographic lines and degrades for densely populated objects. In that case, the association process must fall back on feature-based local tracking, which may lead to false or failed associations. Thus, an effective object association method is required to complement the limitations of homographic-line-based association in a large-scale surveillance system.
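The pairing step described in the abstract, finding the correspondences that minimize the distance between targets and projected homographic lines while considering all possible pairing cases, can be sketched as below. This is a minimal illustration under simplifying assumptions, not the chapter's implementation: targets are modeled as 2-D image points, projected homographic lines as coefficients (a, b, c) of ax + by + c = 0, and all pairings are enumerated exhaustively.

```python
from itertools import permutations
import math

def point_to_line_distance(p, line):
    """Perpendicular distance from point p = (x, y) to the 2-D line
    ax + by + c = 0, given as coefficients (a, b, c)."""
    a, b, c = line
    return abs(a * p[0] + b * p[1] + c) / math.hypot(a, b)

def associate(targets, lines):
    """Enumerate every pairing of targets to projected homographic lines
    and return the pairing with the minimum total distance."""
    best_pairing, best_cost = None, float("inf")
    for perm in permutations(range(len(lines))):
        cost = sum(point_to_line_distance(targets[i], lines[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_pairing = cost, perm
    return best_pairing, best_cost

# Hypothetical example: two targets and two projected lines (x = 0, x = 5).
targets = [(0.0, 0.0), (5.0, 5.0)]
lines = [(1.0, 0.0, 0.0), (1.0, 0.0, -5.0)]
pairing, cost = associate(targets, lines)
```

Exhaustive enumeration is factorial in the number of targets; the point the chapter makes is that evaluating all pairing cases within an occluded group avoids committing to a false association, and in practice only locally ambiguous groups would need this treatment.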
