Camera Calibration with 1D Objects


José Alexandre de França (Universidade Estadual de Londrina, Brazil), Marcelo Ricardo Stemmer (Universidade Federal de Santa Catarina, Brazil), Maria B. de Morais França (Universidade Estadual de Londrina, Brazil) and Rodrigo H. Cunha Palácios (Universidade Tecnológica Federal do Paraná, Brazil)
DOI: 10.4018/978-1-61350-429-1.ch005


Camera calibration is a process that makes it possible to fully understand how the camera forms the image. It is necessary especially when 3D information about the scene must be recovered. Calibration can be performed using a 1D pattern (points on a straight line). This kind of pattern has the advantage of being "visible" simultaneously even by cameras placed in opposite positions from each other. This makes the technique suitable for the calibration of multiple cameras. Unfortunately, calibration with 1D patterns often leads to results of poor accuracy. In this work, methods for single- and multi-camera calibration are analyzed. It is shown that, in some cases, the accuracy of this type of algorithm can be significantly improved simply by normalizing the coordinates of the input points. Experiments on synthetic and real images are used to analyze the accuracy of the discussed methods.
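The normalization mentioned above can be sketched as a similarity transform applied to the image points before calibration, in the spirit of Hartley-style data normalization: translate the points so their centroid is at the origin and scale them so the mean distance from the origin is the square root of two. The function below is an illustrative sketch of that idea, not the chapter's own implementation.

```python
import numpy as np

def normalize_points(pts):
    """Hartley-style normalization of 2D points (illustrative sketch).

    Translates the points so their centroid lies at the origin and scales
    them so the mean distance from the origin is sqrt(2). Returns the
    normalized points and the 3x3 similarity transform T such that
    x_norm ~ T x in homogeneous coordinates.
    """
    centroid = pts.mean(axis=0)
    mean_dist = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2) / mean_dist
    T = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    return (T @ pts_h.T).T[:, :2], T
```

Conditioning the input coordinates this way is a standard precaution in multi-view geometry: raw pixel coordinates (often in the hundreds or thousands) produce badly scaled linear systems, and the normalization reduces that numerical sensitivity.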

1. Introduction

Mathematically, in the process of image formation, the camera accomplishes a mapping between a 3D space (the world environment) and a plane (the image plane). During this process, some information is lost (e.g., angles, distances, and volumes). If this information is needed, it becomes necessary to estimate the intrinsic and extrinsic camera parameters, i.e., matrices with special properties that represent the camera mapping, through a procedure known as calibration. Usually, during this procedure, the camera captures images of an object with well-known dimensions and shape (known as the calibration apparatus or calibration pattern). Afterwards, the relation between some points of the calibration pattern and their respective projections in the image plane is used to determine the camera parameters.
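The mapping described above can be sketched with the standard pinhole model, in which a 3D point X projects to pixel coordinates via x ~ K(RX + t), where K holds the intrinsic parameters and (R, t) the extrinsic ones. The numeric values below are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Illustrative intrinsic matrix: focal lengths fx = fy = 800 pixels,
# principal point at (320, 240), zero skew (assumed values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Illustrative extrinsic parameters: camera aligned with the world axes,
# world origin 5 units in front of the camera.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

def project(X):
    """Map a 3D world point X to pixel coordinates via x ~ K (R X + t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]  # perspective division

# The world origin projects to the principal point (320, 240).
print(project(np.array([0.0, 0.0, 0.0])))
```

Calibration is the inverse task: given correspondences between known pattern points X and their observed projections x, recover K (and, if needed, R and t).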

The first calibration algorithms to become widely used were based on 3D patterns (Lenz and Tsai, 1988; Tsai, 1987). Typically, such calibration objects are composed of two or more orthogonal planes with a well-known pattern on their faces. These methods have the advantage of performing the calibration with few images, and they achieve excellent accuracy. Over the years, new calibration methods have been proposed using 2D patterns (Sturm and Maybank, 1999; Zhang, 2000). In this case, the main advantages are the simplicity of the calibration apparatus (even a sheet of paper with a known pattern can be used) and the abundance of planes in man-made environments (which enables the use of some pre-existing pattern in the camera's surroundings as a calibration apparatus). In fact, the abundance and ease of detection of planes in the environment led to the proposal of self-calibration algorithms based on planes (Triggs, 1998). In self-calibration, there is no need for a calibration apparatus. Instead, the camera performs some displacements while capturing images. Then, typically, it is enough to track a few points across these images to perform the calibration. However, despite the convenience and abundance of self-calibration algorithms already proposed (Dornaika and Chung, 2001; Hartley, 1997b; Maybank and Faugeras, 1992; Mendonça and Cipolla, 1999), self-calibration is still rarely used in practice. This is mainly due to the large number of variables that must be estimated, which leads to inaccurate algorithms and high computational complexity.
