Ruler Detection for Autoscaling Forensic Images

Abhir Bhalerao (Department of Computer Science, University of Warwick, Warwick, Coventry, UK) and Gregory Reynolds (Pattern Analytics Research Ltd, Solihull, UK)
Copyright: © 2014 | Pages: 19
DOI: 10.4018/ijdcf.2014010102


The assessment of forensic photographs often requires calibration of the resolution of the image so that accurate measurements can be taken of crime-scene exhibits or latent marks. In the case of latent marks, such as fingerprints, image calibration to a given dots-per-inch is a necessary step for image segmentation, preprocessing, extraction of feature minutiae and subsequent fingerprint matching. To enable scaling, such photographs are taken with forensic rulers in the frame so that image pixel distances can be converted to standard measurement units (metric or imperial). In forensic bureaus, this is commonly achieved by manually selecting two or more points on the ruler within the image and entering the units of the measured distance. The process can be laborious and inaccurate, especially when the ruler graduations are indistinct because of poor contrast, noise or insufficient resolution. Here the authors present a fully automated method for detecting and estimating the direction and graduation spacing of rulers in forensic photographs. The method detects the location of the ruler in the image and then uses spectral analysis to estimate the direction and wavelength of the ruler graduations. The authors detail the steps of the algorithm and demonstrate the accuracy of the estimation on both a calibrated set of test images and a wide collection of good and poor quality crime-scene images. The method is shown to be fast and accurate and has wider application in other imaging disciplines, such as radiography, archaeology and surveying.
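The preview does not include the authors' implementation, but the spectral-analysis step the abstract describes can be sketched in a few lines. The sketch below assumes a 1-D intensity profile has already been sampled along the detected ruler; the function name and synthetic profile are illustrative, not from the article.

```python
import numpy as np

def estimate_graduation_spacing(profile):
    """Estimate the repeat distance (in pixels) of ruler graduations
    from a 1-D intensity profile sampled along the ruler, taken as the
    wavelength of the dominant peak in its Fourier spectrum."""
    profile = profile - profile.mean()        # remove the DC component
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(len(profile))
    peak = np.argmax(spectrum[1:]) + 1        # skip the zero-frequency bin
    return 1.0 / freqs[peak]                  # wavelength in pixels

# Synthetic profile: a graduation mark every 10 pixels
profile = (np.arange(1000) % 10 < 2).astype(float)
spacing = estimate_graduation_spacing(profile)  # ≈ 10 pixels
```

In the article's setting, the estimated wavelength combined with the known physical graduation spacing (e.g. 1 mm) would give the pixels-per-millimetre calibration directly.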
Article Preview


In forensic science, high resolution digital images of crime scenes are often taken with only rulers or scales placed within the frame as a size reference. The graduation spacing on the calibrated rulers (in centimetres and millimetres, or in inches) is then used by forensic officers to determine the size of the crime-scene marks. Fingerprint marks and ballistic marks require precise measurement so that the image features can be used to match marks to criminal records, or otherwise be quantified. This is achieved by manually picking the locations of one or more graduation marks to determine the scale. Although this may seem easy, it can be time-consuming and is error prone, especially when the image resolution is insufficient to accurately identify the graduation spacing. Consider, for example, a photograph with a field of view of 0.5m taken with a 10-megapixel camera: it will have a spatial resolution of only about 5 pixels per mm. Even when pixel resolution is not a problem, such as when a flat-bed scanner is used, the precise calibration of the scanner itself remains an issue; see, for example, Poliakow et al. (2007).
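That resolution figure can be checked with simple arithmetic. The helper below is only illustrative: the 4:3 aspect ratio and the assumption that the 0.5m span lies along the shorter image side are ours, not the article's.

```python
def pixels_per_mm(megapixels, aspect=(4, 3), fov_mm=500.0):
    """Approximate pixels per millimetre when fov_mm of the scene spans
    the shorter side of a sensor with the given megapixel count and
    aspect ratio (assumed values; the article states only the result)."""
    w, h = aspect
    # width * height = megapixels * 1e6 and width : height = w : h
    short_side_px = (megapixels * 1e6 * min(w, h) / max(w, h)) ** 0.5
    return short_side_px / fov_mm

print(round(pixels_per_mm(10), 1))  # 10 MP, 4:3, 0.5 m span: ~5.5 px/mm
```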

The general problem of determining the size of objects from rulers or scales within the image is common to other disciplines, such as museum archiving, archaeology and medical imaging. Typically, the ruler graduation marks are sought by combining line filtering and line segment grouping. Commonly used approaches are Hough transform grouping (Illingworth and Kittler, 1988), eigenvalue analysis (Guru et al., 2004), and direct image gradient analysis (Nelson, 1994). Herrmann et al. presented a system for measuring the sizes of ancient coins by detecting the graduations on a ruler in the image (Herrmann et al., 2009; Zambanini and Kampel, 2009). Their method used a Fourier transform of the entire image to filter the input, suppressing the image of the coin and leaving the ruler graduation marks, and then a simple method to track along the ridges. The approach is relatively primitive and will only work on simple plain backgrounds when the ruler is presented either horizontally or vertically. They report an accuracy of about 1% in the detection of the graduation marks and a corresponding average error of about 1.19% in the diameter of the measured coins.
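A minimal sketch of this kind of frequency-domain suppression (our illustration of the general idea, not Herrmann et al.'s code): because ruler graduations are periodic, their energy concentrates in a few strong Fourier components, so zeroing all but the strongest components suppresses non-repeating content such as the coin.

```python
import numpy as np

def suppress_nonperiodic(image, keep_fraction=0.001):
    """Keep only the strongest Fourier components of a greyscale image
    (periodic structure such as ruler graduations) and zero the rest,
    suppressing non-repeating content."""
    F = np.fft.fft2(image - image.mean())
    mag = np.abs(F)
    threshold = np.quantile(mag, 1.0 - keep_fraction)
    F[mag < threshold] = 0.0          # drop weak, non-periodic components
    return np.real(np.fft.ifft2(F))
```

Here keep_fraction trades off how much non-periodic detail survives; the 0.001 default is an arbitrary illustrative choice.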

Poliakow et al. (2007) reported a detailed analysis of the problem of calibrating commodity flat-bed scanners for the purpose of digitising large numbers of astronomical plates. Their solution uses a pair of graduated, photolithographically etched glass rulers which are placed alongside the item to be scanned (the photographic plate). The rulers appear in the scanned image and can then be used to calibrate it. They do not, however, elaborate on any automated method for detecting the ruler graduations.

Ruler detection was used by Gooßen et al. (2008) and Supakul et al. (2012) to automatically stitch together overlapping radiographs. They present a four-stage ruler recognition method that uses a Radon transform to find the orientation of the ruler. The rulers present in the images act as specialised synthetic landmarks, and the method complements techniques that use anatomical landmarks for registration, e.g. Johnson and Christensen (2002). Their method matches a template ruler (and its graduations) to the given image: the ruler graduations are projected onto the ruler line and autocorrelated with the template, and optical character recognition is used to find the graduation numbers. This process allows them to register radiographs with an accuracy of 3mm or less (tested over 2000 image pairs). Lahfi et al. (2003) required the precise location and measurement of a specially designed plastic ruler placed in the field of a digital angiographic image during intervention; see also Raji et al. (2002). They used a correlation approach to match a template ruler to the image to enable the precise augmentation of a pre-operative segmented image.
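The projection-and-autocorrelation step can be sketched as follows (an illustration of the general technique, not Gooßen et al.'s implementation): once the graduation marks have been projected onto the ruler line, the graduation spacing appears as the first non-trivial peak of the projection's autocorrelation.

```python
import numpy as np

def spacing_from_autocorrelation(projection):
    """Estimate the graduation spacing (in pixels) of a 1-D projection
    of ruler marks as the lag of the first local maximum of its
    autocorrelation after the zero-lag peak."""
    p = projection - projection.mean()
    ac = np.correlate(p, p, mode='full')[len(p) - 1:]   # lags >= 0
    for lag in range(1, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return lag
    return None                       # no repeating structure found

# Synthetic projection with a mark every 10 pixels
marks = (np.arange(1000) % 10 < 2).astype(float)
estimate = spacing_from_autocorrelation(marks)  # → 10
```

Unlike the Fourier-peak approach, the autocorrelation peak can also be matched against a known template spacing, which is closer to what the radiograph-stitching work describes.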
