Machine Learning for Detecting Scallops in AUV Benthic Images: Targeting False Positives


Prasanna Kannappan (University of Delaware, USA), Herbert G. Tanner (University of Delaware, USA), Arthur C. Trembanis (University of Delaware, USA) and Justin H. Walker (University of Delaware, USA)
DOI: 10.4018/978-1-4666-9435-4.ch002

A large volume of image data, on the order of thousands to millions of images, can be generated by robotic marine surveys aimed at assessing organism populations. Manually processing and annotating individual images in such large datasets is not an attractive option. It would seem that computer vision and machine learning techniques could automate this process, yet to date, available automated detection and counting tools for scallops do not work well with noisy, low-resolution images and tend to produce very high false positive rates. In this chapter, we hone a recently developed method for automated scallop detection and counting with the goal of drastically reducing its false positive rate. In the process, we compare the performance of two customized false positive filtering alternatives: histograms of oriented gradients and weighted correlation template matching.
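To make the second filtering alternative concrete, the sketch below shows one common way to score a candidate detection against a reference template with a per-pixel weight mask: a weighted normalized cross-correlation, where high-weight pixels dominate the score. This is an illustrative formulation with hypothetical function names, not necessarily the exact method developed in the chapter.

```python
import numpy as np

def weighted_correlation(patch, template, weights):
    """Weighted normalized correlation between an image patch and a template.

    Pixels with larger weights contribute more to the score. Returns a
    value in [-1, 1]; 1 means a perfect (weighted) linear match.
    """
    w = weights / weights.sum()
    pm = (w * patch).sum()          # weighted mean of the patch
    tm = (w * template).sum()       # weighted mean of the template
    pc = patch - pm
    tc = template - tm
    num = (w * pc * tc).sum()
    den = np.sqrt((w * pc ** 2).sum() * (w * tc ** 2).sum())
    return num / den if den > 0 else 0.0

def match(image, template, weights):
    """Slide the template over the image; return the best score and its location."""
    H, W = image.shape
    h, w = template.shape
    best, loc = -1.0, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            s = weighted_correlation(image[i:i + h, j:j + w], template, weights)
            if s > best:
                best, loc = s, (i, j)
    return best, loc
```

In a false-positive filtering role, a candidate region flagged by the detector would be kept only if its best correlation score exceeds a chosen threshold; the weight mask lets the filter emphasize diagnostic parts of the scallop appearance (e.g., the shell rim) over the surrounding sediment.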
Chapter Preview


The 2011 Research Set-Aside project, titled “A Demonstration Sea Scallop Survey of the Federal Inshore Areas of the New York Bight using a Camera Mounted Autonomous Underwater Vehicle,” was a proof-of-concept that successfully used a digital, rapid-fire camera integrated with a Gavia AUV (Figure 1(c)) to collect a continuous record of photographs for mosaicking and subsequent scallop enumeration and size distribution assessment. In July 2011, data was collected over two separate five-day cruises (27 missions). Image transects were performed at depths of 25-50 m. The AUV continuously photographed the seafloor (see Figure 1(a)) along each transect at a constant altitude of 2 m above the seafloor. Spacing parallel sets of transects 4 m apart gave dense two-dimensional spatial coverage.

Figure 1.

(a) Seabed image with scallops shown in red circles; (b) Position of AUV strobe light and camera; (c) Schematics of the Gavia AUV. (©2014, Kannappan et al., Used with permission).

The camera on the AUV was a Point Grey Scorpion model 20SO (for details on the camera specification, see (Kannappan et al., 2014)). It was mounted inside the nose module of the vehicle, with its strobe light near the center of the AUV (see Figure 1(b)), and had a horizontal viewing angle of 44.65 degrees. The camera focus was manually fixed at 2 m and the resolution was 800×600 pixels. Given the viewing angle and distance to the object being photographed, each image captured an area of 1.86×1.40 m on the seafloor. Images were saved in JPEG format, with metadata that included position information (latitude, longitude, depth, altitude, pitch, heading, and roll). This information enabled manual annotation and counting of the number of scallops (Walker, 2013).
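The relationship between altitude, viewing angle, and imaged seafloor area can be sketched with an idealized pinhole model over a flat seafloor. Note that this simple geometry yields a somewhat smaller footprint than the 1.86×1.40 m reported above, so the reported figure presumably reflects additional calibration factors not captured here; the sketch only illustrates how the footprint scales with the stated parameters.

```python
import math

ALTITUDE_M = 2.0     # AUV altitude above the seafloor (m)
HFOV_DEG = 44.65     # horizontal viewing angle (degrees)
ASPECT = 800 / 600   # image aspect ratio (width / height)

def footprint(altitude_m, hfov_deg, aspect):
    """Approximate (width_m, height_m) of the seafloor area seen in one image,
    assuming a pinhole camera pointed straight down at a flat seafloor."""
    width = 2.0 * altitude_m * math.tan(math.radians(hfov_deg) / 2.0)
    return width, width / aspect

w, h = footprint(ALTITUDE_M, HFOV_DEG, ASPECT)  # roughly 1.64 x 1.23 m
```

Under this model, halving the altitude halves both footprint dimensions, which is why holding a constant 2 m altitude along each transect matters for consistent per-image coverage.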
