FPGA-Based Object Detection and Motion Tracking in Micro- and Nanorobotics

Claas Diederichs, Sergej Fatikow
Copyright © 2014 | Pages: 11
DOI: 10.4018/978-1-4666-5125-8.ch010

Abstract

Object detection and classification is a key task in micro- and nanohandling. Microscopic imaging is often the only available sensing technique for obtaining information about the positions and orientations of objects. FPGA-based image processing is superior to state-of-the-art PC-based image processing in terms of achievable update rate, latency, and jitter. A connected component labeling algorithm is presented and analyzed for its suitability for high-speed object detection and classification. The features of connected components are discussed and analyzed for their compatibility with a single-pass connected component labeling approach, with a focus on features based on principal component analysis. It is shown that an FPGA implementation of the algorithm can be used for high-speed tool tracking as well as object classification inside optical microscopes. Furthermore, it is shown that the same implementation can detect and classify carbon nanotubes (CNTs) during image acquisition in a scanning electron microscope, allowing fast object detection before the whole image is captured.
Chapter Preview

Introduction

In micro- and nanohandling, the optical sensor is often the only way to obtain position information about tools and specimens. In microhandling, optical microscopes are used to observe, e.g., a manufacturing process. In contrast, a scanning electron microscope (SEM) is used for nanohandling operations, as the resolution of optical systems is not sufficient for these tasks. In both domains, image processing is used extensively when handling processes are fully automated. Based on the image sensor, objects are detected and classified. Additionally, the image sensor is often used as a motion tracking system for closed-loop positioning of tools (e.g. grippers or tips); this is called visual servoing. Even if other position sensors are available for the tools (e.g. internal position sensors of positioning systems), visual servoing of the end-effector is often required, e.g. to position it relative to a specimen. Several reliable algorithms are available for visual servoing (Sievers & Fatikow, 2006). However, the speed and accuracy of a closed-loop positioning system are constrained by the quality of the sensor used. Sensor quality is defined by characteristics such as resolution and noise, but also by timing characteristics such as update rate, latency, and update-rate deviation (jitter). In terms of these timing characteristics, state-of-the-art image processing on off-the-shelf computers has several drawbacks (Diederichs, 2010), illustrated by the code sketch after the following list:

  • The sensor's update rate is a limiting factor for the digital closed-loop control of a highly dynamic system. For vision-based sensor systems, the update rate is comparatively low because a full image has to be acquired and transferred. Common USB or FireWire cameras have update rates of 15 to 30 Hz.

  • The latency of a sensor describes the age of a sensor value. With high latency, the closed-loop control works with old data. Camera-based sensors have a high latency because an object position is calculated only after a full image has been captured from the camera. The latency of vision-based object tracking is therefore usually at least one update interval.

  • Jitter is the time variation of a periodic signal (e.g. the update rate) and adds uncertainty to closed-loop control. Jitter is a main problem in software-based object tracking on general-purpose CPUs because of the unpredictable scheduling of the operating system.
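
To make these drawbacks concrete, the following minimal C++ sketch shows a software visual servoing loop and marks where update rate, latency, and jitter arise. The camera and stage interfaces (grab_frame, find_tool_position, move_stage) are hypothetical placeholders for illustration, not an API from the chapter.

    #include <chrono>
    #include <cstdio>
    #include <thread>

    // Hypothetical placeholders for the camera and the positioning hardware.
    struct Frame { /* pixel data would live here */ };

    Frame grab_frame() {                   // blocks until a full image arrives
        std::this_thread::sleep_for(std::chrono::milliseconds(33)); // ~30 Hz camera
        return Frame{};
    }
    double find_tool_position(const Frame&) { return 0.0; }  // image processing
    void move_stage(double /*correction*/) {}                // actuator command

    int main() {
        using clock = std::chrono::steady_clock;
        const double target = 0.0;         // desired tool position
        auto last = clock::now();
        for (int i = 0; i < 100; ++i) {
            auto t0 = clock::now();
            Frame f = grab_frame();        // update rate: bounded by full-image acquisition
            double pos = find_tool_position(f);
            move_stage(target - pos);      // proportional correction (gain 1)
            auto t1 = clock::now();
            // Latency: the correction is based on an image that is already
            // (t1 - t0) old when the actuator command is issued.
            // Jitter: the variation of the loop period (t1 - last) over time.
            long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - last).count();
            std::printf("loop period: %lld ms\n", ms);
            last = t1;
        }
    }

On a general-purpose operating system, the printed loop period varies from iteration to iteration; this variation is exactly the jitter described above.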

There are different approaches to overcoming these limitations. First, to achieve higher update rates, high-speed cameras can be used; modern high-speed cameras reach update rates of up to 500 Hz. However, even on up-to-date computer systems, most state-of-the-art algorithms can reliably handle data rates only up to about 100 MP/s (megapixels per second, i.e. 100 Hz for a one-megapixel image) (Diederichs et al., 2012). Higher update rates can be achieved if only a region of interest (ROI) is processed, as the sketch below illustrates.
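
As a rough illustration of the ROI approach (the processing budget and window size are invented for this example), restricting processing to a small window around the last known tool position raises the achievable update rate by orders of magnitude:

    #include <cstdio>

    // Achievable update rate for a given processing budget in pixels per second.
    double update_rate_hz(int width, int height, double budget_pps = 100e6) {
        return budget_pps / (static_cast<double>(width) * height);
    }

    int main() {
        // Full one-megapixel frame vs. a 128x128 ROI around the last position.
        std::printf("full frame:  %.0f Hz\n", update_rate_hz(1024, 1024)); // ~95 Hz
        std::printf("128x128 ROI: %.0f Hz\n", update_rate_hz(128, 128));   // ~6104 Hz
    }

In practice the ROI must follow the tracked object, so a lost track forces a fall-back to full-frame processing.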

To reduce the latency, image processing can be started on partial data, before the full image is available in memory; this is possible if special frame grabbers are used. If multiple processing steps need to be performed for tracking, usually every step needs the full image in memory and writes a new image back to memory. The latency can be reduced further if all steps work on partial data. However, the algorithms need to be customized, as most available image-processing libraries (e.g. OpenCV, the Matrox Imaging Library) assume that the image is fully available. For image-processing algorithms that need random pixel access, working on partial data is not possible. The sketch below illustrates the partial-data idea for an algorithm that does not need random access.
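
The following is a minimal sketch of this single-pass idea: connected component labeling that consumes one already-thresholded scanline at a time and accumulates per-component features (area and centroid sums) without ever buffering the full image. It illustrates the general technique only and is not the authors' FPGA implementation; the binary input and 4-connectivity are assumptions.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Per-component feature accumulators (enough for area and centroid).
    struct Component { long area = 0, sum_x = 0, sum_y = 0; };

    struct SinglePassCCL {
        int width;
        std::vector<int> parent;       // union-find over provisional labels
        std::vector<Component> comps;  // feature accumulators, valid at root labels
        std::vector<int> prev_row;     // labels of the previous scanline

        explicit SinglePassCCL(int w) : width(w), prev_row(w, -1) {}

        int find(int a) { while (parent[a] != a) a = parent[a] = parent[parent[a]]; return a; }

        void merge(int a, int b) {
            a = find(a); b = find(b);
            if (a == b) return;
            parent[b] = a;             // fold b's accumulators into the new root a
            comps[a].area  += comps[b].area;
            comps[a].sum_x += comps[b].sum_x;
            comps[a].sum_y += comps[b].sum_y;
        }

        // Consume one scanline of binary (already thresholded) pixels.
        void push_row(int y, const std::vector<uint8_t>& row) {
            std::vector<int> cur(width, -1);
            for (int x = 0; x < width; ++x) {
                if (!row[x]) continue;
                int left = (x > 0) ? cur[x - 1] : -1;
                int up   = prev_row[x];
                int label;
                if (left < 0 && up < 0) {          // new provisional component
                    label = (int)parent.size();
                    parent.push_back(label);
                    comps.push_back(Component{});
                } else if (left >= 0 && up >= 0) { // both neighbors set: merge them
                    merge(left, up);
                    label = find(left);
                } else {
                    label = find(left >= 0 ? left : up);
                }
                cur[x] = label;
                Component& c = comps[find(label)];
                c.area += 1; c.sum_x += x; c.sum_y += y;
            }
            prev_row = cur;
        }
    };

    int main() {
        // Three blobs in an 8x4 binary image, streamed row by row.
        std::vector<std::vector<uint8_t>> img = {
            {1,1,0,0,0,0,1,1},
            {1,1,0,0,0,0,1,1},
            {0,0,0,0,0,0,0,0},
            {0,0,1,1,1,0,0,0},
        };
        SinglePassCCL ccl(8);
        for (int y = 0; y < (int)img.size(); ++y) ccl.push_row(y, img[y]);
        for (int i = 0; i < (int)ccl.parent.size(); ++i) {
            if (ccl.find(i) != i || ccl.comps[i].area == 0) continue;
            const Component& c = ccl.comps[i];
            std::printf("component: area=%ld centroid=(%.1f, %.1f)\n",
                        c.area, (double)c.sum_x / c.area, (double)c.sum_y / c.area);
        }
    }

Because every pixel is touched exactly once and only two scanlines of labels are kept, this structure maps naturally onto a hardware pipeline that processes pixels as they arrive from the sensor.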

Jitter can be reduced by using a real-time operating system. Even with a real-time operating system, jitter will occur unless the image-processing task has the highest priority. Additionally, the runtime of several algorithms is not constant but depends on image content; for instance, a flood-fill algorithm runs faster on an image that consists only of background than on an image with several scattered objects (see the sketch below).
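
The content dependence is easy to see in a standard stack-based flood fill: the work done is proportional to the number of pixels visited, which depends entirely on what the image contains. A minimal sketch (4-connectivity assumed; not code from the chapter):

    #include <cstdint>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Returns the number of pixels visited; this count (~runtime) depends
    // entirely on image content.
    long flood_fill(std::vector<std::vector<uint8_t>>& img, int x, int y) {
        const int h = (int)img.size(), w = (int)img[0].size();
        long visited = 0;
        std::vector<std::pair<int,int>> stack{{x, y}};
        while (!stack.empty()) {
            auto [cx, cy] = stack.back();
            stack.pop_back();
            if (cx < 0 || cy < 0 || cx >= w || cy >= h || img[cy][cx] != 1) continue;
            img[cy][cx] = 2;                // mark as filled
            ++visited;
            stack.push_back({cx + 1, cy});  // 4-connected neighbors
            stack.push_back({cx - 1, cy});
            stack.push_back({cx, cy + 1});
            stack.push_back({cx, cy - 1});
        }
        return visited;
    }

    int main() {
        std::vector<std::vector<uint8_t>> empty(64, std::vector<uint8_t>(64, 0));
        std::vector<std::vector<uint8_t>> full(64, std::vector<uint8_t>(64, 1));
        std::printf("background only: %ld pixels visited\n", flood_fill(empty, 0, 0)); // 0
        std::printf("large object:    %ld pixels visited\n", flood_fill(full, 0, 0));  // 4096
    }

The first call touches no pixels while the second touches all 4096, so the same algorithm has very different runtimes on the two frames.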

All three timing challenges can be addressed with hardware-based image processing. Field-programmable gate arrays (FPGAs) are often used to provide configurable hardware for such embedded systems.
