Localization and Context Determination for Cyber-Physical Systems Based on 3D Imaging

Hannes Plank (Infineon Technologies Austria AG, Austria), Josef Steinbaeck (Infineon Technologies Austria AG, Austria), Norbert Druml (Independent Researcher, Austria), Christian Steger (Graz University of Technology, Austria) and Gerald Holweg (Infineon Technologies Austria AG, Austria)
Copyright: © 2018 |Pages: 26
DOI: 10.4018/978-1-5225-2845-6.ch001


In recent years, consumer electronics have become increasingly location- and context-aware. Novel applications such as augmented and virtual reality place high demands on the precision, latency, and update rate of their tracking solutions. 3D imaging systems have developed rapidly in the past years. By enabling a manifold of systems to become location- and context-aware, 3D imaging has the potential to become part of everyone's daily life. In this chapter, we discuss 3D imaging technologies and their applications in localization, tracking, and 3D context determination. Current technologies and key concepts are depicted, and open issues are investigated. The novel concept of location-aware optical communication based on Time-of-Flight depth sensors is introduced. This communication method might close the gap between high-performance tracking and localization. The chapter finally provides an outlook on future concepts and work-in-progress technologies, which might introduce a new set of paradigms for location-aware cyber-physical systems in the Internet of Things.
Chapter Preview


3D imaging technologies have seen rapid development in the past years. The introduction of the Microsoft Kinect depth sensor to the consumer market in 2010 triggered massive research interest and effort. In 2016, the first mass-produced smartphone featuring Time-of-Flight depth sensing appeared. The availability of such ubiquitous and miniaturized depth sensing solutions can tremendously help any kind of electronic device sense and understand its environment.

A crucial part of operation for certain devices is localization. While depth sensors provide geometric information about the immediate surroundings, determining location and orientation within a given coordinate system is a challenge of its own. This chapter explores the opportunities depth sensing systems offer for localization, with a focus on applications in fields such as consumer electronics, the Internet of Things, and autonomous robots. The localization and tracking of electronic devices has a long history and has relied on a variety of different principles. This work focuses on high-performance localization based on optical and sensor-fusion solutions. Localization principles can generally be categorized into passive, guided, and cooperative solutions.

A passive system is able to determine its position in a local or global coordinate system without external help. An increasing number of applications also require information about the orientation of the device. The combination of position and rotation sensing is often referred to as pose determination. A pose has six degrees of freedom (DoF) and completely describes the static position and orientation of an entity in 3D space: each axis contributes one degree of freedom for translation along it and one for rotation about it. Passive 6-DoF localization is often used in computer-vision-based positioning systems, where features are matched against prerecorded databases. Early examples are cruise missiles using terrain contours for navigation.
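The six degrees of freedom of a pose can be illustrated with a minimal sketch; the `Pose` class below is a hypothetical data structure for this chapter, not taken from any particular library, representing three translational and three rotational degrees of freedom:

```python
import math

class Pose:
    """Minimal 6-DoF pose: translation along and rotation about each axis."""

    def __init__(self, x=0.0, y=0.0, z=0.0, roll=0.0, pitch=0.0, yaw=0.0):
        self.position = (x, y, z)              # translation along x, y, z
        self.orientation = (roll, pitch, yaw)  # rotation about x, y, z (radians)

    def dof(self):
        # Three positional plus three rotational degrees of freedom.
        return len(self.position) + len(self.orientation)

# A device one meter forward, rotated 90 degrees about the vertical axis.
pose = Pose(x=1.0, yaw=math.pi / 2)
print(pose.dof())  # 6
```

Real systems typically represent the rotational part with quaternions or rotation matrices to avoid gimbal lock, but the degree-of-freedom count is the same.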

A well-known example of guided localization is GPS, where devices use the aid of satellites to determine their position. Cooperative localization solutions use a communication channel, which is often employed for active identification and position forwarding. Optical tracking using image sensors and active markers is an example of cooperative tracking: an external base station with an image sensor can observe a tracked device equipped with active LED markers and toggle the LEDs for identification. Another example is beacon-based systems, where active beacons forward information about their location.

When classifying the location-awareness of cyber-physical systems, it is important to distinguish between localization and tracking. While these terms are sometimes used ambiguously, tracking commonly refers to a relative context, where the registration of movements is important. Tracking the pose of a device does not always yield a position within a meaningful coordinate system; however, relative position and rotation changes can be detected. For certain applications this is sufficient, and no broader localization is required. Examples of such systems are instruments measuring rotations, such as gyroscopes or compasses, some 3D scanning solutions, and human interface devices.

Localization, by contrast, is often associated with position determination without a focus on detecting relative pose changes. Many location-aware systems combine tracking and localization to achieve localization at a high update rate. Tracking and localization are often performed by different sensors, because localization solutions frequently lack the accuracy needed to track relative pose changes. While localization provides the position and orientation within a greater context, tracking sensors provide the accuracy and update rate required by the application.

A good example of sensor fusion for localization and tracking is Wikitude (2016), a smartphone application that provides augmented reality. It annotates the video stream of the internal camera with meaningful information about the environment and displays it on the screen. GPS or Wi-Fi is used for positioning. The absolute orientation is determined by the gravity and compass sensors, while the gyroscope tracks movements to enable a high update rate for the rotation. This makes it possible to robustly attach information to landmarks and directions in the smartphone video stream.
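The fusion pattern described above, a low-rate absolute reference correcting a high-rate relative sensor, can be sketched as a simple complementary filter. This is an illustrative sketch only, not Wikitude's actual implementation; the function name and the blending factor `alpha` are assumptions made for the example:

```python
def fuse_heading(prev_heading, gyro_rate, dt, compass_heading=None, alpha=0.98):
    """Blend high-rate gyro integration with a low-rate absolute heading.

    The gyroscope gives fast, smooth relative rotation but drifts over time;
    the compass gives an absolute but noisy, low-rate heading that bounds
    the drift whenever a reading is available.
    """
    # Integrate the gyro angular rate for a high-update-rate estimate.
    heading = prev_heading + gyro_rate * dt
    if compass_heading is not None:
        # Blend in the absolute reference to correct long-term drift.
        heading = alpha * heading + (1.0 - alpha) * compass_heading
    return heading

# Example: 100 Hz gyro updates with an occasional compass correction.
h = 0.5  # initial heading estimate in radians
for step in range(100):
    compass = 0.0 if step % 10 == 0 else None  # compass available at 10 Hz
    h = fuse_heading(h, gyro_rate=0.0, dt=0.01, compass_heading=compass)
```

The same structure generalizes to position, where GPS or Wi-Fi provides the absolute fix and inertial sensors fill in the motion between fixes; production systems typically use a Kalman filter rather than a fixed blending factor.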
