Space Boards: Combining Tangible Interfaces with the Surrounding Space via RGB-D Cameras

Evan Shellshear
DOI: 10.4018/IJCICG.2016010103

Abstract

This paper introduces a novel take on a well-known user interface that combines the advantages of a number of new technologies. In particular, it presents a new tangible interface with an interactive surrounding space. It demonstrates the technology in a compelling use case: a printed keyboard and hand-gesture-based mouse that provide easy-to-use text and virtual mouse input in situations where such input is difficult (e.g., virtual keyboards on tablets and smartphones) or non-existent (e.g., gaming consoles such as the Microsoft Kinect). It also examines other applications and design questions that arise from such an interface.

Introduction

In the last ten years there has been significant interest in tangible interfaces (Signer & Norrie, 2010). Tangible interfaces are part of an important objective to allow users to interact with computers and their surroundings in a natural and intuitive fashion. This has been the goal of many research endeavors beginning with the original vision of ubiquitous computing by Mark Weiser (Weiser, 1991). In particular, one field which has received much attention is the digital augmentation of paper and other surfaces (Signer & Norrie, 2010). Paper and similar surfaces have always interested researchers due to the cheap, natural and multifaceted user interface that paper affords (Holman, Vertegaal, Altosaar, Troje, & Johns, 2005). This research is extended here by providing a novel combination of an interactive surface with the space surrounding it.

The presented interface exploits advances in consumer hardware and software. In particular, the focus is on using technology that is cheap and easily available to the average consumer, to provide an interface that is accessible to as many people as possible. The technology used here is based on the recent wave of cheap hardware that combines a depth camera and a video camera in one device, such as the Microsoft Kinect (Microsoft Kinect, 2014), Asus Xtion (Asus Xtion, 2014), Intel RealSense camera (Intel RealSense Camera, 2014), etc. These cameras are not only cheap but also simplify development, because the two camera streams come from a single device and can easily be combined. These advantages are exploited here to provide a system that tracks a tangible interface with the video camera while tracking the user’s gestures and other objects with the depth camera. This setup makes it possible to create novel, intuitive applications that the average consumer can readily understand and interact with.
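As a rough illustration of how such a combined sensor can be driven from a single code path, the sketch below grabs paired colour and depth frames from an OpenNI2-compatible RGB-D camera (e.g., a Kinect or Xtion) using OpenCV. This is not the paper's implementation; it assumes an OpenCV build with OpenNI2 support and simply shows how the two streams are retrieved from one device.

```python
# Minimal sketch (not the paper's code): read paired colour and depth
# frames from an RGB-D camera via OpenCV's OpenNI2 backend.
# Assumes OpenCV was built with OpenNI2 support and a Kinect/Xtion is attached.
import cv2

cap = cv2.VideoCapture(cv2.CAP_OPENNI2)
if not cap.isOpened():
    raise RuntimeError("No OpenNI2-compatible RGB-D camera found")

while True:
    if not cap.grab():                     # grab one synchronized frame pair
        break
    ok_d, depth = cap.retrieve(flag=cv2.CAP_OPENNI_DEPTH_MAP)   # uint16, millimetres
    ok_c, color = cap.retrieve(flag=cv2.CAP_OPENNI_BGR_IMAGE)   # uint8 BGR image
    if not (ok_d and ok_c):
        continue

    # The colour image would be used to track the printed interface,
    # while the depth map tracks hands and objects above it.
    cv2.imshow("color", color)
    cv2.imshow("depth", (depth / 16).astype("uint8"))           # crude visualisation
    if cv2.waitKey(1) == 27:               # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```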

The interface presented here combines existing technologies in a novel way to grant the user unprecedented levels of flexibility and customization. Again, following on the idea of creating an interactive surface that uses easily accessible technologies, one of the major virtues of the design is that the interface can be customized by the user and then printed out with a standard inkjet printer. This allows the user to have multiple interfaces for different languages, different use scenarios (work, gaming, etc.) and different interaction paradigms (mouse touchpad, DJ deck, joystick, etc.). Due to the technology used, it is also possible for multiple users to interact simultaneously with the interface while still having their own personalized version of it.

In light of the previous discussion, the academic contributions of this paper are:

  • Presenting a novel, movable tangible interface utilizing commodity RGB-D cameras, which can be used by multiple users with a single camera,

  • Extending previous work by using an RGB-D camera, allowing one to move the tangible interface with a simple, easily printed marker, which can be extended to pictures or simply the interactive elements themselves,

  • Solving the problem of finger segmentation in images for tangible interfaces, thereby allowing more natural use of the device without requiring gloves or placing restrictions on the design of the interface,

  • Allowing user interaction with the space above the interface as well as on it by exploiting depth information (a minimal sketch of this idea follows the list).
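To make the last point concrete, the following sketch shows one plausible way to exploit depth information above a flat printed interface: given the depth of the board plane and a detected fingertip pixel, the finger's height above the surface decides whether the event is a touch on the board or a gesture in the space above it. The function names, thresholds and the assumption of a fronto-parallel board are illustrative only and are not taken from the paper.

```python
# Illustrative sketch (not the paper's algorithm): classify a fingertip as
# touching the printed board or hovering/gesturing above it, assuming the
# board is roughly parallel to the image plane at a known depth.
import numpy as np

TOUCH_MM = 10     # fingertip within 10 mm of the board counts as a touch
HOVER_MM = 150    # up to 150 mm above the board counts as in-air interaction

def classify_fingertip(depth_map, fingertip_px, board_depth_mm):
    """depth_map: uint16 depth image in millimetres,
    fingertip_px: (row, col) of the detected fingertip,
    board_depth_mm: calibrated depth of the printed board at that pixel."""
    r, c = fingertip_px
    finger_depth = float(depth_map[r, c])
    if finger_depth == 0:                 # 0 means "no depth reading" on most sensors
        return "unknown"
    height_above_board = board_depth_mm - finger_depth
    if height_above_board <= TOUCH_MM:
        return "touch"                    # e.g. press the printed key under the finger
    if height_above_board <= HOVER_MM:
        return "hover"                    # e.g. move a virtual mouse cursor
    return "outside"                      # too far above the board to be an interaction

# Example with synthetic data: a 480x640 board plane 800 mm away,
# with a fingertip reading 795 mm (i.e. 5 mm above the surface).
depth = np.full((480, 640), 800, dtype=np.uint16)
depth[240, 320] = 795
print(classify_fingertip(depth, (240, 320), 800.0))   # -> "touch"
```

In practice the board depth would be estimated per pixel from the tracked marker pose and the camera calibration rather than assumed constant, but the touch/hover distinction itself reduces to a simple comparison against the depth map.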

Apart from these contributions, further novel uses of the results presented here can be found in the "Use Case Example" section. In addition, the code developed for this paper has been published online under the MIT license so that other researchers can test and build on it. The code repository can be found at (Shellshear, 2015).
