Using a Hands-Free System to Manage Common Devices in Constrained Conditions

DOI: 10.4018/978-1-5225-0435-1.ch004

Abstract

As computing equipment becomes ubiquitous, new sets of interfacing devices need to be developed and properly adapted to the conditions in which this equipment is used. Interacting with machines may be difficult when common interfacing devices are impractical: when wearing certain clothing, when doing certain dirty jobs, or for people with accessibility needs. In the last decades a new set of input devices has become available, including 3D sensors, which allow interaction with machines without touching any device. This chapter presents two prototype solutions supported by one of these 3D sensors, the Leap Motion: one to manage appliances and other devices in a building, and one for the picking and loading of vehicles in a warehouse. The first case is contextualized in the area of the Internet of Things (IoT) and the load scheduling of appliances, a decisive factor in reducing a building's electricity costs. The second case is presented as a solution for the distribution of fresh and frozen goods, where workers wear thick clothing and gloves to carry out their work.

Introduction

Definitions of Human-Computer Interaction (HCI) are easy to find. Most of them state that human-computer interaction involves the study, planning, design, and use of the interaction between users and computers. HCI, a term introduced more than three decades ago in works such as (Card, Moran, & Newell, 1980; Carlisle, 1976), is often regarded as the intersection of computer science, the behavioral sciences, design, and media studies, among other fields of study. It is therefore easy to recognize that the boundaries of HCI are relatively fuzzy, encompassing a wide range of fields of practice. Historically, and in a non-exhaustive way, HCI evolved from sets of switches, punched cards, monitors, keyboards, mouse pointers, touchscreens, and motion-detection devices to bionic interfaces. HCI is changing fast. The dialogs between the computer HAL 9000 and humans in the 1968 film “2001: A Space Odyssey” (Kubrick, 1968), the conversations with Vox 114, the artificially intelligent library host hologram in “The Time Machine”, which communicates and interacts naturally with a time traveler (Wells, 2002), the invasive neural interfaces presented in “The Matrix” (Wachowski & Wachowski, 1999), and many of the interaction technologies presented in “Minority Report” (Spielberg, 2002) were once science fiction, but in a broad sense they are no longer.

The truth is that, as Jonathan Grudin of the Microsoft Corporation claims, HCI is a moving target (Grudin, 2012). The future of HCI is expected to rest on the ubiquitous and continuous presence of devices through which computers communicate to give universal access to data and computational services. Future users expect highly functional systems in which, among other things, accessing those functionalities is natural, with the mass availability of computer graphics, high-bandwidth interaction, and a wide variety of displays (e.g., flexible, large and thin, or on common surfaces).

Most computers and mobile devices already have the computational capacity and are equipped with devices to mimic human senses such as sight and hearing. With the appropriate sensors, other “feelings” can be achieved as well, such as temperature, taste, smell, touch, balance, or acceleration (Bhowmik, 2014). The input devices of these computers can be used to control machines in a natural and intuitive way, giving a new dimension to traditional user interfaces.

There is an extensive list of works that study “human senses” devices for interacting with computers. Breen et al. (2014) survey the key concepts and underlying technologies for voice and language understanding (including voice recognition, hardware optimization, speech synthesis, and natural language understanding). A multimodal approach to human-computer interaction focused on body, gesture, gaze, and affective interaction is presented in (Jaimes & Sebe, 2007). Senses like smell and taste are starting to be mimicked more effectively. For instance, (Villarreal & Gordillo, 2013) presents the results of a sensor model of aspiration and the design of a smell system device inspired by biological processes. In (Halder et al., 2012), the development of a polymer membrane is addressed, based on potentiometric taste sensors with efficient selectivity and sensitivity, to mimic the mammalian tongue in measuring basic tastes such as saltiness, sourness, bitterness, sweetness, and umami (savoriness).

Besides the usual keyboard/mouse or touchscreen devices, and driven among other forces by the game industry, one of the most developed types of interaction is touchless gesture recognition. For this purpose, several types of sensors can be used, such as embedded cameras or mobile 3D sensors, among them the Structure Sensor (Structure Sensor, 2015), the Leap Motion (Leap Motion, 2015), the Kinect (Kinect, 2015), and the zSense (Withana, Peiris, Samarasekara, & Nanayakkara, 2015).
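As a concrete illustration of touchless input, the following is a minimal sketch of hand tracking with the Leap Motion, assuming the legacy Leap Motion Python SDK (v2); the toggle_device callback and the grab-strength threshold are hypothetical choices for illustration, not the method implemented in this chapter's prototypes.

```python
import sys
import Leap  # legacy Leap Motion SDK v2 Python bindings


def toggle_device():
    # Hypothetical actuator hook: in a prototype like this chapter's, it
    # would switch an appliance or confirm a picking operation.
    print("Closed-fist gesture recognized: toggling device")


class TouchlessListener(Leap.Listener):
    def on_connect(self, controller):
        print("Leap Motion sensor connected")

    def on_frame(self, controller):
        frame = controller.frame()
        for hand in frame.hands:
            # grab_strength ranges from 0.0 (open hand) to 1.0 (closed fist);
            # a real application would debounce this per-frame event.
            if hand.grab_strength > 0.9:
                toggle_device()


controller = Leap.Controller()
listener = TouchlessListener()
controller.add_listener(listener)

# Keep the process alive until the user presses Enter.
sys.stdin.readline()
controller.remove_listener(listener)
```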

Key Terms in this Chapter

Human-Computer Interaction: The study of interaction between users and computers, often regarded as the intersection of computer science, behavioral sciences, design and several other fields of study.

Gesture Recognition: Computational interpretation of gestures made by humans. Gestures are, in general, input commands to a computational system, and can be recognized using technologies such as accelerometers, gyroscopes, computer vision, or ultrasound.

Vehicle Routing Problem: A combinatorial optimization and integer programming problem seeking to serve a number of customers with a fleet of vehicles. Variants include the need to satisfy certain restrictions, such as vehicle capacities and clients' time windows.
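As an illustration of the kind of heuristic commonly applied to the capacitated variant, the following is a minimal nearest-neighbour sketch; the function name and data layout are assumptions made for this example, not an algorithm taken from the chapter.

```python
from math import dist  # Euclidean distance, Python 3.8+


def greedy_cvrp(depot, customers, demand, capacity):
    """Nearest-neighbour sketch for the capacitated VRP (illustrative only).

    depot: (x, y) coordinates; customers: {name: (x, y)};
    demand: {name: units}; capacity: units carried per vehicle.
    Returns a list of routes, each starting and ending at the depot.
    """
    if any(demand[c] > capacity for c in customers):
        raise ValueError("a customer's demand exceeds the vehicle capacity")
    unserved = set(customers)
    routes = []
    while unserved:
        load, pos, route = 0, depot, ["depot"]
        while True:
            # nearest unserved customer that still fits in the vehicle
            feasible = [c for c in unserved if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(pos, customers[c]))
            route.append(nxt)
            load += demand[nxt]
            pos = customers[nxt]
            unserved.discard(nxt)
        route.append("depot")
        routes.append(route)
    return routes


# Example: two vehicles of capacity 10 serving four customers.
print(greedy_cvrp((0, 0),
                  {"a": (1, 0), "b": (2, 0), "c": (0, 3), "d": (0, 4)},
                  {"a": 4, "b": 4, "c": 5, "d": 5}, 10))
```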

Hands-Free Interaction System: A system capable of accepting input commands from a user without requiring the user to touch any device.

Load Scheduling: A manual, semi-automatic, or automatic procedure used to schedule the electrical loads in an infrastructure, usually in order to achieve electrical or monetary savings.
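The sketch below illustrates one simple greedy form of such a procedure, assuming per-slot tariff prices and a cap on simultaneous power; it is an illustrative example, not the scheduling method used in the chapter's prototype.

```python
def schedule_loads(loads, tariff, cap_kw):
    """Greedy load-scheduling sketch (illustrative only).

    loads: list of (name, power_kw, duration_slots) deferrable appliances;
    tariff: price per time slot; cap_kw: maximum simultaneous power.
    Assigns each load to the cheapest feasible window of consecutive slots.
    """
    usage = [0.0] * len(tariff)  # power already committed per slot
    plan = {}
    # Place the most energy-hungry appliances first.
    for name, kw, dur in sorted(loads, key=lambda l: -l[1] * l[2]):
        windows = range(len(tariff) - dur + 1)
        feasible = [s for s in windows
                    if all(usage[t] + kw <= cap_kw for t in range(s, s + dur))]
        if not feasible:
            continue  # this load cannot be placed under the power cap
        start = min(feasible, key=lambda s: sum(tariff[s:s + dur]))
        for t in range(start, start + dur):
            usage[t] += kw
        plan[name] = start
    return plan


# Example: three appliances over 8 hourly slots with a cheap mid-day tariff.
tariff = [0.20, 0.20, 0.10, 0.10, 0.10, 0.25, 0.25, 0.25]
loads = [("washer", 2.0, 2), ("dryer", 3.0, 1), ("dishwasher", 1.5, 2)]
print(schedule_loads(loads, tariff, cap_kw=4.0))
```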

Internet-Of-Things: An environment in which every object is provided with a unique identifier and has the ability to transfer data (communicate) over a network. Often associated with the ability of consumer electronics to communicate and receive orders over a network.
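As a small illustration of this kind of communication, the sketch below shows an appliance publishing its state and listening for commands over MQTT, assuming the paho-mqtt (1.x) Python library; the broker address, topic layout, and device identifier are hypothetical.

```python
import json
import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x library


# Hypothetical broker and device: each appliance publishes under a unique ID.
BROKER = "broker.example.local"
DEVICE_ID = "kitchen/dishwasher-01"

client = mqtt.Client()
client.connect(BROKER, 1883)

# Report the current state and listen for commands addressed to this device.
client.publish(f"home/{DEVICE_ID}/state", json.dumps({"power": "off"}))
client.subscribe(f"home/{DEVICE_ID}/cmd")
client.on_message = lambda c, userdata, msg: print("command:", msg.payload)
client.loop_forever()
```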

Leap Motion Sensor: A sensor that, using a combination of software and hardware, tracks the movement of hands and fingers with very low latency and converts it into 3D input, allowing interaction with digital content in virtual and augmented reality.
