Communication and Text Entry by Gaze

Päivi Majaranta (University of Tampere, Finland)
DOI: 10.4018/978-1-61350-098-9.ch008

Abstract

There are several ways to write by gaze. In a typical setup, gaze direction is used to point and dwell-select letters on an on-screen keyboard. Alternatively, if the person cannot fixate, the eyes can be used as switches using blinks or rough gestures to select items. This chapter introduces different ways to enter text by gaze and reviews related research. We will discuss techniques to enhance text entry by gaze, such as word and letter prediction, and show how the possibility of adjusting the duration of the dwell time affects learning and typing speed. In addition, design issues such as keyboard layout and feedback are raised, with practical examples and guidelines that may aid in designing interfaces for gaze-based text entry.

Introduction

A serious accident, disease, or brainstem stroke may lead to a state in which a person is unable to move or talk. Often the eyes still function, even though the person is otherwise totally paralysed. In this situation, eye movements or gaze direction can be used as a means of communication. For example, the doctor may ask the patient to look up as a sign of agreement. One can also use a communication frame with pictures or letters attached to it. By looking at the letters, the disabled person can spell out words, and the conversation partner interprets the eye movements as words and sentences. In Figure 1, a boy communicates with his mother: he looks first at a letter and then at the colour button that corresponds to that letter's colour. Because the helper only needs to recognise the letter group and the colour, there is no need to distinguish between targets that lie close to each other. Feedback is given by speaking the letter aloud. More examples of non-electronic communication aids are given by Goossens' and Crain (1987) and Scott (1998).

Figure 1.

The person on the other side of the E-TRAN gaze communication frame acts as a human eye tracker, interpreting the gaze direction through the transparent board. A letter is chosen by first looking at it and then looking at the colour button that matches the letter's colour. Downloadable gaze communication frame templates, along with instructions for making and using them, are available on the COGAIN website (Eye Gaze Communication Board, 2010). Photo © 2005 COGAIN. Used with permission

A more advanced method of tracking the gaze direction is to use an eye tracking device, which allows the user to enter text and control a computer independently. An eye tracker, typically placed under the screen, follows the eye movements of a person looking at the screen. A computer program interprets the gaze direction and maps it to individual keys on a virtual keyboard. Gaze direction is thus used for pointing. The focused letter can then be selected either via a separate switch (e.g., a key on the keyboard) or by continuing to look at the letter, in which case dwell time is used for selection.
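The dwell-time selection described above can be sketched in a few lines of code. This is a minimal illustration, not the implementation of any particular eye tracking system; the class name, the 500 ms threshold, and the per-sample update interface are all assumptions made for the example.

```python
class DwellSelector:
    """Illustrative dwell-time selection: a key is selected once the gaze
    has rested on it continuously for at least dwell_time seconds."""

    def __init__(self, dwell_time=0.5):  # 500 ms is an assumed default
        self.dwell_time = dwell_time
        self.current_key = None      # key currently under the gaze point
        self.fixation_start = None   # time the gaze arrived on that key

    def update(self, key, timestamp):
        """Feed one gaze sample: the key under the gaze point (or None)
        and a timestamp in seconds. Returns the selected key when the
        dwell threshold is crossed, otherwise None."""
        if key != self.current_key:
            # Gaze moved to a different key: restart the dwell timer.
            self.current_key = key
            self.fixation_start = timestamp
            return None
        if key is not None and timestamp - self.fixation_start >= self.dwell_time:
            # Require a fresh dwell before the same key can repeat,
            # so that e.g. a double letter needs two full dwells.
            self.current_key = None
            return key
        return None
```

A shorter dwell time speeds up expert typing but increases the risk of unintended selections (the "Midas touch" problem), which is why adjustable dwell time matters, as discussed later in the chapter.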

There are several ways to write by gaze. If the person cannot fixate, the eyes can be used as switches: rough glances up or down, or blinks, can be used to select items. One can also write by gaze gestures; for example, a glance to the right and then down could form the letter ‘T’. Some systems, such as Dasher (Ward & MacKay, 2000), are based on a language model. Dasher predicts the letters the user is likely to write next, and the more probable letters are given more screen space than the less probable choices, which makes them easier and faster to hit by gaze. Different ways to enter text by gaze are introduced below. We will also discuss techniques to enhance text entry by gaze and show how the possibility of adjusting the duration of the dwell time affects learning and typing speed. In addition, we provide practical examples and guidelines that may aid in designing interfaces for gaze-based text entry. This chapter is largely adapted from the doctoral thesis by Majaranta (2009); the thesis is freely available online (link listed in the references) for readers who wish to access the full details of the research summarised below.
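The idea of giving probable letters more screen space can be sketched as follows. This is a toy illustration of probability-proportional target sizing, not Dasher's actual layout algorithm or language model; the function name and the example probabilities are assumptions for the sake of the example.

```python
def allocate_heights(letter_probs, screen_height=600.0):
    """Divide the screen height among candidate letters in proportion to
    their probability, so likelier letters become larger gaze targets.
    Returns a list of (letter, top, height) tuples, most probable first."""
    total = sum(letter_probs.values())
    regions, top = [], 0.0
    for letter, p in sorted(letter_probs.items(), key=lambda kv: -kv[1]):
        height = screen_height * p / total
        regions.append((letter, top, height))
        top += height
    return regions

# Toy language-model output: after some context, 'e' is judged most likely.
regions = allocate_heights({"e": 0.5, "t": 0.3, "z": 0.2}, screen_height=100.0)
```

With these assumed probabilities, ‘e’ receives half the screen height and ‘z’ only a fifth, so the common letter is a much easier gaze target than the rare one.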
