Non-Manual Control Devices: Direct Brain-Computer Interaction

Reinhold Scherer, Rajesh Rao
DOI: 10.4018/978-1-60566-206-0.ch015

Abstract

Brain-computer interface (BCI) technology augments the human capability to interact with the environment by directly linking the brain to artificial devices. The first generation of BCIs provided simple 1D control to select targets on a screen or to trigger pre-defined motion sequences in paralyzed limbs by means of functional electrical stimulation. Today's BCIs can provide users with on-demand access to assistive robotic devices, Virtual Reality environments, and standard software applications such as Internet browsers. Here, we introduce readers to BCIs and review the basic principles and methodologies underlying their operation. We illustrate the capabilities and limitations of modern BCI systems by discussing two practical examples: BCI-based control of a humanoid robot for physical manipulation and transport of objects in an indoor environment, and BCI-based interaction with the popular global navigation program Google Earth.
Chapter Preview

Background

Figure 1 illustrates the basic steps involved in direct brain-computer interaction. To establish closed-loop interaction, the user’s brain activity must first be monitored and digitized (signal acquisition). Second, signal processing methods extract features of interest from the acquired signal, and classification (or regression) methods translate these features into control commands for a device (pattern recognition and machine learning). Finally, the user perceives the initiated action, and this sensory feedback closes the loop.

Figure 1. Basic steps involved in direct brain-computer interaction
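
To make these steps concrete, the sketch below outlines the closed loop in Python. The function and object names (acquire, device, the band limits) are illustrative assumptions, not code from the chapter; any fitted scikit-learn-style classifier could stand in for the translation stage.

```python
import numpy as np

def bandpower(epoch, fs, band):
    """Mean spectral power per channel in a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[:, mask].mean(axis=1)

def bci_loop(acquire, classifier, device, fs=250):
    """Closed loop: acquire -> extract features -> translate -> feedback."""
    for epoch in acquire():                        # 1. signal acquisition (channels x samples)
        mu = bandpower(epoch, fs, (8, 12))         # 2. feature extraction
        beta = bandpower(epoch, fs, (18, 26))
        features = np.concatenate([mu, beta])[None, :]
        command = classifier.predict(features)[0]  # 3. translation into a control command
        device.execute(command)                    # 4. action; perceived feedback closes the loop
```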

The following sections briefly review available recording technologies, brain signals used for operating BCIs, and commonly used pattern recognition and machine learning methods.

Key Terms in this Chapter

Feature Extraction: Transformation of input data into a set of features. Features are distinctive properties of input patterns that help differentiate between the categories of input patterns.

Electroencephalogram (EEG): Bioelectrical brain activity recorded non-invasively from electrodes placed on the scalp.

P300 Event-Related Potential (ERP): An ERP is an electrical potential shift in the EEG that is time-locked to a perceptual, cognitive, or motor event. The P300 is an ERP component that occurs about 300 ms after a visual stimulus to which the user consciously attends.
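
Because ERPs are time-locked while background EEG is not, averaging stimulus-locked epochs makes components such as the P300 visible. A minimal sketch (array shapes and names are assumptions for illustration):

```python
import numpy as np

def average_erp(eeg, stimulus_onsets, fs, window_s=0.8):
    """Average stimulus-locked epochs from a (channels x samples) EEG array.

    Activity that is not time-locked to the attended stimuli averages out,
    leaving the ERP; for a P300, the peak appears near 300 ms post-stimulus.
    """
    n = int(window_s * fs)
    epochs = np.stack([eeg[:, t:t + n] for t in stimulus_onsets])
    return epochs.mean(axis=0)  # (channels x samples) average waveform
```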

Motor Imagery (MI): Mental imagination of movements.

Neuroprosthesis: A device that substitutes for a motor, sensory, or cognitive capability that has been lost or impaired due to injury or disease.

Classification: Assignment of labels to input patterns based on their extracted features.
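
In BCI practice, linear discriminant analysis (LDA) is a common choice for this step. A sketch using scikit-learn, with placeholder data standing in for real feature vectors and labels:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder data: one band-power feature vector per training trial,
# labeled with the class the user produced (e.g. 0 = left-hand motor
# imagery, 1 = right-hand motor imagery).
X_train = np.random.randn(120, 4)
y_train = np.random.randint(0, 2, size=120)

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
label = clf.predict(X_train[:1])[0]  # predicted label -> device command
```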

Brain-Computer Interface (BCI): A communication system that accepts commands directly from the human brain without requiring any muscle activity.

Humanoid Robot: Autonomous robot whose design is based on the human body.

Event-Related Desynchronization (ERD) and Synchronization (ERS): Phenomena reflecting changes in sensorimotor brain activity, observed as an amplitude decrease (ERD) or increase (ERS) of oscillatory EEG components. ERD and ERS are time-locked, but not phase-locked, to an event.
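
ERD/ERS is conventionally quantified as the percentage band-power change relative to a pre-event reference interval; a minimal sketch of that definition:

```python
import numpy as np

def erd_percent(power_activity, power_reference):
    """Relative band-power change: negative = ERD, positive = ERS."""
    return 100.0 * (power_activity - power_reference) / power_reference

# Example: mu-band power drops from 10.0 to 6.0 uV^2 during motor imagery
print(erd_percent(np.array([6.0]), np.array([10.0])))  # -> [-40.]
```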

Virtual Environment (VE): Computer-simulated environment mimicking the real (or an imaginary) world.
