Calligraphic Video: Using the Body’s Intuition of Matter

Sha Xin Wei
DOI: 10.4018/978-1-4666-0285-4.ch005

Abstract

Since 1984, Graphical User Interfaces have typically relied on visual icons that mimic physical objects, like the folder, button, and trash can, or canonical geometric elements, like menus and spreadsheet cells. GUIs leverage our intuition about the physical environment. But the world can be thought of as being made of stuff as well as things. Making interfaces from this point of view requires a way to simulate the physics of stuff in realtime response to continuous gesture, driven by behavior logic that can be understood by both the user and the designer. The author argues for leveraging the corporeal intuition about heat flow, water, and smoke that people learn from birth to develop interfaces at the density of matter, which in turn leverage the state of the art in computational physics.
Chapter Preview

An Approach Based On Kinesthetic Intuition

In 2001, I built with colleagues1 a responsive environment in which wireless sensors beamed accelerometer data to OpenGL textures that were mapped onto a polygonal mesh. This textured mesh was projected onto the floor from a height of 20 feet, producing moving “wings” registered to the body of the participant (see Figure 1).

Figure 1.

Participants in the TGarden:TG2001 responsive environment, under projected OpenGL meshes that morph according to their movement. Ars Electronica 2001.


The mesh width varied not according to some programmed clock but as a function of the actual instantaneous movement of the participant’s body. Despite the crude graphics, when jumping on the hard floor onto which this responsive mesh was projected, one felt as if one were jumping on an elastic rubber sheet. I concluded that what gave such a strong sense of elasticity to the projected mesh was the nearly zero-latency synchrony of the mesh’s grid size with the vertical displacement of the participant’s body. This motivated the strategy of using semantically shallow models that do not infer the cognitive, intentional, or emotional state of the participant, but instead simulate the physics driving the graphics animation, with perceptibly negligible latency. A major limitation of that early work was the coarse resolution of the 3D geometry that we could render and drive in realtime from sensor data. In subsequent work, strategically forgoing 3D graphics freed up computational overhead to compute and present much richer 2D textures.
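To make the strategy concrete, here is a minimal Python sketch (hypothetical names and constants, not the TGarden code) of such a semantically shallow mapping: the mesh’s grid spacing is recomputed every frame as a direct function of accelerometer magnitude, so the only latency is that of the sensor-to-render loop itself.

```python
import math

REST_SPACING = 0.05   # grid spacing of the mesh at rest (assumed value)
GAIN = 0.02           # how strongly movement widens the grid (assumed value)

def grid_spacing(accel_xyz):
    """Map one accelerometer sample directly to mesh grid spacing.

    Semantically shallow on purpose: no inference about the participant's
    cognitive or emotional state, just an instantaneous function of body
    movement, so perceived latency is only that of the sensing/render loop.
    """
    ax, ay, az = accel_xyz
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return REST_SPACING * (1.0 + GAIN * magnitude)

# Hypothetical per-frame driver:
#   spacing = grid_spacing(read_sensor())  # read_sensor() is assumed
#   rebuild_mesh(spacing)                  # rebuild_mesh() is assumed
#   render()
```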

Normal mapping, for example, uses pre-computed perturbed normals from a bump map stored within a texture. The advantage of this method is that it adds extra detail that reacts to light without altering the geometry, which is very useful for small details such as reliefs on a textured surface. Cohen (1998), for example, shows how the polygon count of a complex mesh can be reduced while maintaining detail.
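To illustrate the mechanics (a generic sketch, not code from the chapter): the heart of normal mapping is decoding a pre-computed normal stored as an RGB texel and substituting it for the geometric normal in the lighting equation. A minimal Lambertian version in Python:

```python
import math

def decode_normal(rgb):
    """Decode a tangent-space normal from an 8-bit RGB texel.

    Each channel maps [0, 255] -> [-1, 1]; the blue channel holds the
    'outward' component, which is why normal maps look bluish overall.
    """
    n = [c / 127.5 - 1.0 for c in rgb]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

def lambert(normal, light_dir):
    """Diffuse intensity: clamp(N . L, 0, 1); light_dir must be unit length."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, min(1.0, n_dot_l))

# A texel whose stored normal tilts toward +x reacts to a head-on light
# as if the surface were sloped, even though the geometry stays flat:
texel = (200, 128, 220)
print(lambert(decode_normal(texel), (0.0, 0.0, 1.0)))
```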

If we divorce this simple technique from its 3D context and apply it to a 2D surface covering the screen, then geometry is no longer an issue. Detail is restricted only by the size of the source texture, which can provide the richest detail possible as it reaches the limit case of one texel per on-screen pixel.
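Taken to that limit case, the “geometry” collapses to a single screen-aligned surface, and every pixel samples its own normal from the texture. A vectorized sketch, with NumPy standing in for a fragment shader purely for illustration:

```python
import numpy as np

def shade_fullscreen(normal_map, light_dir):
    """Light a flat, screen-covering 2D surface from a per-pixel normal map.

    normal_map: (H, W, 3) uint8 texture at screen resolution, so detail is
    bounded by the texel/pixel count rather than by any polygon budget.
    """
    n = normal_map.astype(np.float32) / 127.5 - 1.0            # decode to [-1, 1]
    n /= np.linalg.norm(n, axis=-1, keepdims=True)             # renormalize
    l = np.asarray(light_dir, dtype=np.float32)
    l /= np.linalg.norm(l)
    intensity = np.clip(np.einsum('hwc,c->hw', n, l), 0.0, 1.0)  # per-pixel N . L
    return (intensity * 255.0).astype(np.uint8)                # grayscale frame

# A featureless map (all normals facing the viewer) shades uniformly;
# any visible relief comes entirely from the texture, not from geometry:
flat = np.full((480, 640, 3), (128, 128, 255), dtype=np.uint8)
frame = shade_fullscreen(flat, (0.3, 0.3, 1.0))
```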
