Wearable and Ubiquitous Video Data Management for Computational Augmentation of Human Memory
Tatsuyuki Kawamura, Takahiro Ueoka, Yasuyuki Kono, and Masatsugu Kidode (Nara Institute of Science and Technology, Japan)
Copyright: © 2008
This chapter introduces video data management techniques for the computational augmentation of human memory, i.e., augmented memory, on wearable and ubiquitous computers used in everyday life. The ultimate goal of augmented memory is to enable users to draw on human memories and multimedia data seamlessly, anywhere and anytime. In particular, video recorded from a user's viewpoint is one of the most important triggers for recalling past experiences. We believe that designing an augmented memory system is a practical problem in real-world-oriented video data management. This chapter also describes a framework for an augmented memory albuming system named the Sceneful Augmented Remembrance Album (SARA). Within the SARA framework, we have developed three modules for retrieving, editing, transporting, and exchanging augmented memories. The Residual Memory and I'm Here! modules each enable a wearer to retrieve video data that he/she wants to recall in the real world. The Ubiquitous Memories module supports editing, transporting, and exchanging video data via real-world objects. Lastly, we discuss future work on the proposed framework and modules.