A computational model of non-visual spatial learning through a virtual learning environment (VLE) is presented in this chapter. The inspiration comes from Landmark-Route-Survey (LRS) theory, the most widely accepted theory of spatial learning. An attempt has been made to combine findings and methods from several disciplines, including cognitive psychology, behavioral science, and computer science (specifically virtual reality (VR) technology). The factors that influence spatial learning and the potential of using cognitive maps in modeling spatial learning are described. The motivation for using a VLE and its characteristics are also described briefly, as are the different types of locomotion interfaces to a VLE, with their constraints and benefits. The authors believe that, by incorporating perspectives ranging from cognitive and experimental psychology to computer science, this chapter will appeal to a wide audience: computer engineers concerned with assistive technologies; professionals interested in virtual environments, including architects, city planners, cartographers, high-tech artists, and mobility trainers; and psychologists involved in the study of spatial cognition, cognitive behaviour, and human-computer interfaces.
About 314 million people are visually challenged worldwide; 45 million of them are blind. One out of every three blind people in the world lives in India - approximately 15 million. The inability to travel independently and to interact with the wider world is one of the most significant handicaps caused by visual impairment or blindness, second only to the inability to communicate through reading and writing. The difficulties visually challenged people face when moving through new or unfamiliar locations stem from the fact that spatial information, which is fully available to sighted people, is only partially available to them. Visually challenged people are thus hampered in gathering this crucial information, which makes it very difficult for them to generate efficient cognitive maps of spaces and, therefore, to navigate efficiently within new or unfamiliar spaces. Consequently, many blind people become passive, depending on others for assistance. More than 30% of the blind do not ambulate independently outdoors (Clark-Carter, Heyes & Howarth, 1986; Lahav & Mioduser, 2003).
This constraint can be overcome by communicating spatial knowledge of the surroundings, thereby providing a means to build cognitive maps of spaces and of the possible paths for navigating through those spaces virtually - both essential for developing efficient orientation and mobility skills. A reasonable number of repeated visits to a new space leads to the subconscious formation of its cognitive map. Many researchers have therefore focused on using technology to simulate visits to a new space for building cognitive maps. The strength and efficiency of the cognitive-map-building process depends directly on how closely the simulated environment resembles the real one. However, most of the simulated environments reported by earlier researchers do not fully represent reality. The challenge, therefore, is to enhance and enrich the simulated environment so as to create a near real-life experience.
The fundamental goal of developing a virtual learning environment for visually challenged people is to complement or replace sight with another modality. The visual information therefore needs to be simplified and transformed so as to allow its rendition through alternate sensory channels, usually auditory, haptic, or auditory-haptic. One way to enhance and enrich a simulated environment is to use virtual reality along with advanced technologies such as computer haptics, brain-computer interfaces (BCI), speech processing, and sonification. Such technologies can provide visually challenged people with a learning environment in which to create cognitive maps of unfamiliar areas. We aim to present various research studies, including our own, on communicating spatial knowledge to visually challenged people and evaluating it through a virtual learning environment (VLE), thereby enhancing their spatial behaviour in the real environment. This chapter proposes a taxonomy of spatial learning and addresses the potential of the virtual learning environment as a tool for studying the spatial behaviour of visually challenged people, thereby enhancing their capability to interact with a spatial environment in real life. It would be useful to understand how they learn and acquire basic spatial knowledge in terms of landmarks and the configuration of a spatial layout, and how navigation tasks are improvised. Understanding how such knowledge can be used to externalize and measure virtually perceived cognitive maps is also important.
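As one concrete illustration of the sonification idea mentioned above, the sketch below maps a landmark's position relative to the user onto two simple audio parameters: stereo pan (from relative bearing) and loudness (from distance). The coordinate convention, function name, and linear mappings are our own assumptions for illustration, not a description of any specific system discussed in this chapter.

```python
import math

def sonify_landmark(user_xy, heading_deg, landmark_xy, max_range=50.0):
    """Map a landmark's position relative to a user into simple audio cues.

    Returns (pan, gain):
      pan  in [-1, 1]: -1 = fully left, 0 = straight ahead, +1 = fully right
      gain in [0, 1]:  1 = at the user's position, 0 = at or beyond max_range
    """
    dx = landmark_xy[0] - user_xy[0]
    dy = landmark_xy[1] - user_xy[1]
    dist = math.hypot(dx, dy)
    # Bearing measured clockwise from "north" (+y axis), in degrees.
    bearing = math.degrees(math.atan2(dx, dy))
    # Relative bearing with respect to the user's heading, wrapped to [-180, 180).
    rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    # Pan saturates at full left/right for landmarks 90 degrees or more to the side.
    pan = max(-1.0, min(1.0, rel / 90.0))
    # Loudness fades linearly with distance.
    gain = max(0.0, 1.0 - dist / max_range)
    return pan, gain
```

For example, a landmark at (10, 10) for a user at the origin facing "north" lies 45 degrees to the right, giving a pan of 0.5 and a gain that decreases with its distance of about 14.1 units. A real system would feed such parameters to a spatial-audio engine rather than compute them in isolation.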
The following questions are addressed in this chapter:
Does a virtual learning environment (VLE) contribute to communicating spatial knowledge and thereby to the formation of a cognitive map of a novel space?
What are the major factors that influence the communication of spatial knowledge to visually challenged people through a VLE?
What factors mediate the enhancement of the navigation performance of visually challenged people?
Is learning via a VLE more effective, accurate, interesting, and enjoyable than learning via conventional methods?
How is the effectiveness of cognitive maps measured?
Can the trajectory of subjects be considered a cognitive map?
Does the type of locomotion interface affect the accuracy of spatial learning?
Is navigating through a treadmill-style locomotion interface less disruptive than navigating via conventional devices?