
Friday, 31 May 2013

Transcending the Digital Divide

The purpose of this research is to develop, evaluate, and disseminate a non-visual interface for accessing digital information. The aim is to investigate the perceptual and cognitive problems that blind people face when trying to interpret information provided in a multimodal manner. The project also plans to provide touch-sensitive and sound-based network interface and navigation devices that incorporate cognitive wayfinding heuristics. Haptic (force feedback) interfaces will be provided for exploring web pages that consist of map, graphic, iconic, or image products. Sound identifiers for on-screen windowed, map, and image information will also be provided. These tasks will contribute to transcending the Digital Divide that increasingly separates blind or vision-impaired people from the growing information-based workplace. Recent research at UCSB has begun to explore how individuals identify features presented through sound and touch. Other research (e.g. O'Modhrain and Gillespie, 1998; McKinley and Scott, 1998) has used haptics to explore screen objects such as windows, pulldown menus, buttons, and sliders; but map, graphic, and other cartographic representations have not been explored. In particular, the potential of auditory maps of on-screen phenomena (e.g. as would be important in GIS applications) has barely been examined, and few examples exist of combining audio and touch principles to build an interface. While imaginative efforts to build non-visual interfaces have been proceeding, there is as yet little empirical evidence that people without sight can use them effectively (i.e. develop a true representation of the experienced phenomena).
Experiments will be undertaken to test the ability of vision-impaired and sighted people from different age groups to use these new interfaces and features, such as: (i) a haptic mouse or a touch window tied to auditory communication displays; (ii) digitized real sounds to indicate environmental features at their mapped locations; and (iii) "sound painting" of maps, images, or charts to indicate gradients of phenomena such as temperature, precipitation, pressure, population density, and altitude. Tests will be developed to evaluate: (i) the minimum resolvable area for the haptic interpretation of scenes; (ii) the development of skills for shape tracing in the sound or force-feedback haptic domain; (iii) the possibility of using continuous or discrete sound symbols associated with touch-sensitive pads to learn hierarchically nested screen information (e.g. locations of cities within regions within states within nations); (iv) how dynamic activities such as scrolling, zooming, and searching can be conducted in the haptic or auditory domain; (v) people's ability to explore, comprehend, and make inferences about various non-visual interpretations of complex visual displays (e.g. maps and diagrams); and (vi) the effectiveness of using a haptic mouse with a 2" square motion domain to search a 14" screen (i.e. scale effects).
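To illustrate the "sound painting" idea described above: a data value at the cursor's map location can be mapped onto an audible parameter such as pitch, so that gradients of temperature or altitude become gradients of tone. The sketch below is a hypothetical illustration only (the function name, frequency range, and toy data are assumptions, not details from the project):

```python
def value_to_pitch(value, vmin, vmax, f_low=220.0, f_high=880.0):
    """Map a data value (e.g. temperature at a map cell) linearly
    onto an audible frequency range, so higher values sound higher."""
    t = (value - vmin) / (vmax - vmin)  # normalize to [0, 1]
    return f_low + t * (f_high - f_low)

# A toy 3x3 "temperature map": moving the cursor over a cell
# would trigger a tone at the mapped pitch.
temperature = [
    [10.0, 15.0, 20.0],
    [12.0, 18.0, 25.0],
    [14.0, 22.0, 30.0],
]
pitch = value_to_pitch(temperature[2][2], vmin=10.0, vmax=30.0)
print(pitch)  # hottest cell maps to the top of the range: 880.0
```

In a real interface the returned frequency would drive a synthesizer or audio API as the pointer moves across the display.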

Off-Route Strategies in Non-Visual Navigation

The project addresses the effects of learning method on route comprehension by visually impaired people, and it will determine whether changes in geographic scale alter the effectiveness of selected learning media. An understanding of how different methods of learning affect route comprehension will allow current spatial knowledge acquisition theory and orientation and mobility training to be assessed and, if necessary, improved. Traversing space is one of the most cognitively demanding tasks faced by visually impaired people, and it often invokes a fear of being lost or disorientated. For these reasons there is a need to identify effective strategies of spatial learning that can contribute to the mobility and quality of life of visually impaired people. In the first experiment, 24 visually impaired people will learn three short routes across a university campus, in counterbalanced order. Each route will be learned using a different learning method. The 24 subjects will be divided into 4 groups, each of which will learn the routes in a different order. The 3 conditions will be (1) pointing to places along the route, (2) making a map of the route, and (3) verbally describing the route. A further (control) group of ten visually impaired subjects will learn the routes without any given strategy. Each trial will be video recorded. The three strategies selected are "off-route" strategies. Participants' route-learning performance will be measured in several ways: the number of trials required to achieve successful route learning; the number of errors made; the types of errors; self-reported confidence measures; and assessment of performance, hesitancy, and confidence by independent judges. In the second experiment, 16 participants will learn a route 1.4 miles long through a complex urban environment. Participants will be divided into two conditions. In the first condition, they will learn the route using the most successful strategy from Experiment 1.
In the second condition, they will learn the route using no given strategy. Sample sizes in both experiments are relatively small because of the difficulty of recruiting visually impaired participants, but the number of participants and number of trials will be greater than in previous wayfinding experiments and should therefore provide more conclusive results. By collecting data in both a small-scale environment (a university campus) and a large-scale environment (a suburban neighborhood), we may find that spatial knowledge acquisition draws on different cognitive tasks at different scales. For the development of an effective orientation and mobility training program, these tasks may be operationalized via one or more simple geographically based environmental learning procedures. The research addresses important theoretical questions relating to spatial learning and cognition, providing further insights into how visually impaired people construct, store, and utilize spatial knowledge. In so doing, it will address practical issues relating to the improvement of current orientation and mobility training.
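The counterbalanced ordering described above is commonly produced with a cyclic Latin square, so that each learning method appears equally often in each serial position across groups. The sketch below is illustrative only; the study's actual group assignment scheme is not specified here:

```python
def latin_square_orders(conditions):
    """Generate a cyclic Latin square of presentation orders:
    each condition appears exactly once in each position."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

strategies = ["pointing", "map-making", "verbal description"]
for order in latin_square_orders(strategies):
    print(order)
```

With three conditions this yields three orders; participants would then be rotated through them in equal numbers.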

PUBLICATIONS

Blades, M., Lippa, Y., Golledge, R.G., Jacobson, R.D., and Kitchin, R.M. (2002) Wayfinding by people with visual impairments: The effect of spatial tasks on the ability to learn a novel route. Journal of Visual Impairment and Blindness 96, 407-419.

Jacobson, R.D., Lippa, Y., Golledge, R.G., Kitchin, R.M., and Blades, M. (2001) Rapid development of cognitive maps in people with visual impairments when exploring novel geographic spaces. IAPS Bulletin of People-Environment Studies (Special Issue on Environmental Cognition) 18, 3-6.

Golledge, R.G., Jacobson, R.D., Kitchin, R.M., and Blades, M. (2000) Cognitive maps, spatial abilities, and human wayfinding. Geographical Review of Japan, Ser. B: The English Journal of the Association of Japanese Geographers, 73 (2), 93-104.

PARTNERS

Department of Geography, University of California at Santa Barbara, USA
Department of Psychology, University of California at Santa Barbara, USA
Department of Geography, Florida State University, USA
Department of Psychology, University of Sheffield, UK
Department of Geography, National University of Ireland, Maynooth, Ireland

Tuesday, 21 May 2013

Comparing Tactile Maps and Haptic Digital Representations of a Maritime Environment

Simonnet, M., Vieilledent, S., and Tisseau, J. (2011) Comparing tactile maps and haptic digital representations of a maritime environment. Journal of Visual Impairment and Blindness, 105 (4), 222-234.

Abstract

A map exploration and representation exercise was conducted with participants who were totally blind. Representations of maritime environments were presented either with a tactile map or with a digital haptic virtual map. We assessed the knowledge of spatial configurations using a triangulation technique. The results revealed that both types of map learning were equivalent.
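The triangulation technique mentioned above typically infers where a participant's cognitive map places a landmark by intersecting pointing bearings taken from two known locations. The sketch below illustrates only the underlying geometry; it is an assumption for illustration, not the authors' analysis code:

```python
import math

def bearing_intersection(p1, b1, p2, b2):
    """Estimate a landmark's represented position from two pointing
    bearings (degrees clockwise from north) taken at known points."""
    # Direction vectors for each bearing (x = east, y = north).
    d1 = (math.sin(math.radians(b1)), math.cos(math.radians(b1)))
    d2 = (math.sin(math.radians(b2)), math.cos(math.radians(b2)))
    # Solve p1 + t*d1 = p2 + s*d2 for t using a 2x2 determinant.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # bearings are parallel; no unique intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two pointing trials at (0, 0) and (10, 0) toward the same landmark:
# bearings 45 deg and 315 deg intersect at roughly (5, 5).
print(bearing_intersection((0, 0), 45.0, (10, 0), 315.0))
```

The distance between such an estimated position and the landmark's true location then gives a configuration-error score for the learned map.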


Non-Visual Geographies

Jacobson, R.D. (2010) Non-Visual Geographies In: Warf, B. (ed.) Encyclopedia of Geography, Sage: London.

Abstract

Non-visual geographies concern the construction, interpretation, and meaning of non-visual landscapes: the role of sensory and perceptual modes other than vision in the construction of geographic space. The topic sits at the boundary between social theory and behavioral geography, examining the ways in which non-visual modes of information acquisition and processing reflect geographic environments and, in turn, shape those same places by structuring people's subjective understanding and behavior and their symbolic understanding of space. This understanding and representation of geographic space can be approached from several conceptual perspectives, including behavioral geography and post-structuralism. At the individual level we gather information in an environment from all our senses other than vision, including hearing, smell, taste, and touch, as well as kinesthesia (muscle memory). Our spatial behaviour is informed by these other sense modalities, facilitating an understanding of space and place.



Tuesday, 14 May 2013

Spatial cognition through tactile mapping

Jacobson, R.D. (1992) Spatial cognition through tactile mapping. Swansea Geographer 29, 79-88.

Abstract
This paper describes an experiment to determine whether a tactile map of the University College of Swansea campus increases the spatial awareness of visually handicapped subjects.
View paper [PDF]