
Friday, 31 May 2013

Transcending the Digital Divide

The purpose of this research is to develop, evaluate, and disseminate a non-visual interface for accessing digital information. The aim is to investigate the perceptual and cognitive problems that blind people face when trying to interpret information provided in a multimodal manner. The project also plans to provide touch-sensitive and sound-based network interface and navigation devices that incorporate cognitive wayfinding heuristics. Haptic (force feedback) interfaces will be provided for exploring web pages that consist of map, graphic, iconic, or image products. Sound identifiers for on-screen windowed, map, and image information will also be provided. These tasks will contribute to transcending the Digital Divide that increasingly separates blind or vision-impaired people from the growing information-based workplace.

Recent research at UCSB has begun to explore how individuals identify features presented through sound and touch. Other research (e.g. O'Modhrain and Gillespie, 1998; McKinley and Scott, 1998) has used haptics to explore screen objects such as windows, pulldown menus, buttons, and sliders; but map, graphic, and other cartographic representations have not been explored. In particular, the potential of auditory maps of on-screen phenomena (e.g. as would be important in GIS applications) has barely been examined, and few examples exist of combining audio and touch principles to build an interface. While imaginative efforts to build non-visual interfaces have been proceeding, there is as yet little empirical evidence that people without sight can use them effectively (i.e. develop a true representation of the experienced phenomena).

Experiments will be undertaken to test the ability of vision-impaired and sighted people from different age groups to use these new interfaces and features, such as: (i) the haptic mouse or a touch window tied to auditory communication displays; (ii) digitized real sounds to indicate environmental features at their mapped locations; and (iii) "sound painting" of maps, images, or charts to indicate gradients of phenomena like temperature, precipitation, pressure, population density, and altitude. Tests will be developed to evaluate: (i) the minimum resolvable area for the haptic interpretation of scenes; (ii) the development of skills for shape tracing in the sound or force-feedback haptic domain; (iii) the possibility of using continuous or discrete sound symbols associated with touch-sensitive pads to learn hierarchically nested screen information (e.g. locations of cities within regions within states within nations); (iv) how dynamic activities such as scrolling, zooming, and searching can be conducted in the haptic or auditory domain; (v) people's ability to explore, comprehend, and make inferences about various non-visual interpretations of complex visual displays (e.g. maps and diagrams); and (vi) the effectiveness of using a haptic mouse with a 2" square motion domain to search a 14" screen (i.e. scale effects).
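
The "sound painting" idea lends itself to a short illustration. Below is a minimal Python sketch of one way a gridded data value under the cursor could drive pitch; the toy grid, the value range, and the linear value-to-frequency mapping are all assumptions made for illustration, not the project's actual design.

    def value_to_pitch_hz(value, vmin, vmax, f_low=220.0, f_high=880.0):
        """Linearly map a data value onto an audible pitch range (A3-A5)."""
        t = (value - vmin) / (vmax - vmin)  # normalise to 0..1
        return f_low + t * (f_high - f_low)

    # A toy 4x4 "temperature" raster standing in for a map layer.
    grid = [
        [12.0, 14.5, 16.0, 15.2],
        [13.1, 15.0, 17.4, 16.8],
        [14.2, 16.3, 18.9, 18.1],
        [15.0, 17.1, 19.5, 19.0],
    ]
    vmin = min(min(row) for row in grid)
    vmax = max(max(row) for row in grid)

    # As a touch or mouse cursor sweeps the map, pitch rises with the
    # underlying value, conveying the gradient non-visually.
    for r, row in enumerate(grid):
        for c, value in enumerate(row):
            hz = value_to_pitch_hz(value, vmin, vmax)
            print(f"cell ({r},{c}): {value:5.1f} -> {hz:6.1f} Hz")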

Tuesday, 21 May 2013

Can Virtual Reality Provide Digital Maps To Blind Sailors? A Case Study

Jacobson, R.D., Simonnet, M., Vieilledent, S. and Tisseau, J. (2009) Can Virtual Reality Provide Digital Maps To Blind Sailors? A Case Study. Proceedings of the International Cartographic Congress, 15-21 November 2009, Santiago, Chile. 10pp.

Abstract
This paper presents “SeaTouch”, a virtual haptic and auditory interface to digital maritime charts, designed to help blind sailors prepare for ocean voyages and, ultimately, to navigate autonomously while at sea. It has been shown that blind people mainly encode space relative to their body, yet mastering space consists of coordinating body-based and environmental reference points. Tactile maps are powerful tools for helping them encode spatial information. However, only digital charts can be updated during an ocean voyage, and yet very often the only alternative available is conventional printed media. Virtual reality can present information using auditory and haptic interfaces, and previous work has shown that virtual navigation facilitates the acquisition of spatial knowledge. In the construction of spatial representations from individuals' physical contact with their environment, the use of Euclidean geometry seems to facilitate mental processing about space. However, navigation takes great advantage of matching ego-centered and allo-centered spatial frames of reference to move about and locate oneself in the surroundings. Blindness does not imply a lack of comprehension of spatial concepts, but it leads people to encounter difficulties in perceiving and updating information about the environment. Without access to the distant landmarks available to people with sight, blind people tend to encode spatial relations in an ego-centered spatial frame of reference. By contrast, tactile maps and appropriate exploration strategies allow them to build holistic, configural representations in an allo-centered spatial frame of reference. However, position updating during navigation remains particularly complicated without vision. Virtual reality techniques can provide a virtual environment in which blind people can manage and explore their surroundings, and haptic and auditory interfaces provide them with an immersive virtual navigation experience. In order to help blind sailors coordinate ego- and allo-centered spatial frames of reference, we conceived SeaTouch, haptic and auditory software adapted so that blind sailors are able to set up and simulate their itineraries before sailing.

In our first experimental condition, we compared the spatial representations built by six blind sailors during the exploration of a tactile map and of the virtual map of SeaTouch. Results show that these two conditions were equivalent. In our second experimental condition, we focused on the conditions that favour the transfer of spatial knowledge from a virtual to a real environment. In this respect, blind sailors performed a virtual navigation in ‘Northing mode’, where the ship moves across the map, and in ‘Heading mode’, where the map shifts around the sailboat. No significant difference appeared. This suggests that the most important factor for blind sailors locating themselves in the real environment is the orientation of the map during the initial encoding. However, we noticed that the subjects who got lost in the virtual environment in the Northing condition slightly improved their performance in the real environment. The analysis of the exploratory movements on the map is congruent with a previous model of the coordination of spatial frames of reference. Moreover, beyond the direct benefits of SeaTouch for the navigation of blind sailors, this study offers new insight into non-visual spatial cognition, and more specifically into the cognitively complex task of coordinating and integrating ego- and allo-centered spatial frames of reference. In summary, the research aims to measure whether a blind sailor can learn a maritime environment with a virtual map as well as with a tactile map. The results tend to confirm this and suggest pursuing investigations of non-visual virtual navigation. Here we present the initial results with one participant.
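
The distinction between the two display modes can be made concrete with a small geometric sketch. The Python fragment below is a hedged illustration only; the function names are hypothetical, not SeaTouch's actual API. In ‘Northing mode’ the chart stays north-up and the ship symbol moves; in ‘Heading mode’ the chart is translated and rotated so the ship stays fixed at the origin, bow-up.

    import math

    def northing_view(landmark, ship):
        """North-up (allo-centered): chart coordinates are shown as-is;
        only the ship symbol moves across the fixed map."""
        return landmark

    def heading_view(landmark, ship, heading_deg):
        """Bow-up (ego-centered): the chart is shifted and rotated so the
        ship sits at the origin with its bow pointing up. Heading is
        measured clockwise from north; x is east, y is north."""
        dx, dy = landmark[0] - ship[0], landmark[1] - ship[1]
        a = math.radians(heading_deg)  # rotate the chart so the bow points up
        return (dx * math.cos(a) - dy * math.sin(a),
                dx * math.sin(a) + dy * math.cos(a))

    buoy = (3.0, 4.0)   # chart position, nautical miles
    ship = (1.0, 1.0)
    print(northing_view(buoy, ship))       # (3.0, 4.0): the chart never moves
    print(heading_view(buoy, ship, 90.0))  # ~(-3.0, 2.0): ahead-left when sailing east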

[VIEW PDF]

Thursday, 16 May 2013

Representing Spatial Information Through Multimodal Interfaces: Overview and preliminary results in non-visual interfaces

Jacobson, R.D. (2002) Representing Spatial Information Through Multimodal Interfaces: Overview and preliminary results in non-visual interfaces. 6th International Conference on Information Visualization: Symposium on Spatial/Geographic Data Visualization, IEEE Proceedings, London, 10-12 July 2002, 730-734.

Abstract

The research discussed here is a component of a larger study exploring the accessibility and usability of spatial data presented through multiple sensory modalities, including haptic, auditory, and visual interfaces. Geographical Information Systems (GIS) and other computer-based tools for spatial display predominantly use vision to communicate information to the user, as sight is the spatial sense par excellence. Ongoing research is exploring the fundamental concepts and techniques necessary to navigate through multimodal interfaces, which are user-, task-, domain-, and interface-specific. This highlights the need both for a conceptual/theoretical schema and for extensive usability studies. Preliminary results presented here, exploring feature recognition and shape tracing in non-visual environments, indicate that multimodal interfaces have a great deal of potential for facilitating access to spatial data for blind and visually impaired persons. The research is undertaken with the wider goals of increasing information accessibility and promoting “universal access”.
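
As an illustration of how shape tracing might be supported non-visually, the Python sketch below sonifies the cursor's distance from a target outline, so a user can "hold the line" by keeping the feedback strong. The circular target, the falloff distance, and the feedback mapping are assumptions for demonstration, not the interfaces evaluated in the paper.

    import math

    def distance_to_outline(x, y, cx=0.0, cy=0.0, r=5.0):
        """Distance from cursor (x, y) to a circular outline of radius r."""
        return abs(math.hypot(x - cx, y - cy) - r)

    def feedback_level(dist, falloff=2.0):
        """Feedback (volume or force) rises as the cursor nears the
        outline and fades to nothing beyond the falloff distance."""
        return max(0.0, 1.0 - dist / falloff)

    # Simulated cursor path drifting across the outline along the x-axis.
    for x in [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]:
        d = distance_to_outline(x, 0.0)
        print(f"x={x:.1f}  distance={d:.2f}  feedback={feedback_level(d):.2f}")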

[VIEW PDF]