Showing posts with label haptic.

Friday, 31 May 2013

Multimodal zooming in digital geographic information

As a basic research issue, how well can people integrate and reconcile spatial information from various modalities, and how useful is such integration?

As an applied issue, what is the potential for haptic and auditory navigation within geographic information systems? Can visual information be augmented by the presentation of information via other modalities, namely, haptics and audition, and if so, to what extent?

The research will investigate a particular form of navigation within geographic information systems, namely, zooming. The research aims to investigate non-visual methods of representing or augmenting a visual zoom through the auditory and haptic senses, creating a multimodal zooming mechanism.
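As a rough illustration of what such a mechanism might involve, the sketch below maps a zoom level onto a tone frequency and a normalised force-feedback stiffness. This is a minimal sketch only: the zoom range, frequency range, and function names are illustrative assumptions, not part of the proposed system.

```python
import math

# Hypothetical mapping from zoom level to non-visual cues.
# All constants below are illustrative assumptions.

def zoom_to_audio_pitch(zoom_level, min_zoom=1.0, max_zoom=16.0,
                        low_hz=220.0, high_hz=880.0):
    """Map a zoom level onto a tone frequency (logarithmic, spanning two octaves)."""
    z = max(min_zoom, min(zoom_level, max_zoom))
    fraction = math.log(z / min_zoom) / math.log(max_zoom / min_zoom)
    return low_hz * (high_hz / low_hz) ** fraction

def zoom_to_haptic_stiffness(zoom_level, min_zoom=1.0, max_zoom=16.0):
    """Map a zoom level onto a normalised force-feedback stiffness in [0, 1]."""
    z = max(min_zoom, min(zoom_level, max_zoom))
    return (z - min_zoom) / (max_zoom - min_zoom)

if __name__ == "__main__":
    # Deeper zoom -> higher pitch and stiffer haptic response.
    for level in (1, 2, 4, 8, 16):
        print(level,
              round(zoom_to_audio_pitch(level), 1),
              round(zoom_to_haptic_stiffness(level), 2))
```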

Transcending the Digital Divide

The purpose of this research is to develop, evaluate, and disseminate a non-visual interface for accessing digital information. The aim is to investigate the perceptual and cognitive problems that blind people face when trying to interpret information provided in a multimodal manner. The project also plans to provide touch-sensitive and sound-based network interface and navigation devices that incorporate cognitive wayfinding heuristics. Haptic (force feedback) interfaces will be provided for exploring web pages that consist of map, graphic, iconic, or image products, and sound identifiers will be provided for on-screen windowed, map, and image information. These tasks will contribute to transcending the Digital Divide that increasingly separates blind or vision-impaired people from the growing information-based workplace.

Recent research at UCSB has begun to explore how individuals identify features presented through sound and touch. Other research (e.g. O'Modhrain and Gillespie, 1998; McKinley and Scott, 1998) has used haptics to explore screen objects such as windows, pull-down menus, buttons, and sliders, but map, graphic, and other cartographic representations have not been explored. In particular, the potential of auditory maps of on-screen phenomena (e.g. as would be important in GIS applications) has barely been examined, and few examples exist of combining audio and touch principles to build an interface. While imaginative efforts to build non-visual interfaces have been proceeding, there is as yet little empirical evidence that people without sight can use them effectively (i.e. develop a true representation of the experienced phenomena).

Experiments will be undertaken to test the ability of vision-impaired and sighted people from different age groups to use new interface features such as: (i) a haptic mouse or a touch window tied to auditory communication displays; (ii) digitized real sounds to indicate environmental features at their mapped locations; and (iii) "sound painting" of maps, images, or charts to indicate gradients of phenomena such as temperature, precipitation, pressure, population density, and altitude. Tests will be developed to evaluate: (i) the minimum resolvable area for the haptic interpretation of scenes; (ii) the development of skills for shape tracing in the sound or force-feedback haptic domain; (iii) the possibility of using continuous or discrete sound symbols associated with touch-sensitive pads to learn hierarchically nested screen information (e.g. locations of cities within regions within states within nations); (iv) how dynamic activities such as scrolling, zooming, and searching can be conducted in the haptic or auditory domain; (v) people's ability to explore, comprehend, and make inferences about various non-visual interpretations of complex visual displays (e.g. maps and diagrams); and (vi) the effectiveness of using a haptic mouse with a 2-inch-square motion domain to search a 14-inch screen (i.e. scale effects).
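The "sound painting" idea above can be sketched in a few lines: a gridded value such as elevation is looked up under the cursor and mapped onto a tone frequency, with the cursor position scaled up from a small haptic workspace to the full screen, echoing the 2-inch versus 14-inch scale question. The elevation grid, frequency range, and scale factor below are hypothetical placeholders, not values from the project description.

```python
# Hypothetical "sound painting" sketch: all data and constants are illustrative.

ELEVATION = [  # toy 4 x 4 grid of elevations in metres
    [  0,  10,  40,  80],
    [  5,  30,  90, 150],
    [ 20,  70, 160, 240],
    [ 40, 120, 220, 300],
]

LOW_HZ, HIGH_HZ = 200.0, 1000.0   # tone range used to sonify the gradient
MOUSE_TO_SCREEN = 14.0 / 2.0      # 2-inch haptic workspace driving a 14-inch screen

def value_to_frequency(value, vmin, vmax):
    """Linearly map a data value onto a tone frequency."""
    if vmax == vmin:
        return LOW_HZ
    t = (value - vmin) / (vmax - vmin)
    return LOW_HZ + t * (HIGH_HZ - LOW_HZ)

def sound_at_mouse(mouse_x_in, mouse_y_in):
    """Return the tone frequency for the grid cell under the (scaled) cursor."""
    # Scale the small haptic workspace up to screen coordinates (inches).
    screen_x = mouse_x_in * MOUSE_TO_SCREEN
    screen_y = mouse_y_in * MOUSE_TO_SCREEN
    # Map the 14 x 14 inch screen area onto the 4 x 4 data grid.
    col = min(int(screen_x / 14.0 * 4), 3)
    row = min(int(screen_y / 14.0 * 4), 3)
    flat = [v for r in ELEVATION for v in r]
    return value_to_frequency(ELEVATION[row][col], min(flat), max(flat))

if __name__ == "__main__":
    print(round(sound_at_mouse(0.2, 0.2), 1))   # low elevation -> low tone
    print(round(sound_at_mouse(1.9, 1.9), 1))   # high elevation -> high tone
```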

Tuesday, 21 May 2013

A Haptic and Auditory Maritime Environment for Non Visual Cognitive Mapping of Blind Sailors

Simonnet, M., Jacobson, R.D., Vieilledent, S. and Tisseau, J. (2009) SeaTouch: A Haptic and Auditory Maritime Environment for Non Visual Cognitive Mapping of Blind Sailors. In: Stewart Hornsby, K. et al. (eds.) COSIT 2009, LNCS 5756. Springer-Verlag: Berlin & Heidelberg, pp. 212-226.

Abstract

Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have provided researchers in the spatial community with tools to investigate the learning of space. The issue of the transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measure systematic errors, and preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of getting lost in an egocentric “haptic” view in the virtual environment to improve their performance in the real environment.


Can Virtual Reality Provide Digital Maps To Blind Sailors? A Case Study

Jacobson, R.D., Simonnet, M., Vieilledent, S. and Tisseau, J. (2009) Can Virtual Reality Provide Digital Maps To Blind Sailors? A Case Study. Proceedings of the International Cartographic Congress, 15-21 November 2009, Santiago, Chile. 10pp.

Abstract
This paper presents “SeaTouch”, a virtual haptic and auditory interface to digital maritime charts, designed to help blind sailors prepare for ocean voyages and, ultimately, to navigate autonomously while at sea. It has been shown that blind people mainly encode space relative to their body, but mastering space consists of coordinating body-centered and environmental reference points. Tactile maps are powerful tools to help them encode spatial information; however, only digital charts can be updated during an ocean voyage, and very often the only alternative available is conventional printed media. Virtual reality can present such information through auditory and haptic interfaces, and previous work has shown that virtual navigation facilitates the acquisition of spatial knowledge. In the construction of spatial representations from individuals' physical contact with their environment, the use of Euclidean geometry seems to facilitate mental processing about space. Navigation, however, depends heavily on matching ego- and allo-centered spatial frames of reference to move through and locate oneself in the surroundings. Blindness does not imply a lack of comprehension of spatial concepts, but it does lead people to encounter difficulties in perceiving and updating information about the environment. Without access to the distant landmarks that are available to sighted people, blind people tend to encode spatial relations in an ego-centered spatial frame of reference. In contrast, tactile maps and appropriate exploration strategies allow them to build holistic, configural representations in an allo-centered spatial frame of reference. Position updating during navigation nevertheless remains particularly complicated without vision. Virtual reality techniques can provide blind people with a virtual environment in which to manage and explore their surroundings, and haptic and auditory interfaces provide an immersive virtual navigation experience. In order to help blind sailors coordinate ego- and allo-centered spatial frames of reference, we conceived SeaTouch, a haptic and auditory software environment that allows blind sailors to set up and simulate their itineraries before sailing. In our first experimental condition, we compared the spatial representations built by six blind sailors while exploring a tactile map and the virtual map of SeaTouch; results show that the two conditions were equivalent. In our second experimental condition, we focused on the conditions that favour the transfer of spatial knowledge from a virtual to a real environment. Blind sailors performed a virtual navigation in ‘Northing mode’, where the ship moves on the map, and in ‘Heading mode’, where the map shifts around the sailboat. No significant difference appeared, which suggests that the most important factor in the blind sailors' ability to locate themselves in the real environment is the orientation of the map during the initial encoding. However, we noticed that the subjects who got lost in the virtual environment in the Northing condition slightly improved their performance in the real environment. The analysis of the exploratory movements on the map is congruent with a previous model of the coordination of spatial frames of reference. Moreover, beyond the direct benefits of SeaTouch for the navigation of blind sailors, this study offers new insight into non-visual spatial cognition, and more specifically into the cognitively complex task of coordinating and integrating ego- and allo-centered spatial frames of reference. In summary, the research aims to measure whether a blind sailor can learn a maritime environment with a virtual map as well as with a tactile map. The results tend to confirm this and suggest pursuing investigations of non-visual virtual navigation. Here we present the initial results with one participant.

[VIEW PDF]

Saturday, 18 May 2013

Multimodal Interfaces for Representing and Accessing Geospatial Information

Golledge, R.G., Rice, M., and Jacobson, R.D. (2006) Multimodal Interfaces for Representing and Accessing Geospatial Information. In: Rana, S. and Sharma, J. (eds.) Frontiers of Geographic Information Technology. Springer-Verlag: Berlin & New York, pp 181-208.

 Abstract

Multimodal interfaces have great potential impact on our daily lives and on the education of students in all grades. In particular, they offer significant benefits for people who are disabled. The use of tactile, haptic, and auditory interfaces has the potential to make technology more universally accessible and, to that extent, to mitigate the rapidly expanding digital divide between those who are able to use computers to access the Internet and web page information (i.e., those who are computer literate) and those who are not.

Information technology transformations are affecting how we communicate, how we store and access information, how we become healthier and receive more medical care, how we learn at different stages of our development, how business is conducted, how work is undertaken in order to produce income, how things are built or designed, how data is stored and managed, and how research is conducted. With the increasing emphasis on visualization as the main interface medium for computer-based services, an ethical problem emerges regarding whether or not people who are visually impaired, or who have other tactile, haptic, or auditory impairments, should be increasingly disabled by the trend towards digital communication and information processing. We believe that such groups should not be shut out from the advantages offered by this technology, just as we believe that multimodal interfaces will enrich the understanding of the computer-based input and output of information that is becoming a part of our everyday lives.

 
[VIEW PDF]

Friday, 17 May 2013

A Commentary on the Use of Touch for Accessing On-Screen Spatial Representations: The Process of Experiencing Haptic Maps and Graphics

Golledge, R.G., Rice, M., and Jacobson, R.D. (2005) A Commentary on the Use of Touch for Accessing On-Screen Spatial Representations: The Process of Experiencing Haptic Maps and Graphics. The Professional Geographer, 57 (3). 339-349.

Abstract

The growth of the Internet and the digital revolution have meant increased reliance on electronic representations of information. Geospatial information has been readily adapted to the world of cyberspace, and most Web pages incorporate graphics, images, or maps to represent spatial and spatialized data. But flat computer screens do not facilitate a map or graph experience by those who are visually impaired. The traditional method for compensating for nonvisual access to maps and graphics has been to construct hard-copy tactile maps. In this article, we examine an electronic accommodation for nonvisual users—the haptic map. Using new and off-the-shelf hardware—force feedback and vibrotactile mice—we explore how touch can be combined with virtual representations of shapes and patterns to enable nonvisual access to onscreen map or graphic material.
Key Words: digital representation, haptic maps, visual impairment
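As an informal illustration of the kind of interaction the article describes, the sketch below hit-tests the cursor against a few on-screen map features and returns a spoken label and a vibration strength for whatever lies underneath. The feature list, labels, and cue values are invented for the example and do not come from the study.

```python
# Hypothetical haptic-map sketch: features and cue values are assumptions.

FEATURES = [
    # (name, bounding box (x_min, y_min, x_max, y_max), spoken label, vibration 0-1)
    ("lake", (100, 100, 220, 180), "lake", 0.2),
    ("road", (0, 240, 640, 260), "main road", 0.6),
    ("city", (400, 300, 460, 360), "city centre", 1.0),
]

def cue_under_cursor(x, y):
    """Return (spoken label, vibration strength) for the feature under the cursor."""
    for name, (x0, y0, x1, y1), label, strength in FEATURES:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label, strength
    return None, 0.0   # open space: no vibration, nothing spoken

if __name__ == "__main__":
    print(cue_under_cursor(150, 150))   # ('lake', 0.2)
    print(cue_under_cursor(50, 50))     # (None, 0.0)
```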

[VIEW PDF]

Haptic Soundscapes: Developing novel multi-sensory tools to promote access to geographic information

Jacobson, R.D. (2004) Haptic Soundscapes: Developing novel multi-sensory tools to promote access to geographic information. In: Janelle,D., Warf, B., and Hansen, K (eds.) WorldMinds: Geographical Perspectives on 100 problems. Kluwer: Dordrecht, pp 99-103.

Abstract

This essay explores the critical need to develop new tools for accessing geographic information that has throughout history been conventionally represented by maps. This problem is especially acute for vision-impaired individuals. The need for new tools to access map-like information is driven by the changing nature of maps, from static paper-based products to digital representations that are interactive, dynamic, and distributed across the Internet. This revolution in the content, display, and availability of geographic representations generates both a significant problem and an opportunity. The problem is that for people without sight there is a wealth of information that is inaccessible due to the visual nature of computer displays. At the same time, the digital nature of geographic information provides an opportunity to make information accessible to non-visual users by presenting it through different sensory modalities in computer interfaces, such as speech, touch, sound, and haptics (computer-generated devices that allow users to interact with and feel information).

[VIEW PDF]
 

Thursday, 16 May 2013

Representing Spatial Information Through Multimodal Interfaces: Overview and preliminary results in non-visual interfaces

Jacobson, R.D. (2002) Representing Spatial Information Through Multimodal Interfaces: Overview and preliminary results in non-visual interfaces.  6th International Conference on Information Visualization: Symposium on Spatial/Geographic Data Visualization, IEEE Proceedings, London, 10-12 July, 2002, 730-734.

Abstract

The research discussed here is a component of a larger study exploring the accessibility and usability of spatial data presented through multiple sensory modalities, including haptic, auditory, and visual interfaces. Geographical Information Systems (GIS) and other computer-based tools for spatial display predominantly use vision to communicate information to the user, as sight is the spatial sense par excellence. Ongoing research is exploring the fundamental concepts and techniques necessary to navigate through multimodal interfaces, which are user-, task-, domain-, and interface-specific. This highlights the need for both a conceptual/theoretical schema and extensive usability studies. Preliminary results presented here, exploring feature recognition and shape tracing in non-visual environments, indicate that multimodal interfaces have a great deal of potential for facilitating access to spatial data for blind and visually impaired persons. The research is undertaken with the wider goals of increasing information accessibility and promoting “universal access”.
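One of the tasks mentioned above, shape tracing, can be sketched as follows: the cursor's distance from a target outline is computed, and an "on track" tone is produced whenever the cursor stays within a tolerance band around the shape. The square outline, tolerance value, and feedback mapping are assumptions made purely for illustration.

```python
import math

# Hypothetical shape-tracing feedback sketch; geometry and tolerance are assumed.

SHAPE = [(0, 0), (100, 0), (100, 100), (0, 100), (0, 0)]   # a square outline

def distance_to_segment(p, a, b):
    """Shortest distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def tracing_feedback(cursor, tolerance=10.0):
    """Return an 'on track' tone when the cursor is close to the outline, silence otherwise."""
    d = min(distance_to_segment(cursor, SHAPE[i], SHAPE[i + 1])
            for i in range(len(SHAPE) - 1))
    return "tone" if d <= tolerance else "silence"

if __name__ == "__main__":
    print(tracing_feedback((5, 50)))    # near the left edge -> 'tone'
    print(tracing_feedback((50, 50)))   # centre of the square -> 'silence'
```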

[VIEW PDF]

Multimodal virtual reality for presenting geographic information

Jacobson, R.D., Kitchin, R.M., and Golledge R.G. (2002) Multimodal virtual reality for presenting geographic information.  In: Fisher, P. and Unwin, D. (eds.) Virtual Reality in Geography. Taylor and Francis: London, pp. 382-400.

Abstract

Since the conception of virtual reality (VR) environments, interaction has been predominantly visual and haptic in nature. Only recently have developers and scientists explored non-visual and multimodal VR environments. In this paper we examine these recent developments and assess their viability as geographic tools for people with severe visual impairments. Our own research and the work of others suggest that multimodal VR, where visual interaction is either augmented by, or substituted for, other forms of data such as sound and touch, offers people with severe visual impairments access to geographic information that is in many cases otherwise inaccessible. Such offerings open up opportunities to explore the spatial relations of geographic representations and real-world environments, and could qualitatively improve their quality of life.

[VIEW PDF]

Exploratory user study of haptic and auditory display for multimodal information systems

Jeong, W. and Jacobson, R.D. (2002) Exploratory user study of haptic and auditory display for multimodal information systems. In: McLaughlin, M. L., Hespanha, J.P., and Sukhatme, G.S. (eds.) Touch in virtual Environments: Haptics and the design of interactive systems. IMSC Series in Multimedia, Prentice Hall: New York, pp. 194-204.

 Abstract

Since the inception of virtual reality (VR) environments, interaction has been predominantly visual, especially in conveying spatial information. However, in many situations vision is not enough or is not available. For example, for the visually impaired, over-reliance on visual display denies them access to the information. Even for the general population, when there is no light or only weak light, a visual display is not optimal for conveying information. Recently, a number of researchers have tried to add other modalities, such as sound or haptics, to overcome the limitations of visual display.

[VIEW PDF]

Friday, 3 May 2013

Haptic Soundscapes

Towards making maps, diagrams and graphs accessible to visually impaired people 

The aim of this research project is to develop and evaluate haptic soundscapes, which allow people with little or no vision to interact with maps, diagrams, and graphs displayed via dissemination media such as the World Wide Web, through sound, touch, and force feedback. Although of principal utility for people with severe visual impairments, it is anticipated that this interface will allow informative educational resources for children and people with learning difficulties to be developed and accessed through the Internet. The research project offers a simple yet innovative solution to accessing spatial data without the need for vision. It builds upon previous work carried out in various departments at UCSB, and fosters inter-disciplinary links and cooperation between usually unconnected research groups. The research hopes to further knowledge and understanding in this emerging field and also to offer practical results that will impact people's lives. It is strongly felt that the development of the project will lead to continued external funding, and it is our hope that this project will act as a springboard to further research in which UCSB will be a key component.

Further development, usability testing, and expansion
 
The Haptic Soundscapes project has developed a set of audio-tactile mapping tools to help blind people access spatial information and to aid research in multi-modal spatial cognition. These tools offer blind people access to the geographic world they cannot otherwise fully experience, creating opportunities for orientation, navigation, and education. Spatial knowledge from maps, charts, and graphs is obtained through display and interaction with sound, touch, and force-feedback devices. Individuals can use the audio-tactile mapping tools to explore an unknown environment or to create an audio-tactile map from images displayed on a computer screen. These audio-tactile maps can be disseminated over the Internet or used in educational settings. Next year, several objectives are planned for the Haptic Soundscapes project. These include cognitive experiments to assess a user's ability to navigate within a scene, between adjacent scenes, and between scenes of different scales using the audio-tactile mapping tools. We will also expand the capability of the audio-tactile mapping system to include text-to-speech synthesis and real-time multi-dimensional sound representation. Several off-campus funding proposals will be submitted. Finally, we will showcase the tools developed in the course of this project by expanding our campus demonstrator: an interactive, navigable audio-tactile map of the UCSB campus.
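The navigation between scenes of different scales mentioned above could be sketched roughly as follows: nested scenes are arranged in a hierarchy (for example, cities within regions within countries), and each zoom step is announced through speech. This is a minimal sketch under stated assumptions; the place names and the speak() stand-in are illustrative, not part of the project's actual tools.

```python
# Hypothetical sketch of hierarchically nested audio-tactile scenes.

SCENES = {
    "USA": {
        "California": {"Santa Barbara": {}, "Los Angeles": {}},
        "Colorado": {"Denver": {}},
    }
}

def speak(text):
    """Stand-in for a text-to-speech call."""
    print("SPEECH:", text)

class SceneNavigator:
    """Tracks the current level of the scene hierarchy while zooming in and out."""

    def __init__(self, scenes):
        self.scenes = scenes
        self.path = []          # e.g. ["USA", "California"]

    def _current(self):
        node = self.scenes
        for name in self.path:
            node = node[name]
        return node

    def zoom_in(self, name):
        if name in self._current():
            self.path.append(name)
            speak("Entering " + name)
        else:
            speak(name + " is not in this scene")

    def zoom_out(self):
        if self.path:
            speak("Leaving " + self.path.pop())
        else:
            speak("Already at the top level")

if __name__ == "__main__":
    nav = SceneNavigator(SCENES)
    nav.zoom_in("USA")
    nav.zoom_in("California")
    nav.zoom_in("Santa Barbara")
    nav.zoom_out()
```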