
Sunday, 16 October 2016

Integrating User-Contributed Geospatial Data with Assistive Geotechnology Using a Localized Gazetteer

Rice, M.T., Hammill, W.C., Aburizaiza, A.O., Schwarz, S., and Jacobson, R.D. (2011) Integrating User-Contributed Geospatial Data with Assistive Geotechnology Using a Localized Gazetteer. Advances in Cartography and GIScience, Volume 1, 279-291.


We present a methodology for using cartographic-based processes to alert the vision-impaired as they navigate through areas with transitory hazards. The focus of this methodology is the use of gazetteer-based georeferencing to integrate existing local cartographic resources with user-contributed geospatial data. User-contributed geospatial data is of high interest because it leverages local geographic expertise and offers significant advantages in dealing with hazard information in real-time. For blind and vision-impaired people, information about transitory hazards encountered while navigating through a public environment can be contributed by end-users in the same public environment, and quickly integrated into existing cartographic resources. For this project, we build collections of user-contributed geospatial updates from email, voice communication, text messages, and social networks. Other necessary technologies for this project include text-to-voice software, global positioning devices, and the wireless Internet. The methodology described in this paper can deliver usable, cautionary reports of hazards, obstacles, or other time-variable concerns along a pedestrian network. Using the George Mason University campus as a study area, this paper describes how transitory events can be presented in usable form to a vision-impaired pedestrian within a usably short period of time after the event is reported. Buildings and other destinations of interest can be registered in a robust, eXtensible Markup Language (XML)-based, localized gazetteer. Walking networks, parking lots, roads, and landmarks are mapped as vector-based digital information. Any events or changes to the base map, whether planned and disseminated through official channels or reported by end-users, can be linked to a location in the network as established by the attributes cataloged in the localized gazetteer, and presented on an existing base map or in an assistive technology environment. 
For mobile applications, a vision-impaired pedestrian with a Geographic Information System (GIS) and a Global Positioning System (GPS)-enabled assistive device can receive an alert or warning about proximity to reported obstacles. This warning might include other information, such as alternative paths and relative directions to proceed, also referenced through the localized gazetteer. This research provides insight into the challenges associated with integrating user-contributed geospatial information into a comprehensive system for use by the blind or vision-impaired.
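As a rough sketch of the proximity-alert step described above (the coordinates, field names, and 50 m radius below are illustrative assumptions, not taken from the paper), a GPS fix can be checked against georeferenced hazard reports with a simple great-circle distance test:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def proximity_alerts(position, hazards, radius_m=50.0):
    """Return reported hazards within radius_m of the pedestrian's GPS fix."""
    lat, lon = position
    return [h for h in hazards
            if haversine_m(lat, lon, h["lat"], h["lon"]) <= radius_m]

# Hypothetical user-contributed hazard reports, already georeferenced
# via the localized gazetteer
hazards = [
    {"name": "construction barricade", "lat": 38.8304, "lon": -77.3079},
    {"name": "temporary fencing", "lat": 38.8400, "lon": -77.3200},
]
print(proximity_alerts((38.8305, -77.3080), hazards))
```

A real assistive device would run such a check continuously against fresh GPS fixes and speak the matching hazard names.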


Monday, 12 August 2013

Crowdsourcing techniques for augmenting traditional accessibility maps with transitory obstacle information

Jacobson, R.D., Caldwell, D.R., McDermott, S.D., Paez, F.I., Aburizaiza, A.O., Curtin, K.M., Stefanidis, A., and Qin, H. (2013) Crowdsourcing techniques for augmenting traditional accessibility maps with transitory obstacle information. Cartography and Geographic Information Science, 40(3): 210-219.


One of the most scrutinized contemporary techniques for geospatial data collection and production is crowdsourcing. Crowdsourcing inverts traditional top-down geospatial data production and distribution methods by emphasizing the participation of the end user or community. The technique has been shown to be particularly useful in the domain of accessibility mapping, where it can augment traditional mapping methods and systems by providing information about transitory obstacles in the built environment. This paper presents techniques and applications of crowdsourcing and related methods for improving the presence of transitory obstacles in accessibility mapping systems. These obstacles are very difficult to incorporate into traditional mapping workflows, since they typically appear in an unplanned manner and disappear just as quickly. Nevertheless, they present a major impediment to navigating an unfamiliar environment. Fortunately, such obstacles can be reported, defined, and captured through a variety of crowdsourcing techniques, including gazetteer-based geoparsing and active social media harvesting, and then referenced in a crowdsourced mapping system. These techniques are presented, along with context from research in tactile cartography and geo-enabled accessibility systems.
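A minimal sketch of the active social media harvesting idea, assuming a toy obstacle vocabulary and a tiny gazetteer of place names (both hypothetical; the paper's actual pipeline is far richer):

```python
# Hypothetical obstacle keywords and place names; real harvesting uses
# much larger vocabularies and platform APIs.
OBSTACLE_TERMS = {"barricade", "fencing", "construction", "closed", "blocked"}
GAZETTEER = {"Fenwick Library", "Patriot Circle", "Johnson Center"}

def harvest(posts):
    """Keep posts that mention both an obstacle term and a known place name."""
    hits = []
    for text in posts:
        lower = text.lower()
        if any(term in lower for term in OBSTACLE_TERMS):
            places = [p for p in GAZETTEER if p.lower() in lower]
            if places:
                hits.append((text, places))
    return hits

posts = [
    "Sidewalk blocked by construction outside Fenwick Library",
    "Great weather on Patriot Circle today",
]
print(harvest(posts))
```

Posts that pass both filters would then go to the geoparsing and map-update stages.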


Friday, 31 May 2013

Health and Geospatial Information

Collaborators at the Faculty of Medicine are using GIS and geospatial techniques to investigate associations between variables in the geographic environment, such as access to green space, and characteristics of population health.

Potestio, M.L., Patel, A.B., Powell, C.D., McNeil, D.A., Jacobson, R.D., and McLaren, L. (2009) Is there an association between spatial access to parks/green space and childhood overweight/obesity in Calgary, Canada? International Journal of Behavioral Nutrition and Physical Activity, 6:77. doi:10.1186/1479-5868-6-77

Community Responses to Tourism Development in the Canadian Arctic

Community Action GIS in the Arctic

Dr. Emma Stewart's project explores how to achieve tourism development in the Canadian Arctic that is both sustainable and acceptable to local communities, and how to engage citizens effectively in the public planning process. Given predictions that Arctic waters could be substantially free of ice by 2050, the research focuses on the effects of increased tourism and shipping activity on Arctic communities. Her research aims to explore community responses to cruise tourism using a modified Public Participation Geographic Information Systems approach.

Study sites in the Canadian High Arctic: Pond Inlet, Cambridge Bay, and Churchill

Overview of a Community Action GIS, a form of Participatory Geographic Information System (PGIS)

Department of Geography, University of Calgary
Arctic Institute of North America
Trudeau Foundation

Stewart, E., Jacobson, R.D., and Draper, D. (2008) Public participation geographic information systems (PPGIS): challenges of implementation in Churchill, Manitoba. The Canadian Geographer / Le Géographe canadien, 52(3), 351-366.

CAG member profile The Canadian Association of Geographers Newsletter, Jan 2006 [PDF]

Multimodal speech interfaces to GIS


Ken Sam's project involves leveraging existing commercial off-the-shelf (COTS) web-GIS components and the open-specification Speech Application Language Tags (SALT) as building blocks for creating a multimodal web-GIS application. The project addresses how the different technology components were applied to create a multimodal interface for navigation, interaction, and feedback in the web-based GIS application.

Voice-enabled multimodal web-GIS application: a speech-driven GIS interface
In most computing and information technology environments, data is presented in either text or graphic format as a means of conveying information to end users. This has been the traditional paradigm of data display and visualization in the computing world. Efforts have been made in the software industry to design better navigation interfaces for software products and to improve their overall user-friendliness. With geospatial data, additional dimensions are introduced in the presentation and display of the data. Because of this added complexity, research is still ongoing to improve the interfaces for, and the visualization and interpretation of, geospatial data. A normal-vision user can usually view and interpret geospatial data without much difficulty; for people who are visually impaired, however, visualizing and navigating a map is a huge challenge. The design and usability of GIS applications has traditionally been tailored to keyboard and mouse interaction in an office environment. To help with the visualization of geospatial data and navigation of a GIS application, this project presents the results of a prototype application that incorporates voice as another mode of interacting with a web-GIS application. While voice is not a replacement for the mouse and keyboard interface, it can act as an enhancement or augmentation that improves the accessibility and usability of an application. The multimodal approach of combining voice with other user interfaces for navigation and data presentation benefits the interpretation and visualization of geospatial data and makes GIS easier to use for all users.
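To illustrate the general pattern of voice-driven map navigation (the command grammar and `MapView` class here are hypothetical stand-ins; the actual prototype used SALT in a web browser):

```python
# Sketch: recognized speech utterances are dispatched to map actions.
class MapView:
    def __init__(self):
        self.zoom = 5
        self.center = [0.0, 0.0]  # latitude, longitude

    def handle(self, utterance):
        """Apply a spoken command ('zoom in', 'pan north', ...) to the map."""
        words = utterance.lower().split()
        if "zoom" in words and "in" in words:
            self.zoom += 1
        elif "zoom" in words and "out" in words:
            self.zoom = max(1, self.zoom - 1)
        elif "pan" in words and "north" in words:
            self.center[0] += 1.0
        elif "pan" in words and "south" in words:
            self.center[0] -= 1.0
        return f"zoom={self.zoom}, center={self.center}"

m = MapView()
print(m.handle("zoom in"))
print(m.handle("pan north"))
```

In the real application the recognized phrase would come from the speech engine and the action would update the web map, with spoken feedback confirming the result.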

Jacobson, R.D., and Sam, K. (2006) Multimodal Web-GIS: AugmentingMap Navigation and Spatial Data Visualization with Voice Control, AutoCarto 2006, June 26-28, Electronic Proceedings.

Multimodal zooming in digital geographic information

As a basic research issue, how well can people integrate and reconcile spatial information from various modalities, and how useful is such integration?

As an applied issue, what is the potential for haptic and auditory navigation within geographic information systems? Can visual information be augmented by the presentation of information via other modalities, namely, haptics and audition, and if so, to what extent?

The research will investigate a particular form of navigation within geographic information systems, namely, zooming. The research aims to investigate non-visual methods of representing or augmenting a visual zoom through the auditory and haptic senses, creating a multimodal zooming mechanism.
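One way such a multimodal zoom could be sketched, assuming a hypothetical mapping from zoom level to audio pitch and haptic force (the constants are illustrative, not from the project):

```python
# Represent zoom level non-visually: pitch rises and haptic resistance
# stiffens as the user zooms in. Constants are hypothetical.
BASE_FREQ_HZ = 220.0   # pitch at the outermost zoom level
MAX_ZOOM = 10

def zoom_to_feedback(zoom):
    """Map a zoom level (1..MAX_ZOOM) to (frequency_hz, haptic_force_0_to_1)."""
    zoom = max(1, min(MAX_ZOOM, zoom))
    freq = BASE_FREQ_HZ * 2 ** ((zoom - 1) / 4.0)  # quarter-octave per step
    force = (zoom - 1) / (MAX_ZOOM - 1)            # stiffer as you zoom in
    return freq, force

for z in (1, 5, 10):
    print(z, zoom_to_feedback(z))
```

The returned values would drive a tone generator and a force-feedback device in parallel with the visual zoom, so the change of scale is perceivable through audition and haptics as well.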

Transcending the Digital Divide

The purpose of this research is to develop, evaluate, and disseminate a non-visual interface for accessing digital information. The aim is to investigate the perceptual and cognitive problems that blind people face when trying to interpret information provided in a multimodal manner. The project also plans to provide touch-sensitive and sound-based network interface and navigation devices that incorporate cognitive wayfinding heuristics. Haptic (force feedback) interfaces will be provided for exploring web pages that consist of map, graphic, iconic, or image products. Sound identifiers for on-screen windowed, map, and image information will also be provided. These tasks will contribute to transcending the Digital Divide that increasingly separates blind or vision-impaired people from the growing information-based workplace. Recent research at UCSB has begun to explore how individuals identify features presented through sound and touch. Other research (e.g. O'Modhrain and Gillespie, 1998; McKinley and Scott, 1998) has used haptics to explore screen objects such as windows, pulldown menus, buttons, and sliders; but map, graphic, and other cartographic representations have not been explored. In particular, the potential of auditory maps of on-screen phenomena (e.g. as would be important in GIS applications) has barely been examined, and few examples exist of combining audio and touch principles to build an interface. While imaginative efforts to build non-visual interfaces have been proceeding, there is as yet little empirical evidence that people without sight can use them effectively (i.e. develop a true representation of the experienced phenomena).
Experiments will be undertaken to test the ability of vision-impaired and sighted people from different age groups to use these new interfaces and features, such as: (i) a haptic mouse or a touch window tied to auditory communication displays; (ii) digitized real sounds to indicate environmental features at their mapped locations; and (iii) "sound painting" of maps, images, or charts to indicate gradients of phenomena like temperature, precipitation, pressure, population density, and altitude. Tests will be developed to evaluate: (i) the minimum resolvable area for the haptic interpretation of scenes; (ii) the development of skills for shape tracing in the sound or force-feedback haptic domain; (iii) the possibility of using continuous or discrete sound symbols associated with touch-sensitive pads to learn hierarchically nested screen information (e.g. locations of cities within regions within states within nations); (iv) how dynamic activities such as scrolling, zooming, and searching can be conducted in the haptic or auditory domain; (v) people's comprehension and ability to explore, comprehend, and make inferences about various non-visual interpretations of complex visual displays (e.g. maps and diagrams); and (vi) the effectiveness of using a haptic mouse with a 2" square motion domain to search a 14" screen (i.e. scale effects).
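The "sound painting" idea amounts to mapping a data value under the cursor to an audible pitch, so a gradient can be heard as the finger sweeps across the display. A minimal sketch, with hypothetical frequency limits:

```python
# Map a data value (temperature, elevation, density...) onto a pitch.
# The 200-2000 Hz range is an illustrative assumption.
def value_to_pitch(value, vmin, vmax, f_low=200.0, f_high=2000.0):
    """Linearly map a data value onto an audible frequency range (Hz)."""
    if vmax == vmin:
        return f_low
    t = (value - vmin) / (vmax - vmin)
    t = max(0.0, min(1.0, t))  # clamp out-of-range values
    return f_low + t * (f_high - f_low)

elevation_row = [0, 250, 500, 750, 1000]  # metres along one map transect
print([round(value_to_pitch(v, 0, 1000)) for v in elevation_row])
# -> [200, 650, 1100, 1550, 2000]
```

Sweeping the cursor along this transect would produce a steadily rising tone, conveying the upward gradient without vision.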

Tuesday, 21 May 2013

Supporting Accessibility for Blind and Vision-impaired People With a Localized Gazetteer and Open Source Geotechnology

Rice, M.T., Aburizaiza, A.O., Jacobson, R.D., Shore, B.M., and Paez, F.I. (2012) Supporting Accessibility for Blind and Vision-impaired People With a Localized Gazetteer and Open Source Geotechnology. Transactions in GIS, 16(2): 177-190.
Disabled people, especially the blind and vision-impaired, are challenged by many transitory hazards in urban environments, such as construction barricades, temporary fencing across walkways, and obstacles along curbs. These hazards present a problem for navigation, because they typically appear in an unplanned manner and are seldom included in databases used for accessibility mapping. Tactile maps are a traditional tool used by blind and vision-impaired people for navigation through urban environments, but such maps are not automatically updated with transitory hazards. As an alternative to static content on tactile maps, we use volunteered geographic information (VGI) and an Open Source system to provide updates of local infrastructure. These VGI updates, contributed via voice, text message, and e-mail, use geographic descriptions containing place names to describe changes to the local environment. After they have been contributed and stored in a database, we georeference VGI updates with a detailed gazetteer of local place names including buildings, administrative offices, landmarks, roadways, and dormitories. We publish maps and alerts showing transitory hazards, including location-based alerts delivered to mobile devices. Our system is built with several technologies including PHP, JavaScript, AJAX, the Google Maps API, PostgreSQL, an Open Source database, and PostGIS, its spatial extension. This article provides insight into the integration of user-contributed geospatial information into a comprehensive system for use by the blind and vision-impaired, focusing on currently developed methods for geoparsing and georeferencing using a gazetteer.
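The geoparsing step can be sketched as a simple scan of a contributed update for known gazetteer names (the entries below are hypothetical; the project stored its gazetteer in PostgreSQL/PostGIS rather than in code):

```python
# Hypothetical localized gazetteer: place name -> (lat, lon)
GAZETTEER = {
    "innovation hall": (38.8293, -77.3074),
    "student union": (38.8299, -77.3067),
}

def geoparse(update_text):
    """Return (place, (lat, lon)) pairs for gazetteer names found in the text."""
    lower = update_text.lower()
    return [(name, coords) for name, coords in GAZETTEER.items()
            if name in lower]

update = "Temporary fencing across the walkway near Innovation Hall"
print(geoparse(update))
```

Each matched place name anchors the free-text report to coordinates, after which the hazard can be published on the map and pushed out as a location-based alert.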

Sunday, 19 May 2013

Implementing Auditory Icons in Raster GIS

MacVeigh, R. and Jacobson, R.D. (2007) Implementing Auditory Icons in Raster GIS. Proceedings of the 13th International Conference on Auditory Display, Montréal, Québec, Canada, 530-535.


This paper describes a way to incorporate sound into a raster-based classified image. Methods for determining the sound location, amplitude, and type, and for creating a layer to store this information, are described. Hurdles are discussed and suggestions for overcoming them are presented. As humans we rely on our senses to help us navigate the world: sight, sound, touch, taste, and smell all help us perceive our environment. Although we sometimes take vision for granted, our other senses play as important a role in our daily lives. Even with all these senses at our disposal, conventional GIS rarely do much more than convey their information visually. We demonstrate an auditory display with a sample implementation using a classified raster image, commonly used in GIS analysis. This was achieved using a spatial sonification algorithm initially created in a Java environment. The ultimate aim of this work is to develop an interactive mapping technology that fully incorporates auditory display across a variety of platforms and applications. Such a tool has the potential to be of great benefit for displaying multivariate information in complex information displays.
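A toy version of the approach, assuming hypothetical class codes and sound files: each class in the raster maps to a distinct auditory icon whose amplitude decays with distance from the cursor cell:

```python
# Hypothetical land-cover classes and their auditory icons.
SOUNDS = {1: "water.wav", 2: "forest.wav", 3: "urban.wav"}

raster = [
    [1, 1, 2],
    [1, 2, 2],
    [3, 3, 2],
]

def auditory_icon(raster, cursor, cell):
    """Return (sound, amplitude 0..1) for a cell relative to the cursor."""
    (cr, cc), (r, c) = cursor, cell
    dist = ((cr - r) ** 2 + (cc - c) ** 2) ** 0.5
    amplitude = 1.0 / (1.0 + dist)  # quieter with distance
    return SOUNDS[raster[r][c]], round(amplitude, 3)

print(auditory_icon(raster, cursor=(0, 0), cell=(2, 2)))
```

Playing the icons for cells around the cursor, attenuated by distance, gives a listener a sense of the spatial arrangement of classes without looking at the image.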


Saturday, 18 May 2013

Multimodal Web-GIS: Augmenting Map Navigation and Spatial Data Visualization with Voice Control

Jacobson, R.D., and Sam, K. (2006) Multimodal Web-GIS: Augmenting Map Navigation and Spatial Data Visualization with Voice Control, AutoCarto 2006, June 26-28, Electronic Proceedings.


This paper describes the design and architecture of a prototype project implemented to augment the navigation and visualization of geospatial data in a web-GIS application. The project leverages existing commercial off-the-shelf (COTS) web-GIS components and the open-specification Speech Application Language Tags (SALT) as building blocks for creating a multimodal web-GIS application. The paper addresses how the different technology components were applied to create a multimodal interface for navigation, interaction, and feedback in the web-based GIS application. The design, integration process, and architecture of the prototype application are covered as part of this project report.


Friday, 17 May 2013

Haptic Soundscapes: Developing novel multi-sensory tools to promote access to geographic information

Jacobson, R.D. (2004) Haptic Soundscapes: Developing novel multi-sensory tools to promote access to geographic information. In: Janelle, D., Warf, B., and Hansen, K. (eds.) WorldMinds: Geographical Perspectives on 100 Problems. Kluwer: Dordrecht, pp. 99-103.


This essay explores the critical need for developing new tools to promote access to geographic information that has throughout history been conventionally represented by maps. This problem is especially acute for vision-impaired individuals. The need for new tools to access map-like information is driven by the changing nature of maps, from static paper-based products to digital representations that are interactive, dynamic, and distributed across the Internet. This revolution in the content, display, and availability of geographic representations generates both a significant problem and an opportunity. The problem is that for people without sight there is a wealth of information that is inaccessible due to the visual nature of computer displays. At the same time, the digital nature of geographic information provides an opportunity to make information accessible to non-visual users by presenting it in different sensory modalities in computer interfaces, such as speech, touch, sound, and haptics (computer-generated devices that allow users to interact with and feel information).


Thursday, 16 May 2013

GIS and people with visual impairments or blindness: Exploring the potential for education, orientation, and navigation

Jacobson, R.D. and Kitchin, R.M. (1997) GIS and people with visual impairments or blindness: Exploring the potential for education, orientation, and navigation. Transactions in GIS, 2(4), 315-332.


GIS, with their predominantly visual communication of spatial information, may appear to have little to offer people with visual impairments or blindness. However, because GIS store and manage the spatial relations between objects, alternative, non-visual ways to communicate this information can be utilized. As such, modified GIS could provide people with visual impairments with access to detailed spatial information that would aid spatial learning, orientation, and spatial choice and decision making. In this paper, we explore the ways that GIS have been, and might be, adapted for use by people with visual impairments or blindness. We review current developments, report upon a small experimental study that compares the ability of GIS-based and various adaptive technologies to communicate spatial information using non-visual media, and provide an agenda for future research. We argue that adapted GIS hold much promise for improving the quality of life of visually impaired people by increasing mobility and independence.


Friday, 3 May 2013

Haptic Soundscapes

Towards making maps, diagrams and graphs accessible to visually impaired people 

The aim of this research project is to develop and evaluate haptic soundscapes. This allows people with little or no vision to interact with maps, diagrams and graphs displayed via dissemination media, such as the World Wide Web, through sound, touch and force feedback. Although of principal utility for people with severe visual impairments, it is anticipated that this interface will allow informative educational resources for children and people with learning difficulties to be developed and accessed through the Internet. The research project offers a simple, yet innovative solution to accessing spatial data without the need for vision. It builds upon previous work carried out in various departments at UCSB, and fosters inter-disciplinary links and cooperation between usually unconnected research groups. The research hopes to further knowledge and understanding in this emerging field and also to offer practical results that will impact on people's lives. It is strongly felt that the development of the project will lead to continued external funding, and it is our hope that this project will act as a springboard to further research in which UCSB will be a key component.

Further development, usability testing, and expansion
The Haptic Soundscapes project has developed a set of audio-tactile mapping tools to help blind people access spatial information and to aid research in multi-modal spatial cognition. These tools offer blind people access to the geographic world they cannot otherwise fully experience, creating opportunities for orientation, navigation, and education. Spatial knowledge from maps, charts, and graphs is obtained through display and interaction with sound, touch, and force-feedback devices. Individuals can use the audio-tactile mapping tools to explore an unknown environment or to create an audio-tactile map from images displayed on a computer screen. These audio-tactile maps can be disseminated over the Internet or used in educational settings. Next year, several objectives are planned for the Haptic Soundscapes project. These include cognitive experiments to assess a user's ability to navigate within a scene, between adjacent scenes, and between scenes of different scales using the audio-tactile mapping tools. We will also expand the capability of the audio-tactile mapping system to include text-to-speech synthesis and real-time multi-dimensional sound representation. Several off-campus funding proposals will be submitted. Finally, we will showcase the tools developed in the course of this project by expanding our campus demonstrator: an interactive, navigable audio-tactile map of the UCSB campus.
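A minimal sketch of the audio-tactile lookup at the heart of such tools, assuming hypothetical rectangular regions in screen coordinates (real audio-tactile maps use arbitrary polygons and actual speech or sound output):

```python
# When the finger (or haptic cursor) enters a mapped region, the region's
# label is spoken. Region names and boxes below are hypothetical.
REGIONS = [
    {"label": "Library", "box": (100, 100, 200, 180)},   # x0, y0, x1, y1
    {"label": "Lagoon",  "box": (250, 120, 400, 300)},
]

def label_at(x, y):
    """Return the label under screen point (x, y), or None in open space."""
    for region in REGIONS:
        x0, y0, x1, y1 = region["box"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return region["label"]
    return None

print(label_at(150, 150))  # inside the first box
print(label_at(10, 10))    # open space: no label
```

In the full system the returned label would be passed to a text-to-speech engine, and force feedback would let the user feel the region boundary as the cursor crosses it.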