Purdue CS 59000 - A Mobile-Cloud Collaborative Traffic Lights Detector for Blind Navigation

Documents in this Course
Lecture 4

Lecture 4

42 pages

Lecture 6

Lecture 6

38 pages

Load more
Upgrade to remove ads

This preview shows page 1-2 out of 5 pages.

Save
View Full Document
Premium Document
Do you want full access? Go Premium and unlock all 5 pages.
Access to all documents
Download any document
Ad free experience
Premium Document
Do you want full access? Go Premium and unlock all 5 pages.
Access to all documents
Download any document
Ad free experience

Upgrade to remove ads
Unformatted text preview:

A Mobile-Cloud Collaborative Traffic Lights Detector for Blind Navigation

Pelin Angin, Bharat Bhargava
Department of Computer Science, Purdue University, West Lafayette, IN, USA
{pangin, bb}@cs.purdue.edu

Sumi Helal
Dept. of Computer and Information Science & Engineering, University of Florida, Gainesville, FL, USA
[email protected]

Abstract—Context-awareness is a critical aspect of safe navigation, especially for the blind and visually impaired in unfamiliar environments. Existing mobile devices for context-aware navigation fall short in many cases because they depend on specific infrastructure and have only limited access to resources that could provide a wealth of contextual clues. In this work, we propose a mobile-cloud collaborative approach to context-aware navigation, in which we exploit both the computational power of resources made available by cloud computing providers and the wealth of location-specific resources available on the Internet to provide maximal context-awareness. The proposed system architecture also has the advantages of being extensible and having minimal reliance on infrastructure, allowing for wide usability. A traffic lights detector was developed as an initial application component of the proposed system, and experiments were performed to test its suitability for the real-time nature of the problem.

Keywords—mobile; cloud; navigation; context-awareness

I. INTRODUCTION

Mobility is important for quality of life. The ability to see, hear, and experience the context of the environment is critical for safety. Visually impaired or blind persons rely on their previous knowledge of an environment to navigate, usually with help from guide dogs or the white cane, which leaves them short of the desired level of mobility and context-awareness, especially in unknown environments. Existing navigation systems for blind and visually impaired people provide some level of help, but fail to address the important aspects of context-awareness, safety, and usability. They are also neither open nor designed for extensibility, which prevents them from integrating newer, more advanced technology and the wealth of relevant Internet resources. Most of these systems depend heavily on the underlying infrastructure, limiting their use in places where the infrastructure requirements are not met. The context information provided to the user by available devices is usually very limited, and devices that aim to provide more detailed information (such as recognizing particular classes of objects) sacrifice portability, which is undesirable, especially on long trips.

Much can be done to enhance the experience and increase the safety and capabilities of individuals navigating freely in buildings, college campuses, and cities. By providing maximal awareness of the environment and its contexts, without requiring any modification to existing infrastructure, the quality and experience of navigation can be significantly enhanced for blind users as well as other users in unfamiliar environments. The urban world is becoming more complex every day with advances in technology, whose products, such as quiet cars, make it more difficult, especially for the blind and visually impaired, to fully sense their environment.
Existing route planning devices provide guidance in the form of directions to follow, but fail to address important safety issues such as when to cross at an intersection, which requires awareness of the status of traffic lights and of dynamic objects such as cars. Accurate and fast object recognition and obstacle detection, which require computationally intensive image and video processing algorithms, are becoming increasingly important for systems aiming to help the blind navigate independently and safely. The limited computational capacity and battery life of currently available mobile devices make fast and accurate image processing infeasible when the devices are used in isolation (i.e., without communicating with any external resources for the computations).

In this work, we propose a context-rich, open, accessible, and extensible navigation system that raises the quality of the navigation experience to a higher standard. We use currently available infrastructure to develop an easy-to-use, portable, affordable device that can accommodate new services for high-quality navigation as they become available. This paper focuses in particular on a traffic lights detector developed as an initial component of the proposed context-aware navigation system, which we aim to build on in future work.

The rest of the paper is organized as follows: Section II discusses previous work on mobile navigation devices; Section III describes the proposed mobile-cloud collaborative blind navigation system architecture; Section IV describes the mobile traffic light recognizer we developed as an initial component of the proposed system; experimental results are provided in Section V; and Section VI concludes the paper with directions for future work.

II. RELATED WORK

Systems based on different technologies have been proposed to help the blind and visually impaired find their way in indoor and outdoor locations. After the introduction of the Global Positioning System (GPS) in the late 1980s, many GPS-based systems to help the visually impaired navigate outdoors were proposed, and some were commercially released. Among these systems are LoadStone GPS (http://www.loadstone-gps.com/), Wayfinder Access (http://www.wayfinderaccess.com/), BrailleNote GPS and Trekker by Humanware (http://www.humanware.com), and StreetTalk by Freedom Scientific (http://www.freedomscientific.com). The disadvantage of these GPS-based devices is that their use is limited to outdoor environments and they provide limited contextual information during navigation. Drishti [1], developed at the University of Florida, provides both indoor and outdoor navigation help, taking into account the dynamic changes
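
The preview ends above, before the sections describing the system architecture (Section III) and the traffic light recognizer (Section IV). As a rough illustration of the mobile-cloud division of labor argued for in the Introduction, the sketch below shows a client that captures camera frames and offloads the image processing to a cloud service instead of running it on the device. The endpoint URL, upload format, and response fields are assumptions made for illustration only; the paper's actual client and service interface are not part of this preview.

# Illustrative sketch only: the endpoint URL, upload format, and response
# shape are hypothetical and are not taken from the paper.
import cv2        # OpenCV, used here for camera capture and JPEG encoding
import requests   # simple HTTP client for the cloud round trip

CLOUD_ENDPOINT = "https://example-cloud-service/detect-traffic-light"  # hypothetical

def detect_via_cloud(frame):
    """Encode a camera frame and send it to the cloud-side detector."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        return None
    resp = requests.post(
        CLOUD_ENDPOINT,
        files={"image": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
        timeout=2.0,  # keep the round trip short; guidance must be near real time
    )
    resp.raise_for_status()
    return resp.json()  # assumed response shape, e.g. {"state": "red"}

def main():
    cam = cv2.VideoCapture(0)
    try:
        while True:
            grabbed, frame = cam.read()
            if not grabbed:
                break
            result = detect_via_cloud(frame)
            if result is not None:
                print("Traffic light state:", result.get("state"))
    finally:
        cam.release()

if __name__ == "__main__":
    main()

The point of this structure is that the mobile side does only capture, compression, and a network call, while the computationally intensive recognition runs on rented cloud resources, which matches the battery and CPU constraints discussed in the Introduction.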
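
Because the detector itself (Section IV) is outside the preview, the following is only a generic stand-in for what a cloud-side recognizer might do once a candidate traffic light region has been located: classify the lamp state by simple HSV color thresholding. The color ranges and the assumption of an already-cropped region are illustrative and should not be read as the authors' method.

# Generic HSV-threshold classifier for an already-cropped traffic light image.
# This is NOT the algorithm from the paper; it only illustrates the kind of
# image processing the cloud-side component could perform.
import cv2
import numpy as np

# Rough HSV ranges for red, yellow, and green lamps (tunable assumptions).
COLOR_RANGES = {
    "red":    [((0, 100, 100), (10, 255, 255)), ((170, 100, 100), (180, 255, 255))],
    "yellow": [((18, 100, 100), (35, 255, 255))],
    "green":  [((45, 80, 80), (95, 255, 255))],
}

def classify_light(bgr_crop):
    """Return the color whose mask covers the most pixels in the crop."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    counts = {}
    for color, ranges in COLOR_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        counts[color] = int(cv2.countNonZero(mask))
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "unknown"

if __name__ == "__main__":
    crop = cv2.imread("light_crop.jpg")  # hypothetical pre-cropped signal region
    if crop is not None:
        print("Detected state:", classify_light(crop))

A complete system would also need to locate the light in the full frame and smooth decisions over consecutive frames, but those steps belong to the parts of the paper not included in this preview.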

