
Attentive Visual Memory for Robot Localization

Author(s): Julio Vega (Rey Juan Carlos University, Spain), Eduardo Perdices (Rey Juan Carlos University, Spain) and José María Cañas (Rey Juan Carlos University, Spain)
Copyright: 2014
Pages: 27
Source title: Robotics: Concepts, Methodologies, Tools, and Applications
Source Author(s)/Editor(s): Information Resources Management Association (USA)
DOI: 10.4018/978-1-4666-4607-0.ch038


Abstract

Cameras are among the most relevant sensors in autonomous robots. Two challenges in using them are extracting useful information from the captured images and managing the small field of view of regular cameras. This chapter proposes a visual perceptive system for a robot with an on-board mobile camera that copes with both issues. The system is composed of a dynamic visual memory that stores the information gathered from images, an attention system that continuously chooses where to look, and a visual evolutionary localization algorithm that uses the visual memory as input. The visual memory is a collection of relevant task-oriented objects and 3D segments. Its scope and persistence exceed the camera field of view, so it provides more information about the robot's surroundings and more robustness to occlusions than the current image alone. The control software takes its contents into account when making behavior or navigation decisions. The attention system balances the need to re-observe objects already stored, to explore new areas, and to test hypotheses about objects in the robot's surroundings. A robust evolutionary localization algorithm has been developed that can use either the current instantaneous images or the visual memory. The system has been implemented, and several experiments have been carried out with both simulated and real robots (a wheeled Pioneer and a Nao humanoid) to validate it.
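The abstract does not include code, but the evolutionary localization idea can be illustrated roughly: candidate robot poses are scored by how well the 3D segments stored in the visual memory, re-projected from each candidate pose, match the currently observed segments, and the best candidates are kept and perturbed. The sketch below is a minimal, illustrative Python version under simplifying assumptions (a planar world and a 2D robot frame instead of a full camera projection); names such as Segment3D, Pose, fitness, and evolve are hypothetical and are not taken from the authors' implementation.

```python
import math
import random
from dataclasses import dataclass


# Hypothetical 3D segment stored in the visual memory (planar endpoints for brevity).
@dataclass
class Segment3D:
    x1: float
    y1: float
    x2: float
    y2: float


# Candidate robot pose in the world frame.
@dataclass
class Pose:
    x: float
    y: float
    theta: float


def observe(segment: Segment3D, pose: Pose) -> tuple:
    """Express a stored segment in the robot frame of a candidate pose."""
    def to_robot(px, py):
        dx, dy = px - pose.x, py - pose.y
        c, s = math.cos(-pose.theta), math.sin(-pose.theta)
        return (c * dx - s * dy, s * dx + c * dy)
    return to_robot(segment.x1, segment.y1) + to_robot(segment.x2, segment.y2)


def fitness(pose: Pose, memory, observations) -> float:
    """Score a pose by how closely the memory, seen from that pose,
    matches the segments observed in the current images."""
    error = 0.0
    for seg, obs in zip(memory, observations):
        pred = observe(seg, pose)
        error += sum((p - o) ** 2 for p, o in zip(pred, obs))
    return 1.0 / (1.0 + error)


def evolve(memory, observations, generations=50, pop_size=100):
    """Minimal evolutionary loop: keep the fittest poses and mutate them."""
    population = [Pose(random.uniform(-5, 5), random.uniform(-5, 5),
                       random.uniform(-math.pi, math.pi)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, memory, observations), reverse=True)
        elite = population[: pop_size // 5]
        population = list(elite)
        while len(population) < pop_size:
            parent = random.choice(elite)
            population.append(Pose(parent.x + random.gauss(0, 0.1),
                                   parent.y + random.gauss(0, 0.1),
                                   parent.theta + random.gauss(0, 0.05)))
    return max(population, key=lambda p: fitness(p, memory, observations))


if __name__ == "__main__":
    true_pose = Pose(1.0, 2.0, 0.3)
    memory = [Segment3D(0, 0, 0, 4), Segment3D(0, 4, 4, 4), Segment3D(4, 4, 4, 0)]
    observations = [observe(seg, true_pose) for seg in memory]  # noiseless demo observations
    estimate = evolve(memory, observations)
    print(f"estimated pose: x={estimate.x:.2f} y={estimate.y:.2f} theta={estimate.theta:.2f}")
```

Because the visual memory persists beyond the instantaneous field of view, a loop of this kind can keep scoring pose hypotheses against previously seen segments even when the camera is currently pointed elsewhere, which is the robustness argument the chapter makes.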
