Gesture Learning by Imitation Architecture for a Social Robot
Abstract
Learning by imitation allows people to teach social robots new tasks through natural and intuitive interaction channels, of which vision is the main one. This chapter describes a learning-by-imitation architecture that uses stereo vision to perceive, recognize, learn, and imitate social gestures. The description is based on the identification of a set of generic components that can be found in any learning-by-imitation architecture. It highlights the main contribution of the proposed architecture: the use of an inner human model that helps to perceive, recognize, and learn human gestures, allowing different robots to share the same perceptual and knowledge modules. Experimental results show that the proposed architecture meets the requirements of learning-by-imitation scenarios and can be integrated into complete software structures for social robots, involving complex attention mechanisms and decision layers.
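The generic components the abstract names (perception, recognition, learning, imitation, mediated by an inner human model) can be sketched as a minimal pipeline. This is an illustrative assumption, not the chapter's actual implementation: all names, the distance-based recognizer, and the identity retargeting are hypothetical placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class GestureModel:
    """Inner human model: stores learned gesture trajectories by label,
    so perception and knowledge can be shared across different robots."""
    templates: dict = field(default_factory=dict)

    def learn(self, label, trajectory):
        # Learning component: store the demonstrated trajectory.
        self.templates[label] = trajectory

    def recognize(self, trajectory):
        # Recognition component: nearest-template match by pointwise
        # distance (a placeholder for the chapter's actual method).
        best, best_d = None, float("inf")
        for label, ref in self.templates.items():
            d = sum(abs(a - b) for a, b in zip(trajectory, ref))
            if d < best_d:
                best, best_d = label, d
        return best

def perceive(stereo_frames):
    # Perception component: placeholder for stereo-vision feature
    # extraction; here frames are simply flattened into one trajectory.
    return [v for frame in stereo_frames for v in frame]

def imitate(model, label):
    # Imitation component: map the learned human trajectory onto the
    # robot (identity mapping here; a real system retargets kinematics).
    return model.templates[label]

# Usage: teach one gesture from a demonstration, then recognize a
# noisy re-observation of it and reproduce the learned trajectory.
model = GestureModel()
model.learn("wave", perceive([[0.0, 1.0], [0.5, 1.5]]))
observed = perceive([[0.1, 1.0], [0.5, 1.4]])
print(model.recognize(observed))
print(imitate(model, "wave"))
```

The point of routing everything through `GestureModel` is the abstract's central claim: because the model describes the human rather than any particular robot body, the perceptual and knowledge modules can be reused across robots, with only the imitation step being platform-specific.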