

Journal of Information Science and Engineering, Vol. 27, No. 3, pp. 1123-1136


Intention Learning From Human Demonstration


HOA-YU CHAN, KUU-YOUNG YOUNG+ AND HSIN-CHIA FU
Department of Computer Science 
+Department of Electrical Engineering 
+Vision Research Center 
National Chiao Tung University 
Hsinchu, 300 Taiwan


    Equipped with improved sensing and learning capabilities, robots nowadays are expected to perform versatile tasks. To relieve the engineer of the burden of detailed analysis and programming, the concept has been proposed that the robot may learn how to execute a task from human demonstration by itself. Following this idea, in this paper we propose an approach for the robot to learn the intention of the demonstrator from the trajectory produced during task execution. The proposed approach identifies the portions of the trajectory that correspond to delicate and skillful maneuvering. These portions, referred to as motion features, may reveal the intention of the demonstrator. Because the trajectory may result from many possible intentions, finding the correct ones poses a severe challenge. We first formulate the problem in a tractable mathematical form and then employ dynamic programming for the search. Experiments based on the pouring and fruit-jam tasks are performed to demonstrate the proposed approach, in which the derived intention is used to execute the same task under different experimental settings.
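    The abstract does not give the paper's actual formulation, so the following is only an illustrative sketch of the kind of dynamic-programming search it mentions: a classic optimal-segmentation DP that splits a 1-D trajectory into k segments minimizing total within-segment variance. The cost function, the choice of k, and the idea of flagging high-variance segments as candidate "motion features" are all assumptions made for illustration, not the authors' method.

```python
def segment_trajectory(traj, k):
    """Split traj into k contiguous segments minimizing the total
    sum of squared deviations from each segment's mean (a standard
    DP segmentation; NOT the paper's exact formulation).
    Returns the sorted start indices of segments 2..k."""
    n = len(traj)
    # Prefix sums let us evaluate any segment's cost in O(1).
    prefix, prefix_sq = [0.0], [0.0]
    for x in traj:
        prefix.append(prefix[-1] + x)
        prefix_sq.append(prefix_sq[-1] + x * x)

    def seg_cost(i, j):
        # Squared deviation of traj[i..j] from its mean.
        m = j - i + 1
        s = prefix[j + 1] - prefix[i]
        sq = prefix_sq[j + 1] - prefix_sq[i]
        return sq - s * s / m

    INF = float("inf")
    # dp[s][j]: best cost covering traj[0..j] with s segments.
    dp = [[INF] * n for _ in range(k + 1)]
    back = [[-1] * n for _ in range(k + 1)]
    for j in range(n):
        dp[1][j] = seg_cost(0, j)
    for s in range(2, k + 1):
        for j in range(s - 1, n):
            for i in range(s - 1, j + 1):
                c = dp[s - 1][i - 1] + seg_cost(i, j)
                if c < dp[s][j]:
                    dp[s][j] = c
                    back[s][j] = i
    # Walk the back-pointers to recover segment boundaries.
    bounds, j, s = [], n - 1, k
    while s > 1:
        i = back[s][j]
        bounds.append(i)
        j, s = i - 1, s - 1
    return sorted(bounds)
```

For example, `segment_trajectory([0, 0, 0, 5, 5, 5, 9, 9, 9], 3)` returns `[3, 6]`, cleanly separating the three constant-level phases; in the intention-learning setting, such boundaries would delimit candidate trajectory portions for further analysis.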


Keywords: intention learning, human demonstration, motion feature, robot imitation, skill transfer
