

Journal of Information Science and Engineering, Vol. 29 No. 6, pp. 1265-1283


Video-Driven Creation of Virtual Avatars by Component-based Transferring of Facial Expressions


CHENG-CHIN CHIANG, ZHIH-WEI CHEN AND CHEN-NING YANG
Department of Computer Science and Information Engineering 
National Dong Hwa University 
Hualien, 974 Taiwan


    This paper proposes an efficient and economical video-driven technique that enables the instant creation of a wide diversity of virtual avatars and the automatic synthesis of vivid facial animations. The proposed technique addresses the expression transfer problem, i.e., transferring a given facial expression of a source human character to the corresponding expression of a synthesized avatar. To tackle this problem, we propose a component-based approach that is more appealing than existing approaches, which treat the whole face as a single unit for expression transfer. Our approach attains a much higher diversity in synthesizing virtual avatars and facial expressions by composing the synthesized target face from the facial components of different avatars. The proposed method transfers the synthesis parameters acquired from the source human face to those of the target avatar face in a way that complies well with the person-specific characteristics of the target avatar. Additionally, color inconsistencies among facial components taken from different avatars are removed. Experimental results demonstrate that the proposed method can achieve interesting and colorful transfers of facial expressions and instantly synthesize a large diversity of virtual avatars.


Keywords: facial expression synthesis, active appearance model, color correction, facial feature tracking, virtual avatar
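    To illustrate the general idea of component-based expression transfer described in the abstract, the sketch below maps per-component synthesis parameters (e.g., AAM-style parameters) from a source face onto a target avatar. The component split, the function names, and the specific transfer rule (offsetting from a neutral pose and rescaling by per-component standard deviations so the result stays within the target's own parameter range) are assumptions made for illustration only; they are not the authors' published formulation.

import numpy as np

COMPONENTS = ["left_eye", "right_eye", "mouth"]  # hypothetical component split

def transfer_expression(src_params, src_neutral, src_std, tgt_neutral, tgt_std):
    """Map source expression parameters onto a target avatar, per component.

    All arguments are dicts mapping a component name to a 1-D parameter
    vector (or per-dimension standard deviation).  The source's deviation
    from its neutral pose is normalized and re-expressed in the target's
    own parameter range, which is one simple way to respect the
    person-specific characteristics of the target avatar.
    """
    tgt_params = {}
    for c in COMPONENTS:
        # Normalized deviation of the source component from its neutral pose.
        delta = (src_params[c] - src_neutral[c]) / (src_std[c] + 1e-8)
        # Re-scale the deviation into the target component's parameter range.
        tgt_params[c] = tgt_neutral[c] + delta * tgt_std[c]
    return tgt_params

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 4  # toy parameter dimensionality per component
    src_neutral = {c: np.zeros(dim) for c in COMPONENTS}
    src_std     = {c: np.ones(dim) for c in COMPONENTS}
    tgt_neutral = {c: rng.normal(size=dim) for c in COMPONENTS}
    tgt_std     = {c: 0.5 * np.ones(dim) for c in COMPONENTS}
    src_params  = {c: rng.normal(size=dim) for c in COMPONENTS}
    print(transfer_expression(src_params, src_neutral, src_std,
                              tgt_neutral, tgt_std))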
