Virtual cognitive model for Miyazawa Kenji based on speech and facial image recognition

Authors: Hamido Fujita, Jun Hakura and Masaki Kurematsu

In this paper we present a virtual interactive model based on a cognitive model of Miyazawa Kenji. We created a computer model of Kenji's cognitive thinking as reflected in his literary storytelling, and the user can interact with the virtual Kenji in real time. Facial gestures were collected and analyzed with a motion capture system consisting of six cameras, arranged to capture the full range of emotional facial gestures of subjects who read and recorded assigned Kenji manuscripts for the experiment. Each subject had 50 markers of 5 mm size attached to all parts of the face (lips, mouth, eyebrows, moustache, eyelashes, forehead). The emotional linkage between these facial parts and cognitive emotion was analyzed and recorded. We propose a facial recognition database based on the FACS model, together with a speech synthesis component that analyzes the emotional content of human speech. These two synthesized components are reconstructed on a hologram that cognitively represents the virtual Kenji character, whose facial gestures harmonize with the speech and facial images generated by the system. The system also interacts with the human user in real time, based on responses observed from the user and on inference performed by the system.
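As a rough illustration of how a FACS-based database can link facial movements to emotion labels, the sketch below maps sets of FACS Action Units (AUs) to basic emotions by prototype overlap. The AU codes follow Ekman's FACS conventions, but the specific prototypes, the scoring rule, and the function names are illustrative assumptions, not the database design described in the paper.

```python
# Hypothetical sketch: classifying an emotion from detected FACS Action
# Units (AUs). The prototypes below are simplified, commonly cited
# AU combinations; they are assumptions, not the paper's actual schema.

# Emotion prototypes expressed as sets of Action Units.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},   # brow raisers + upper lid raiser + jaw drop
    "sadness": {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
    "anger": {4, 5, 7, 23},      # brow lowerer + lid/ lip tighteners
}

def classify_emotion(active_aus):
    """Return the emotion whose AU prototype best overlaps the active AUs."""
    best, best_score = "neutral", 0.0
    for emotion, proto in EMOTION_PROTOTYPES.items():
        score = len(proto & active_aus) / len(proto)  # fraction of prototype matched
        if score > best_score:
            best, best_score = emotion, score
    return best

print(classify_emotion({6, 12}))        # prototypical smile -> happiness
print(classify_emotion({1, 2, 5, 26}))  # raised brows + jaw drop -> surprise
```

In a full pipeline, the active AUs would themselves be estimated from the motion-capture marker displacements before this classification step.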