Interactive Virtual Human



For a virtual character to come across as responsive and emotive in real time, it must appear first to sense the user's movements, facial expressions, hand gestures, and even intent (for example, in a rock-paper-scissors game), and then to process those sensory signals. To support this, I designed, implemented, and tested several real-time biometric, gesture, and movement sensors in our system. A data glove is used to capture the user's hand gestures.
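As a rough illustration of this step, the sketch below shows one way a glove reading could be mapped to a rock-paper-scissors gesture. The five normalized flexion values, the threshold, and the function names are assumptions for the example, not the project's actual interface.

    # Minimal sketch (not the project's code): classify a rock-paper-scissors
    # gesture from data-glove finger-flexion readings. The five values are
    # assumed to be normalized to [0, 1], where 0 = fully extended and
    # 1 = fully flexed; names and thresholds are illustrative.

    from typing import Sequence

    FLEX_THRESHOLD = 0.6  # assumed cutoff between "extended" and "flexed"

    def classify_rps(flex: Sequence[float]) -> str:
        """Map five finger-flexion readings (thumb..pinky) to a gesture."""
        extended = [f < FLEX_THRESHOLD for f in flex]
        n_extended = sum(extended)

        if n_extended == 0:
            return "rock"          # all fingers curled
        if n_extended >= 4:
            return "paper"         # open hand
        # index + middle extended, ring + pinky curled -> scissors
        if extended[1] and extended[2] and not extended[3] and not extended[4]:
            return "scissors"
        return "unknown"

    # Example: mostly curled hand with index and middle extended
    print(classify_rps([0.9, 0.2, 0.3, 0.8, 0.85]))  # -> "scissors"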



A Microsoft Kinect 3D camera was used to locate the user's head and guide the virtual character's gaze. In addition, the character maintains his personal space, adjusting his position when the user gets too close.
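A minimal sketch of this behaviour, assuming a tracked 3D head position and simple vector maths, is given below; the 1.2 m comfort distance and all names are illustrative rather than values taken from the system.

    # Minimal sketch (illustrative): aim the character's gaze at the tracked
    # head position and back away when the user enters the character's
    # personal space. Positions are (x, y, z) tuples in metres.

    import math

    PERSONAL_SPACE_M = 1.2  # assumed comfort distance

    def gaze_yaw_pitch(char_eyes, user_head):
        """Yaw/pitch (radians) that point the character's eyes at the user's head."""
        dx = user_head[0] - char_eyes[0]
        dy = user_head[1] - char_eyes[1]
        dz = user_head[2] - char_eyes[2]
        yaw = math.atan2(dx, dz)                    # rotation about the vertical axis
        pitch = math.atan2(dy, math.hypot(dx, dz))  # look up or down
        return yaw, pitch

    def personal_space_step(char_pos, user_pos):
        """Return a small retreat vector if the user is closer than the comfort distance."""
        dx, dz = char_pos[0] - user_pos[0], char_pos[2] - user_pos[2]
        dist = math.hypot(dx, dz)
        if dist >= PERSONAL_SPACE_M or dist == 0.0:
            return (0.0, 0.0)                        # no adjustment needed
        scale = (PERSONAL_SPACE_M - dist) / dist     # move just enough to restore the gap
        return (dx * scale, dz * scale)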



Various sensors, such as the Kinect 3D camera and overhead cameras, were used to send streams of input, including the users' coordinates in the space, the users' heights, and environmental information such as noise and light levels. The character reacts to these inputs dynamically and in real time. The SHORE application, developed by the Fraunhofer research center, receives the input stream from a webcam and forwards information such as the emotion expressed in the user's face (along with estimated age and sex) to RealAct. An electromyography (EMG) sensor measures the activity of the facial muscles to detect the user's smile in real time.
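As an illustration of the smile-detection step, the sketch below rectifies a raw EMG stream, smooths it with a moving average, and compares the resulting envelope to a resting baseline; the window size, threshold factor, and class name are assumptions, not the project's parameters.

    # Minimal sketch (assumptions throughout): detect a smile from a raw EMG
    # stream over a facial muscle by rectifying the signal, smoothing it with
    # a moving average, and comparing the envelope to a calibrated resting
    # baseline.

    from collections import deque

    class SmileDetector:
        def __init__(self, baseline: float, window: int = 50, factor: float = 3.0):
            self.baseline = baseline          # resting-muscle envelope level
            self.factor = factor              # how far above baseline counts as a smile
            self.samples = deque(maxlen=window)

        def update(self, emg_sample: float) -> bool:
            """Feed one raw EMG sample; return True while a smile is detected."""
            self.samples.append(abs(emg_sample))              # full-wave rectification
            envelope = sum(self.samples) / len(self.samples)  # moving-average smoothing
            return envelope > self.factor * self.baseline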