Researchers at the Japanese company Different Dimensions have developed a ‘virtual humanoid’ prototype that lets users physically interact with a computer-generated character seen through a head-mounted display (HMD).
The so-called “virtual humanoid” is an example of “mixed reality” – a technology which takes augmented reality to the next level by giving virtual objects a physical counterpart in the real world. Video images from the user’s perspective – taken from the camera mounted on the HMD – are combined with a computer-generated character that moves in sync with the robot. The user can then touch and physically interact with the virtual character through the robot.
The computer-generated character image is superimposed over the robot using green-screen technology similar to that used in TV and film. Strain gauges in the robot’s arms detect movement, allowing users to physically interact with it.
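The compositing step described above can be sketched in a few lines: wherever the camera frame shows the green-covered robot, the rendered character is drawn instead. This is a minimal illustrative sketch of generic chroma keying, not the product's actual pipeline; the threshold values are assumptions chosen for clarity.

```python
import numpy as np

def chroma_composite(video, character, g_thresh=100, dominance=1.3):
    """Replace green-dominant pixels in the camera frame with the
    rendered character, as in basic green-screen compositing.

    video, character: HxWx3 uint8 RGB arrays of the same shape.
    g_thresh and dominance are illustrative tuning values.
    """
    r = video[..., 0].astype(int)
    g = video[..., 1].astype(int)
    b = video[..., 2].astype(int)
    # A pixel counts as "green screen" if green is bright
    # and clearly dominates both red and blue.
    mask = (g > g_thresh) & (g > dominance * r) & (g > dominance * b)
    out = video.copy()
    out[mask] = character[mask]
    return out
```

In the real system the rendered character would come from the 3D engine each frame, aligned to the robot's tracked pose before compositing.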
The digital character’s movements are based on those of the robot, allowing the two to sync up. The character can be seen from all angles and can carry on simple conversations through speech recognition and speech synthesis.
The new prototype can tilt and turn its neck and move its arms at the shoulders, elbows, and wrists. The servo motors which power each joint contain position sensors which read the joint angles – data which is fed to the software program controlling the virtual character – thereby synchronizing their movements. The actual software used is called MMDAgent, which creates an interactive 3D anime-style character that can carry on conversations using speech recognition and a synthesized voice.
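The synchronization loop described above – reading each servo's position sensor and copying the joint angle onto the virtual character – can be sketched as follows. This is a hypothetical illustration: the sensor range, joint names, and the `set_bone_rotation` call are assumptions for the sketch, not MMDAgent's real interface.

```python
# Hypothetical servo-to-character sync: ranges and API are illustrative.
RAW_MIN, RAW_MAX = 0, 1023            # assumed position-sensor ADC range
ANGLE_MIN, ANGLE_MAX = -150.0, 150.0  # assumed joint travel in degrees

def raw_to_degrees(raw):
    """Linearly map a position-sensor reading to a joint angle."""
    span = (raw - RAW_MIN) / (RAW_MAX - RAW_MIN)
    return ANGLE_MIN + span * (ANGLE_MAX - ANGLE_MIN)

class CharacterStub:
    """Minimal stand-in for the virtual character's skeleton."""
    def __init__(self):
        self.bones = {}

    def set_bone_rotation(self, name, degrees):
        self.bones[name] = degrees

def sync_pose(servo_readings, character):
    """Copy each robot joint angle onto the matching character bone,
    so the on-screen character mirrors the robot's pose each frame."""
    for joint, raw in servo_readings.items():
        character.set_bone_rotation(joint, raw_to_degrees(raw))
```

Run once per frame, this keeps the character's shoulders, elbows, wrists, and neck in lockstep with the physical robot, which is what makes touching the robot feel like touching the character.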
The original prototype, called U-Tsu-Shi-O-Mi and developed in 2006, was too costly to commercialize and presented several challenges. The new version has been shrunk to 60 percent of its original size and will sell for between $4,800 and $5,300.