The movement of heads

We have a demo now working at our lab at NYU in which two people sitting at different computers, each wearing an Oculus Rift DK2 virtual reality headset, can “see” the other’s head as a box with a face painted on it. Getting to this point required a lot of hard work by students here — mainly Zhu Wang.
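The essence of a demo like this is streaming each user’s head pose — a position plus an orientation — between the two machines. Here is a minimal sketch of what that wire format might look like; the format, the function names, and the 28-byte packet layout are all my assumptions for illustration, not the lab’s actual code.

```python
import struct

# Hypothetical wire format: a head pose is a position (x, y, z) plus an
# orientation quaternion (qx, qy, qz, qw), packed as seven little-endian
# 32-bit floats — 28 bytes per update, small enough to send every frame.
POSE_FORMAT = "<7f"

def pack_pose(position, quaternion):
    """Serialize one head pose into 28 bytes for transmission."""
    return struct.pack(POSE_FORMAT, *position, *quaternion)

def unpack_pose(data):
    """Deserialize 28 bytes back into (position, quaternion) tuples."""
    values = struct.unpack(POSE_FORMAT, data)
    return values[:3], values[3:]

# Round-trip example: a head one meter up, turned 90 degrees about
# the vertical axis.
pos = (0.0, 1.0, 0.0)
quat = (0.0, 0.7071, 0.0, 0.7071)
packet = pack_pose(pos, quat)
recovered_pos, recovered_quat = unpack_pose(packet)
```

In practice each headset would send a packet like this many times per second (e.g. over UDP), and the receiving machine would apply the pose to the rendered box each frame — which is all the bandwidth head-only presence needs.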

We are deliberately keeping it simple: No fancy graphics or high polygon count, just head movement. And the results are spectacular.

When you are in the experience, even though all you can see is a silly box floating in space, you can really “see” the other person via their movement. And the longer you look at them, the more real and vivid they seem — as though your mind is relearning how to see that person from only motion cues.

For example, after the obvious things, like looking at something, or nodding yes or shaking the head no, we tried doing an “I don’t know” gesture, which mainly consists of shrugging the shoulders while making a subtle little tilt of the head.

We could each plainly see that the other person was shrugging their shoulders, even though no shoulders were visible. It seems we were effectively transmitting the “shrug” gesture just from the subtle motion and timing of our heads.

Of course we will continue to add things like upper body movement, hands and fingers, eye gaze, mouth position, movement of objects, and more, to the information transmitted. Yet I wonder whether we’ve already hit a kind of perceptual sweet spot just through the movement of heads.

One thought on “The movement of heads”

  1. In the videogame world, people often use head movements, and jumping and crouching as some sort of rudimentary communication or self-expression. I’d imagine that those actions are like animal grunts compared to what you guys will see in your VR setups.
