Notes on Future Language, part 2

Technology continues to evolve. But for the near future we are still stuck with the brains we have, which have not changed in any fundamental way for the last 30,000 years.

So when we look at using our hands, in combination with any forthcoming mixed reality technology, to “create things in the air”, we should look at how humans gesture naturally. We are going to focus specifically on gestures made with the hands (as opposed to, say, nodding, shrugging the shoulders, etc.).

There are four basic kinds of meaning people usually create with hand gestures: symbols, pointing, beats, and icons. Symbols are culturally determined gestures with agreed-upon meanings. Some examples are waving hello, fist bumping, crossing fingers, or shaking hands.

We usually point at things while saying deictic words like “this” or “that”. Beats are gestures we make while talking, usually done without really thinking, like chopping hand motions. Beats come so naturally that we even use them when talking on the phone.

Finally, icons are movements we make during speech that correlate with something in the physical world. Examples are holding the hands apart while saying “this big”, rubbing the hands together while talking about feeling cold, or holding out one hand palm down to indicate height.

Some of these types of gestures are going to be more useful than others in adding a computer-mediated visual component to speech. More tomorrow.
