Generative versus contingent

During one of the technical papers sessions at the recent SIGCHI (Special Interest Group on Computer-Human Interaction) conference, two papers stood out for me because they represented perfectly opposite philosophies.

One paper looked for — and found — all the special cases where real-world tools could be effectively mimicked by putting your fingers on a multitouch surface (e.g., an iPad) as though you were using that tool. Some tools, like a tape measure, computer mouse, pen and eraser, can be mimicked very well on a tablet. Others, like scissors, cannot.

The authors found seven real-world tools that could be mimicked beautifully in this way, and after their live demo they got a big spontaneous round of applause from the audience.

But another paper took the opposite approach. The authors asked “what are the hand and finger gestures that are inherently powerful and expressive on a multitouch surface?” Essentially they came up with a grammar — a way of building an extremely large and extensible vocabulary of hand gestures that nobody had ever tried before.

The power of the first approach is that it is contingent: It works because there are particular real-world tools that happen to map onto finger positions on a multitouch screen. The power of the second approach is that it is generative: It builds from the inherent richness of what a hand and fingers can express when interacting with a flat surface.

The paper that relied on contingency was more of a crowd pleaser. But in the long run, my money is on the generative approach.
