Optimized non-visual thumb-chording

If you start from scratch, aiming to create the most efficient possible thumb-chording scheme that does not require looking at your SmartPhone, you can do a lot better than Braille. For example, let’s start with the fact that over 75% of all characters typed are one of SPACE, E, T, A, O, N, R, I, S, H, D, L, C, U (roughly in descending order of frequency).

We can use the entire SmartPhone screen to lay out a 5×3 thumb keyboard with the following arrangement:

E U C L T
O H S A I
D R N SPACE SPACE

This keyboard has already taken care of more than 3/4 of all the characters you’ll ever type (we include two ways to type SPACE because it’s by far the most frequently typed character).

We can then get roughly 75 additional characters by “chording” with both thumbs at once, with the left thumb on one of the three left-most columns, and the right thumb on one of the three right-most columns. That is far more than needed to type all the letters, digits, punctuation and special characters on a keyboard.
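As a sanity check on that count, here is a minimal sketch (in Python; the grid coordinates are my own notation, not from the post) that enumerates the two-thumb chords, assuming the left thumb may press any key in the three left-most columns, the right thumb any key in the three right-most columns, and the two thumbs land on different keys:

```python
# Enumerate two-thumb chords on the 3-row by 5-column grid.
ROWS = 3
left_keys = [(r, c) for r in range(ROWS) for c in range(0, 3)]   # three left-most columns
right_keys = [(r, c) for r in range(ROWS) for c in range(2, 5)]  # three right-most columns

# A chord is a (left key, right key) pair; both thumbs pressing
# the same physical key does not count as a chord.
chords = [(l, r) for l in left_keys for r in right_keys if l != r]

print(len(left_keys), len(right_keys), len(chords))  # 9 9 78
```

Under this reading the count comes out at 78, in the same ballpark as the 75 quoted above; either way, it is far more than the number of remaining characters.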

I think this keyboard is going to be very fast to type on. Since all the other characters after those first 14 are used much less frequently, having them as two-thumb chords won’t slow you down very much.

Not only does this arrangement allow non-sighted people to type quickly, but it can also be used by anyone who needs to type quickly into a cellphone without looking at it (e.g., during a meeting).

As is usual for keyboards for the non-sighted, there would be a tutorial mode that allows you to drag your thumbs over the keyboard to sound out each character.

Braille thumb-chording

About two years ago I wrote a series of posts here about using Braille on SmartPhones. Then just today I had a conversation with a visually impaired colleague, and discovered that there are still no fast text entry methods on SmartPhones if you are blind. Yes, there are text entry methods, but no fast ones. So I proposed the following approach, which I am going to try to implement on my handy dandy new Android phone.

Any Braille character consists of some combination of three dot positions in a left column, and some combination of three dot positions in a right column. For example, below are the 26 letters of the alphabet. Other six-dot combinations represent punctuation, numbers, and special characters:



I propose a thumb-chording method where your left thumb touches an on-screen button showing the dot combination for the left column, while your right thumb does the same for the right column. For characters where only one column has any dots, just use one thumb.
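To make the column splitting concrete, here is a small sketch using the standard Braille dot numbering (dots 1-3 run down the left column, dots 4-6 down the right); the bit-string notation is mine, not from the post:

```python
# Standard Braille dot numbers for a few letters (dots 1-3 form the
# left column, top to bottom; dots 4-6 form the right column).
LETTERS = {
    "a": {1},
    "c": {1, 4},
    "l": {1, 2, 3},
    "t": {2, 3, 4, 5},
}

def split_cell(dots):
    """Return the (left, right) column patterns as 3-character bit strings."""
    left = "".join("1" if d in dots else "0" for d in (1, 2, 3))
    right = "".join("1" if d in dots else "0" for d in (4, 5, 6))
    return left, right

for ch, dots in sorted(LETTERS.items()):
    print(ch, *split_cell(dots))  # e.g. "a 100 000", "t 011 110"
```

For “a” the right-column pattern is all zeros, which is exactly the one-thumb case described above.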

You can arrange these on-screen buttons as follows (in the image below, red represents buttons for the left thumb, blue for the right thumb):



This paragraph is for non-sighted people reading this: There are three rows, with five buttons in each row. Top row: L100 L010 L001 R111 R101. Middle row: L110 L011 unused R011 R110. Bottom row: L101 L111 R001 R010 R100.

Because there are so few buttons, each button is easy to find with your thumb, once you practice a bit (and that unused middle space could be used for meta-commands). In the actual app, you’ll be able to learn the button positions by dragging a thumb around, and the phone will sound out which button that thumb is over (and then you tap with your thumbs to actually type characters).
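Putting the layout and the encoding together, a chord decoder might look like the following sketch (the button labels follow the L/R notation above; the letter table is the standard Braille assignment, and everything else is a hypothetical illustration, not the actual app):

```python
# Decode a two-thumb chord, given as the button labels from the
# layout above ("L101" = left column with dots 1 and 3 raised).
BRAILLE = {  # standard dot numbers for a few sample letters
    "c": {1, 4},
    "n": {1, 3, 4, 5},
    "q": {1, 2, 3, 4, 5},
}

def dots_from_labels(left, right):
    """Turn labels like 'L101' and 'R110' into a set of Braille dot numbers."""
    dots = {i + 1 for i, bit in enumerate(left[1:]) if bit == "1"}
    dots |= {i + 4 for i, bit in enumerate(right[1:]) if bit == "1"}
    return dots

def decode(left, right):
    """Return the letter for this chord, or None if it is unassigned."""
    cell = dots_from_labels(left, right)
    return next((ch for ch, d in BRAILLE.items() if d == cell), None)

print(decode("L100", "R100"))  # c
print(decode("L101", "R110"))  # n
```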

Painting with Kinect

This evening I finally got around to implementing a simple paint program with the Microsoft Kinect. I’m not using it the way you’re “supposed” to. For one thing, I’m pointing it down at a table from above, rather than forward toward me. For another, I’m ignoring all of Microsoft’s cool body recognition software, and instead using it just as a 3D camera (that is, a camera that gives you both color and distance).

After an evening of hacking, my paint program (well, it’s really more of a finger painting program) is still pretty crude. There’s no color yet, just black and white, like drawing with charcoal. But it does detect pressure, and it has my three favorite operations: add white, add black, and smudge. And of course it’s multitouch. 🙂

You can see from the first image below that the program does a pretty good job of detecting pressure (note the progression from the “1” to the “4”):



One cool thing about using the Kinect is that I’m getting an enormous amount of hand-shape information from that 3D camera. If I change the way I hold my hand, then instead of drawing white, fingers that are touching the surface will draw black:



If I hold my hand yet a third way, fingers that are touching the surface will smudge the drawing:



It’s not yet clear to me what are good hand positions for each of these things. It’s a very rich space. For example, I can use the 3D camera image to figure out the difference between straight and curled fingers, knuckle up or knuckle flat, wrist up or down, or hand leaning to one side. There is such a wealth of possibility to choose from. And I haven’t even gotten started on gestures that use both hands, which will allow a far greater wealth of expressive possibilities.
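I have no access to the actual program, but the basic touch-and-pressure test underlying all of these operations, using the Kinect purely as a 3D camera pointed down at the table, can be sketched roughly as follows (the table depth and tolerance values are hypothetical):

```python
import numpy as np

TABLE_DEPTH_MM = 800.0     # hypothetical camera-to-table distance
TOUCH_TOLERANCE_MM = 12.0  # pixels this close to the table count as touching

def touching_mask(depth_mm):
    """Boolean mask of pixels where a fingertip is pressed onto the table."""
    near_table = np.abs(depth_mm - TABLE_DEPTH_MM) < TOUCH_TOLERANCE_MM
    above_table = depth_mm < TABLE_DEPTH_MM  # finger sits between camera and table
    return near_table & above_table

def pressure(depth_mm):
    """Crude pressure estimate: the area of the contact patch.

    A fingertip pressed harder flattens out, so more pixels fall
    within the touch tolerance of the table plane.
    """
    return int(touching_mask(depth_mm).sum())
```

Classifying hand posture (to choose between white, black and smudge) would then be a separate step on top of this, for example comparing fingertip depths against knuckle and wrist depths.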

This painting program is really just a way to get started on “pointing a Kinect down at a table” interactions. What I really want to do this summer is create ways to puppeteer interactive virtual worlds via hand gestures — from music to character animation to lighting and camera cues.

Looks like it’s going to be a fun summer. 🙂

The fire this time

The world didn’t come to an end today after all, thereby sparing most of us a whole lot of inconvenient fire and brimstone.

In celebration of not having to go to hell, it seems like a good day to revisit an earlier post, in which I had talked about a more hopeful sort of fire. In particular, Sharon recently wrote the following comment on my post about education entitled Lighting a Fire:

“As someone who studies games for learning, what would you say is the fire triangle for gaming? Does the concept of the fire triangle help relate gaming to learning? I got to thinking about this after reading this post because when I was a kid I used to love playing games (especially board games). I couldn’t get enough of them. As an adult I find that I don’t usually have a lot of patience for them. Now I get the kind of intellectual satisfaction that I used to get from playing games from constructing and debugging computer programs (for a similar amount of intellectual effort), and the payoff in relevance and meaning is a lot higher. I’d be interested to hear your thoughts.”

It’s a very thoughtful comment. In my post I had related the three ingredients for lighting a fire to the three preconditions for learning as follows:

Fuel: Intellectual curiosity

Oxygen: A sense that what is learned is relevant or meaningful

Heat: The excitement that comes from true learning

It seems to me that this all translates directly into how games relate to learning, and why that relationship can change as one grows up. My experience with games is quite similar to Sharon’s. When I was a kid I played a lot of games, and now most of that energy has shifted to creating things on my computer. In fact, when I try to play computer games, I usually get an overwhelming urge to stop playing and write a computer program instead.

I suspect that this is because playing a game is active (as opposed to seeing a movie or a play, which is passive). If I’m going to be doing something that involves actively making choices, I generally prefer to be making choices that come from me — that really represent who I am — as opposed to just the illusion that they come from me (which is what a commercial computer game generally offers).

I would argue that the fire triangle of learning describes this situation perfectly. When we are children, we have a greater ability to learn from games — because children are veritable learning machines, and they are able to learn from any activity that involves making choices.

But as we get older, we generally lose much of this childlike ability to learn from such externally guided interactions. Yet many of us still want that excitement that comes from true learning. So we turn to a kind of game that is uniquely relevant to who we are. In the case of Sharon or myself, that game can be programming.

When we grownups feel that hunger to learn, we still require the same three ingredients for learning: (i) intellectual curiosity, (ii) a sense that what is learned is relevant or meaningful, and (iii) the excitement that comes from true learning. But since we can no longer get that spark from externally defined games, we create our own games.

Programming, like any of the creative arts — from sculpture to songwriting to keeping a daily blog — is at its core a kind of game for learning.

Writing in the dark

I was at a wonderful concert this evening, in which I wanted to write down my thoughts about what I was experiencing. But I wanted to do this in such a way that I wouldn’t distract the people around me.

I realized that what I really wanted was some sort of technology that would allow me to write things down in a way that was completely invisible to the people around me.

In such a situation, writing on a standard SmartPhone or similar PDA is completely inappropriate. No thoughts you have during a shared experience are so important that they justify that annoying glowing rectangle which distracts other audience members and takes them out of their own experience of what is happening onstage.

I am coming up with some interesting ideas for how to do this right, which I will describe here when they are fully developed. 🙂

Those few moments

Having just experienced a particularly whirlwind week — a very nice week, but a whirlwind week nonetheless — one that was split up into multiple experiences with multiple casts of characters, I realize more acutely than usual how much my peace of mind depends upon grabbing those few moments (sometimes those few precious moments) when there is nobody around but myself.

Just as food needs to be digested, so do experiences. We need those moments of quiet reflection to sort it all through, to assess what has happened, what dramatic changes have just taken place in these latest scenes of one’s life. Without the little pauses between the action, our moments can tend to run together into one undifferentiated blur.

The next time you are tempted to fit in that one extra meeting, to go to one more place, or to squeeze in just one more activity, it might be good to remember that the soul, like the body, also needs its rest.

It is certainly something that I would do well to keep in mind more often than I do. 🙂

Hero’s journey Mad-Libs

Yesterday I saw a presentation by an enthusiastic improvisational theatre troupe that calls itself “Story Pirates”. They work with little kids, helping those kids to build confidence in their creativity. The particular way that they work is to convert stories from kids into theatre pieces, which they then act out, with verve, humor and fun.

In yesterday’s presentation, they worked with the kids to create a really fun little seven-minute epic, which ended up as something like this:

“Shiny Pants was sad because he was incomplete — he needed shiny shoes. One day he meets a friend, Shiny Shirt. They decide to go together to find Shiny Shoes. They run into Toby the Witch, who tells them that to find the Shiny Shoes, they will need to journey to Shiny Island. On the way they run into the dreaded Toilet Paper Mummy, but they overcome this obstacle, make it to the island and find the Shiny Shoes. In the end they realize that they are happy, not because they found the Shiny Shoes, but because they found each other — they are now a family.”

When I saw a demonstration of the troupe in action, though, what I saw was more like a kind of Mad Libs game — the actors would ask the kids such questions as “What’s the name of the hero?” and then make a story around the answers. Although the kids thought they were creating a story, everything but the specific names actually came from the improv troupe.

As I thought about what this underlying story was, I realized it was the classic Hero’s Journey, as described by Joseph Campbell in The Hero with a Thousand Faces. Here is my diagram of this underlying story:


What the kids provided was just a naming of the pieces of this diagram:



Hopefully, in their extended workshops the Story Pirates bring kids in on this underlying process. Because that’s what it will take to really help kids learn how to be great storytellers.

Apocalypse now

All over New York City this past week, at bus stops and various random places, I’ve been seeing advertisements explaining that the world is going to end on May 21. It seems that folks more clever at numbers than you and I have calculated that date as the precise day of the Second Coming (here’s the back story).

I haven’t done anything in response to this because, frankly, I’m a procrastinator. But now that we’re only five days from the Apocalypse, I feel perhaps it’s time to adjust my thinking to factor in the whole, you know, end of the world thing.

For example, I now finally understand why Broadway shows have no-refund-or-exchange policies. Imagine how business on the Great White Way would suffer if everyone started refunding their tix. And what about those unscrupulous folks who keep selling those orchestra seats to suckers who haven’t heard about the End of Days? I mean, how stupid would you feel if the end of the world came and you were the one left holding useless tickets to “The Lion King”?

And I’m very curious to learn how all of those Hindus and Buddhists will react when they wake up the morning of May 22 only to find that, well, there is no May 22. Are they going to get the word from the Big Guy himself? Or will He cop out and send some angel or seraph to give the bad news?

I’m trying to picture the scene: Billions of assembled Buddhists and Hindus — men, women and children — all standing around nervously. They’ve heard something’s up. Suddenly a booming celestial voice starts to explain: “Um, We don’t really know how to tell you this folks, but all this time you’ve been betting on the wrong horse. Tough break.”

At which point somebody will probably speak up, maybe a Buddhist: “But what about all our good works?” this Buddhist guy will protest. “You know, our own version of righteousness. Feeding the poor, easing suffering as a part of our religious practice?”

“And the cows,” some Hindus will chime in at this point. “Don’t forget about the cows.”

“Yeah, what about the cows,” the Buddhists will agree, nodding vigorously. “Those guys have the whole cow thing. That’s gotta count for something, right?”

“Afraid not.” The celestial voice is starting to grow impatient. “Look, we’ve got an Apocalypse to run here. There’s lots of details, things to sort out, you wouldn’t understand. Do you people have any idea how complicated an operation like this is? Jesus! Now why can’t you all be good sports and just go to hell?”

Rich interfaces

We are just about to enter, for the first time, an age of rich human/computer interfaces. It is true that advanced techniques, beyond the impoverished model of Windows/Icons/Mouse/Pointer, have existed for years. What has not been true is that millions of ordinary folks at home have had access to them.

Video cameras haven’t done the trick. If you try to use a computer to figure out what your fingers are doing by pointing a video camera down at your hands, you run into all kinds of problems. Skin color variations, lighting changes and depth ambiguity all work against you. A technique that works just fine in the morning might fail miserably once the afternoon sun starts streaming through your window.

But now that the Kinect provides a cheap 3D camera available to millions (and soon to get both cheaper and better, especially when competitors start jumping in), it’s easy to write software that tracks fingers accurately and reliably.

The big question is now not how, but what. Will we end up using the full richness of our hands and fingers when we use computers, or will we (collectively) cop out, and end up with some boring variant on the pinch gesture?

My hope is that we will take seriously the language-building powers of small children. There is strong linguistic evidence that natural language is actually created by small children (younger than six years old), rather than by adults. Among the evidence to support this is work by Ann Senghas on the creation of Nicaraguan Sign Language by small children in a very short amount of time.

There has also been work by Derek Bickerton and others on how small children spontaneously created Hawaiian Creole, a language as complex as any other natural language, in only a few years.

The possibility of tapping into this capability of small children makes Kinect and similar technologies particularly exciting as potential platforms for human/computer interfaces far richer than any we have ever seen.