Archive for May, 2011


Ithneon

Tuesday, May 31st, 2011

noun, adjective, ith·ne·on, ith·ne·on·ic.


The fundamental particle of scientific self-importance. Literally, an acronym for “It has not escaped our notice”.

Evidence of this particle is generally found near the end of articles in scientific journals, as a way to signal that the authors’ contribution to the field will shake the foundations of reality, utterly change all life as we know it, rip open the very fabric of the Universe itself, and forever establish the authors as veritable Gods, deserving of awed worship by every man, woman and child on the planet.

A single particle of ithneon suffices to convey this message with just the right tone of sincere humility.

Origins and preferred usage:

Ithneon was first discovered in an article by James Watson and Francis Crick proposing the double helical structure of DNA (Nature 171: 737-738 (1953)). Near the end of that article the authors state:

“It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”

Since then, various ithneon particles have periodically been spotted. For example, near the end of a recent article by Dongying and Martin Wu, Aaron Halpern, Douglas Rusch, Shibu Yooseph, Marvin Frazier, Craig Venter and Jonathan Eisen, the authors state:

“It has not escaped our notice that the characteristics of these novel sequences are consistent with the possibility that they come from a new (i.e., fourth) major branch of cellular organisms on the tree of life.”

Similarly, near the end of a recent article in American Scientist entitled “The Origin of Life”, James Trefil, Harold J. Morowitz and Eric Smith state:

“It has not escaped our notice that the mechanism we are postulating immediately suggests that life is widespread in the universe, and can be expected to develop on any planet whose chemistry resembles that of the early Earth.”


Ithneon is clearly a powerful particle in Nature. Its trajectory is capable of describing, among other things:

• how all life replicates and evolves;
• a fourth fundamental branch of life we never even knew about;
• life on other planets.

Not bad for a particle.

Death of the laptop

Monday, May 30th, 2011

Playing around with the Kinect, I realize that it’s only a matter of time (and not that much time) before a 3D camera will be built into something like the iPad — although it might not be an iPad, but rather some roughly equivalent tablet by Samsung, Toshiba, HP, or whomever.

Such a 3D camera will be able to do far more than detect multiple finger touches. It will be able to see the entire position of your hands and your fingers — not just touches upon a surface but gestures in the air above your tablet, or perhaps over the tabletop in front of your tablet.

And when that happens, the laptop computer as we know it may no longer serve any purpose. After all, an input system that detects not just touches but the actual movement of your fingers in the air can be far more responsive than any mere mechanical keyboard. Such an input device could even correct errors by figuring out from your finger movement which key you had intended to type.

Tablets sporting 3D cameras might be just around the corner. And the days of the laptop computer may be numbered.

Last night near Le Pont Mirabeau

Sunday, May 29th, 2011

Last night near Le Pont Mirabeau
While I was in a fitful sleep
You came to me within a dream
      And spoke as though you were alive.

You looked thin but not too bad
That is, all things considered. And
We talked a while of nothing much.
      It was good to hear your voice.

Conversation turned to visions,
Ruins of cities yet to come,
For what we love must go away
      And what we build will fall apart.

I watched in silence, in my dream,
So much sadness in your tale,
For what we love must go away
      But it was good to hear your voice.

Mood maps

Saturday, May 28th, 2011

Sometime in the not too distant future we will have the technological means to monitor our mood throughout the day. And once that happens, people will start correlating those moods with everything else in their lives, using the same sorts of analytics currently used by major internet search engines.

And surprises will emerge. These surprises may include the mood-changing effects of encountering certain people in the course of the day, eating certain foods, getting particular kinds of exercise, walking through rooms with blue-colored walls, or just slipping on a fresh pair of socks.

We’ll all be able to summon up the “Google map” of our own psychic make-up, charting the influences upon that psyche. People will start to be able to fine-tune their day, to soothe their neuroses and optimize their moment-by-moment personal experience of life.

Which may not necessarily be all to the good. Without those ragged edges, those rude surprises, all the odd little threads of psychic distress that dangle off the buttonholes of our existence, we might find ourselves just a little less creative, a bit less prepared for the unexpected.

Of course, my opinion on this might change later today.

Depending on my mood. :-)

“School for Lies” – a review

Friday, May 27th, 2011

This week I saw a play by David Ives. A fascinating glimpse into the lives of characters straight out of Molière, the play is very funny, with an air of parody of parody — a farce, and yet it’s very “meta”. If you parse the levels of the humor, in a way you’re seeing two distinct, divergent sorts of play.

This weaving of two levels keeps us guessing, when dialog may be, in fact addressing, not the situation up on stage, but us, the audience. It’s all the rage, this splitting of each character in two. It’s very entertaining — very “new”. But sometimes it came off as sort of rude, like when characters surprisingly said “dude”. The shock of it will clearly make us laugh, and yet it kind of splits the thing in half.

Like Molière, the playwright uses rhyme to keep words flowing, shifting on a dime. The dialog, composed of rhyming couplets, keeps things fast, the mood is very up. Let’s take a moment though to really question whether it’s the power of suggestion that makes us think that making things “poetic” (while keeping all activity frenetic), equates to wit and makes it all seem new. Hell, even a mere blogger’s play review can do the same. Oh well, whatever. Hey, all in all I really loved the play.

Sand painting

Thursday, May 26th, 2011

Vi Hart came around the lab today and tried out the Kinect-based finger painting program I made a few days ago.

She started drawing shapes, and I soon joined in (it’s not just multitouch, it’s also multi-user). Within about a minute the two of us had created the following evocative image:

Several people told us it looks a lot like sand painting. It’s hard to disagree.

Fast thumb-typing for one or two hands

Wednesday, May 25th, 2011

Several people commented that it would be good to have a fast thumb-typing method for SmartPhones that can work with a single hand, in case you don’t have two hands free.

The following variant seems to do the trick: The 14 fast characters are the same, but the other characters are “typed” not by chording, but rather by sliding off the original virtual key in one of the principal four directions.

Click on the image below to link to a Java applet that shows the idea:

Using this approach, you can pick up a lot of extra speed by using both thumbs, but everything still works (albeit more slowly) with a single thumb.
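The slide-off decoding can be sketched in a few lines of Python. This is a hypothetical illustration, not the applet’s actual code: the example key, the character assigned to each slide direction, and the pixel threshold are all invented here.

```python
# Hypothetical decoding of the slide-off scheme described above: a
# plain tap types the key's fast character, while sliding off the key
# in one of the four principal directions before lifting selects one
# of four alternate characters. The assignments below are invented.
SLIDE_CHARS = {
    'E': {'up': 'W', 'down': 'X', 'left': 'Q', 'right': 'F'},
}

def decode(key, dx, dy, threshold=20):
    """Classify a touch on `key` by its total drag (dx, dy) in pixels;
    y grows downward, as in typical screen coordinates."""
    if abs(dx) < threshold and abs(dy) < threshold:
        return key                        # plain tap: the fast character
    if abs(dx) >= abs(dy):                # dominant axis picks direction
        return SLIDE_CHARS[key]['right' if dx > 0 else 'left']
    return SLIDE_CHARS[key]['down' if dy > 0 else 'up']

print(decode('E', 3, -2))   # 'E' -- a plain tap
print(decode('E', 40, 5))   # 'F' -- slid off to the right
```

Because taps and slides are disjoint gestures, the single-thumb case degrades gracefully: every character stays reachable, just without the two-thumb parallelism.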

Optimized non-visual thumb-chording

Tuesday, May 24th, 2011

If you start from scratch, aiming to create the most efficient possible thumb-chording that does not require looking at your SmartPhone, you can do a lot better than Braille. For example, let’s start with the fact that over 75% of all characters typed are one of SPACE, E, T, A, O, N, R, I, S, H, D, L, C, U (roughly in descending order of frequency).

We can use the entire SmartPhone screen to lay out a 5×3 thumb keyboard with the following arrangement:


This keyboard has already taken care of more than 3/4 of all the characters you’ll ever type (we include two ways to type SPACE because it’s by far the most frequently typed character).

We can then get an additional 75 characters by “chording” with both thumbs at once, with the left thumb on one of the three left-most columns, and the right thumb on one of the three right-most columns. 75 characters is far more than needed to type all the letters, digits, punctuation and special characters on a keyboard.
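Here is a rough sketch of enumerating those chords. The grid coordinates and the exclusion rule are my assumptions, not the post’s: if the only chords ruled out are ones where both thumbs would press the very same key, the count comes out a little above 75, so presumably a few awkward shared-middle-column combinations are also excluded in practice.

```python
# A sketch of counting the two-thumb chords. Coordinates are
# (row, column) on an assumed 3-row by 5-column grid; the only
# exclusion rule here is that both thumbs cannot press the same key.
ROWS, COLS = 3, 5

def chords():
    """All (left key, right key) pairs: left thumb on the three
    left-most columns, right thumb on the three right-most."""
    left = [(r, c) for r in range(ROWS) for c in range(3)]
    right = [(r, c) for r in range(ROWS) for c in range(2, COLS)]
    return [(lk, rk) for lk in left for rk in right if lk != rk]

print(len(chords()))   # 78 under these assumptions
```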

I think this keyboard is going to be very fast to type on. Since all the other characters after those first 14 are used much less frequently, having them as two-thumb chords won’t slow you down very much.

Not only does this arrangement allow non-sighted people to type quickly, but it can also be used by anyone in a situation where you need to type quickly into your cellphone without looking at it (e.g., during a meeting).

As is usual for keyboards for the non-sighted, there would be a tutorial mode that allows you to drag your thumbs over the keyboard to sound out each character.

Braille thumb-chording

Monday, May 23rd, 2011

About two years ago I wrote a series of posts here about using Braille on SmartPhones. Then just today I had a conversation with a visually impaired colleague, and discovered that there are still no fast text entry methods on SmartPhones if you are blind. Yes, there are text entry methods, but no fast ones. So I proposed the following approach, which I am going to try to implement on my handy dandy new Android phone.

Any Braille character consists of some combination of three dots on a left column, and some combination of three dots on a right column. For example, below are the 26 letters of the alphabet. Other six-dot combinations represent punctuation, numbers, and special characters:

I propose a thumb-chording method where your left thumb touches an on-screen button showing the dot combination for the left column, while your right thumb does the same for the right column. For characters where only one column has any dots, just use one thumb.

You can arrange these on-screen buttons as follows (in the image below, red represents buttons for the left thumb, blue for the right thumb):

This paragraph is for non-sighted people reading this: There are three rows, with five buttons in each row. Top row: L100 L010 L001 R111 R101. Middle row: L110 L011 unused R011 R110. Bottom row: L101 L111 R001 R010 R100.

Because there are so few buttons, each button is easy to find with your thumb, once you practice a bit (and that unused middle space could be used for meta-commands). In the actual app, you’ll be able to learn the button positions by dragging a thumb around, and the phone will sound out which button that thumb is over (and then you tap with your thumbs to actually type characters).
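To make the chording concrete, here is a small sketch (not the planned Android app) that builds the chord-to-letter table from the standard Braille dot numbering, using the same three-character dot notation as the button description above.

```python
# Decode a two-thumb Braille chord into a letter. Each thumb reports
# its column as a 3-character dot pattern ("100" = top dot only):
# the left column holds Braille dots 1-3, the right column dots 4-6.

# Standard Braille dot numbers for a-z.
BRAILLE_DOTS = {
    'a': '1', 'b': '12', 'c': '14', 'd': '145', 'e': '15',
    'f': '124', 'g': '1245', 'h': '125', 'i': '24', 'j': '245',
    'k': '13', 'l': '123', 'm': '134', 'n': '1345', 'o': '135',
    'p': '1234', 'q': '12345', 'r': '1235', 's': '234', 't': '2345',
    'u': '136', 'v': '1236', 'w': '2456', 'x': '1346', 'y': '13456',
    'z': '1356',
}

def columns(dots):
    """Split a dot-number string into (left, right) column patterns."""
    left = ''.join('1' if str(d) in dots else '0' for d in (1, 2, 3))
    right = ''.join('1' if str(d) in dots else '0' for d in (4, 5, 6))
    return left, right

# The chord table: (left pattern, right pattern) -> letter.
CHORDS = {columns(d): ch for ch, d in BRAILLE_DOTS.items()}

print(CHORDS[('100', '000')])   # a: dot 1 only, left thumb alone
print(CHORDS[('110', '110')])   # g: dots 1, 2, 4 and 5
```

In the real app each pattern would simply be the on-screen button the corresponding thumb is tapping.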

Painting with Kinect

Sunday, May 22nd, 2011

This evening I finally got around to implementing a simple paint program with the Microsoft Kinect. I’m not using it the way you’re “supposed” to. For one thing, I’m pointing it down at a table from above, rather than forward toward me. For another, I’m ignoring all of Microsoft’s cool body recognition software, and instead using it just as a 3D camera (that is, a camera that gives you both color and distance).

After an evening of hacking, my paint program (well, it’s really more of a finger painting program) is still pretty crude. There’s no color yet, just black and white, like drawing with charcoal. But it does detect pressure, and it has my three favorite operations: add white, add black, and smudge. And of course it’s multitouch. :-)

You can see from the first image below that the program does a pretty good job of detecting pressure (note the progression from the “1” to the “4”):
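A depth-based touch-and-pressure test like this can be sketched as follows. This is a minimal illustration, not the actual program: the calibrated table depth and the touch band are invented numbers, and a real version would apply the test per pixel across the whole depth image.

```python
# Minimal sketch (not the actual program) of touch and pressure
# detection for a downward-pointing 3D camera. A depth reading just
# above the calibrated tabletop counts as a touch; pressing harder
# flattens the fingertip closer to the table, which reads as higher
# pressure. Both constants are invented for the example.
TABLE_DEPTH = 1000.0   # mm from camera to tabletop (assumed calibrated)
TOUCH_BAND = 15.0      # mm: readings this close to the table are touches

def is_touch(depth_mm):
    """True if this depth reading is a fingertip on the surface."""
    return TABLE_DEPTH - TOUCH_BAND < depth_mm < TABLE_DEPTH

def pressure(depth_mm):
    """Pressure estimate in 0..1: closer to the table = harder press."""
    if not is_touch(depth_mm):
        return 0.0
    return (depth_mm - (TABLE_DEPTH - TOUCH_BAND)) / TOUCH_BAND

print(is_touch(990.0))             # True: 10 mm above the table
print(round(pressure(990.0), 2))   # 0.33: a light press
```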

One cool thing about using the Kinect is that I’m getting an enormous amount of hand-shape information from that 3D camera. If I change the way I hold my hand, then instead of drawing white, fingers that are touching the surface will draw black:

If I hold my hand yet a third way, fingers that are touching the surface will smudge the drawing:

It’s not yet clear to me which hand positions are best for each of these things. It’s a very rich space. For example, I can use the 3D camera image to figure out the difference between straight and curled fingers, knuckle up or knuckle flat, wrist up or down, or hand leaning to one side. There is such a wealth of possibility to choose from. And I haven’t even gotten started on gestures that use both hands, which will allow a far greater wealth of expressive possibilities.

This painting program is really just a way to get started on “pointing a Kinect down at a table” interactions. What I really want to do this summer is create ways to puppeteer interactive virtual worlds via hand gestures — from music to character animation to lighting and camera cues.

Looks like it’s going to be a fun summer. :-)