Second Second Life

My post yesterday, asking what would happen to our body image as people move their physical existence into the virtual world, treated this as a far-off Sci-Fi possibility. But in fact this is a question that has relevance in the here and now.

In various recent discussions I’ve had with friends and colleagues, the idea has continued to surface that the acquisition of Oculus Rift by Facebook has a specific purpose: to create a modern spin on Second Life.

Unlike Linden Lab’s original creation, which never really took off as a mainstream product (although it did capture the imagination of many), Facebook is about as mainstream as it gets. Also, the biggest problem in Second Life (as well as its predecessors, such as “The Palace”) — what to do once you get there — already has many ready answers in the Facebook universe.

In a possible Facebook reboot of the concept, you would still be able to trade news, photos and quips with your friends, except now you could be doing these things while interacting with your friends’ 3D avatars.

Since people also use Facebook for serious things, it will be interesting to see what sorts of avatars will be created. Appearing as a ten-foot-tall, hot-pink, one-horned panda bear may be fun, but it might not be the best strategy for closing a business deal.

As we move our public selves on-line, we may very well end up opting for something distinctly human in appearance, even if that’s less fun.

Body image in the virtual world

Let us say, hypothetically, that those predicting the Singularity are correct, and one day our brains are all uploaded into computers. Sort of like “The Matrix”, but without that pesky Agent Smith and his friends using us as batteries.

At that point, we would presumably experience our bodies only in a virtual sense. Our faces, hands, feet and other bodily parts would exist in our minds only as cybernetic simulacra of themselves.

I would be curious to learn whether, in such a world, our body image would drift over time. Would we allow ourselves to become translucent, to fly above the treetops, to teleport instantly between locations? Or would our uploaded brains reject such options, or any reality that radically deviates from the last several million years of evolutionary development?

I realize that in shared virtual worlds used for entertainment, such as Linden Lab’s “Second Life”, the laws that govern our physical bodies are suspended on a routine basis. Yet we don’t actually live in those worlds — they do not encompass the entirety of our sensory experience.

If we were to migrate our existence entirely into cyberspace, just how far would our virtual selves drift from the sensed experience of this everyday reality? Or would our brains ultimately reject radical change, opting instead for the biologically evolved familiar?

Blue greenhorn

Continuing the two word challenge (see my April 30 post), this time my friend specified the words in the title of this post.

The challenge was the same — to spin those two words into a tale. Below is the story I came up with.

-KP

_______
 

“Blue,” he said, “before you even ask.”

“Is that your favorite color?”

He shook his head. “My mood. Moods are colors,” he explained, “and mine is the color blue.” He stared into his whiskey glass.

“Is there anything I can do to cheer you up?”

Turning his attention from the drink in his hand, he took a good look at her. “You’re pretty.”

She smiled. “I’m glad you noticed.”

“So how come you’re talking to me?”

“I’ve been watching you from across the bar, and you look like a man who could use some cheering up.”

He put down the glass and turned to face her. “I think it’s working. But why pick me, with all this collective misery to choose from?” He looked around the bar.

“These others, they’re old pros at being miserable.”

“But not me?”

“No, not you. You’re a greenhorn. I can tell these things.”

“You have magical powers?”

“Just one. I’m a gal. We’re good at that kind of stuff.”

“Maybe,” he shook his head, “or maybe we guys are just bad at that kind of stuff. Still, I’m happy to report that I feel better already.”

“Ah,” she smiled, “validation of my magical powers.”

“Do you have any other magical abilities? Could you actually guess my favorite color?”

She laughed. “I’ve been known to read minds, but I try not to abuse that particular power. My boyfriend says it gives me an unfair advantage.”

He picked up his glass, and regarded it silently.

“So?” she asked, “What’s your favorite color?”

He stared into his drink for a long time before answering. “Blue.”

Rewind / replay

Do cars have to look like cars? Do can openers necessarily resemble can openers? I’ve been pondering this question recently.

Humans, for at least the last 30,000 years or so, have not changed in any meaningful evolutionary sense. There has been plenty of micro-evolution these last 300 centuries, but that has mostly consisted of shuffling around and expressing genes we already had in our DNA.

If you were to place some humans into an alternate Earth-like environment — same brains and bodies, same gravity, nutrients and weather conditions — and let them culturally evolve over several hundred centuries, would they end up re-creating essentially the same technological artifacts?

Is there something inherent in the requirements for an automobile that forces it to converge on a narrow range of designs? And what about trains and airplanes, motion pictures, clothing, coat hangers, room furniture, refrigerators?

In science fiction stories we can see many alternatives to the designs converged upon by modern society. And yet those alternatives reflect only their authors’ imaginings, not the actual artifacts of any known civilization.

There are so many pragmatic and utilitarian constraints on human made objects that I wonder — was the design of the automobile, in our current state of technological development, inevitable? Are all versions of sound recording destined to progress from wax cylinder to vinyl record to CD to bits in the Cloud, if civilization were to rewind and replay again?

Hikaru meets Hiroshi

Yesterday at the MIT Media Lab, where I’ve been visiting, we got a visit from George Takei, who has been going around the world with a TV crew to showcase cool research. My brilliant colleague Hiroshi Ishii showed him his group’s spatially transforming inFORM dynamic shape display, and George had a great conversation about the research with Hiroshi and his Ph.D. students.

As I watched this, I couldn’t help thinking about Lieutenant Hikaru Sulu, the senior helmsman on the U.S.S. Enterprise, an iconic role famously played by Takei. When Star Trek originally aired, a Japanese character who was a respected scientist and a man of peace was unheard of on American television — it was truly a groundbreaking role, which defiantly smashed prejudices.

And now here he was — the man who had portrayed a brilliant Japanese scientist hero for a generation of Americans — encountering the exciting research of an actual brilliant Japanese scientist hero in today’s America. Something about this made me very happy.

I had a nice conversation with George afterward. He really liked the way Hiroshi’s work goes far beyond mere pixels on a screen, to interfaces that literally transform in shape.

As we stood next to one of Hiroshi’s spatially transforming interfaces, I told George “I probably shouldn’t say this, but seeing you interact with this futuristic 3D device, I just keep thinking ‘Space — the final frontier.'”

George laughed and said, “I can hear the theme music.”

Acceptable magic

A friend just pointed me to this wonderful 1993 paper by Bruce Tognazzini on applying principles of stage magic to user interface design.

As it happens, I’m in the midst of designing a novel kind of user interface that makes extensive use of “magic”. That is, it relies on the person who sees the interface buying into a consistent pseudo-reality that is quite different from the actual computation going on under the hood.

Needless to say, I found Tognazzini’s paper very relevant. In fact, he mentions a classic book on stage magic — “Magic and Showmanship: A Handbook for Conjurers,” by Henning Nelms — which I immediately ordered on Amazon (I believe it is arriving tomorrow).

It’s a tricky space, because it all needs to be consensual. You can’t simply lie to people, but you can create for them a fantasy that ultimately serves their purpose. After all, people don’t want or need to be reminded that Rick Blaine in “Casablanca” is really Humphrey Bogart, reciting words written by various other people.

Rather, viewers can only go on the deep psychological journey the film offers if they accept Bogie as Rick, and Ingrid Bergman as Ilsa Lund. In this case, truth can be received only through fiction.

In a sense, everything we do with computers exists within an analogous illusion. At bottom, the binary logic by which computers work is vastly user-unfriendly. The fantasy offered by the computer/user interface is not merely desirable, but necessary.

So what is the right level of acceptable magic in computer/human interfaces? Maybe the only way to answer this question is to try out your tricks on an actual audience. That may be the only way to understand when they feel lied to, and when they appreciate the rabbit coming out of the hat.

Generative versus contingent

During one of the technical papers sessions at the recent SIGCHI (Special Interest Group on Computer / Human Interfaces) conference, two papers stood out for me because they represented perfectly opposite philosophies.

One paper looked for — and found — all the special cases where real world tools could be effectively mimicked by putting your fingers on a multitouch surface (e.g., an iPad) as though you were using that tool. Some tools, like a tape measure, computer mouse, pen and eraser, can be mimicked very well on a tablet. Others, like scissors, cannot.

The authors found seven real-world tools that could be mimicked beautifully in this way, and after their live demo they got a big spontaneous round of applause from the audience.

But another paper took the opposite approach. The authors asked “what are the hand and finger gestures that are inherently powerful and expressive on a multitouch surface?” Essentially they came up with a grammar — a way of building an extremely large and extensible vocabulary of hand gestures that nobody had ever tried before.

The power of the first approach is that it is contingent: It works because there are particular real world tools that happen to map into finger positions on a multitouch screen. The power of the second approach is that it is generative: It builds from the inherent richness of what a hand and fingers can express when interacting with a flat surface.

The paper that relied on contingency was more of a crowd pleaser. But in the long run, my money is on the generative approach.

Insight from errors

I was having a discussion with a colleague this week about common errors kids make in math.

Here’s one that was used as an example:

Mistaking:

(A + B) / (C + D)

for:

A/C + B/D

It’s an elementary mistake — the two expressions are actually very different.

But not always. What if we considered just the class of numbers for which those two expressions always produce the same results?

That might turn out to be an interesting set of numbers in all sorts of ways.
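That set can actually be explored by brute force. Here is a quick sketch (my own illustration, not from the post) that searches small integers for quadruples where the “wrong” simplification happens to give the right answer, using Python’s exact rational arithmetic to avoid floating-point noise:

```python
from fractions import Fraction
from itertools import product

def matches(a, b, c, d):
    """True when the erroneous rewrite happens to be correct:
    (a + b)/(c + d) == a/c + b/d  (exact, via Fraction)."""
    if c == 0 or d == 0 or c + d == 0:
        return False  # skip undefined divisions
    return Fraction(a + b, c + d) == Fraction(a, c) + Fraction(b, d)

# Brute-force search over small integers.
solutions = [(a, b, c, d)
             for a, b, c, d in product(range(-5, 6), repeat=4)
             if matches(a, b, c, d)]

# A little algebra predicts the structure of this set: clearing
# denominators shows equality holds exactly when a*d**2 + b*c**2 == 0.
assert all(a * d**2 + b * c**2 == 0 for a, b, c, d in solutions)
print(len(solutions))
```

For example, (a, b, c, d) = (1, -1, 1, 1) works: (1 - 1)/(1 + 1) = 0 and 1/1 + (-1)/1 = 0. So the “interesting set” has a clean algebraic description — exactly the payoff the post is hinting at.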

And what if we were to play the same game with other erroneous equations?

One of those equations could just lead us to some actual math — and maybe that actual math might be very cool.

Human-compressed algorithms

During a discussion today about artificial intelligence, I had an odd thought.

People who study computer science know that any algorithm takes a certain minimum amount of space to write. Beyond that, you just can’t make the algorithm any smaller. Some algorithms are very simple and take up little space, whereas others are huge and complex, yet the description of any given algorithm can only be compressed so much, and no more.

But what if you used people to compress algorithms? Instead of writing out the algorithm in a formal program, so that a computer can execute it step by step, suppose you just describe the algorithm to another person, and then let them write the computer code?

The resulting description might be a lot smaller. Of course it’s not the same — we are now relying on human brains to fill in the missing bits. Fortunately, there are about seven billion human brains in the world, and many of them are quite good at this sort of thing.

For example, here is how I might describe a bubble sort to a friend: “Repeatedly go down a list, swapping pairs of items that are not in order. Stop when nothing is swapped.”

If my friend knows how to program, she can turn that description into code. She would need to add in lots of bothersome details (loop constructs, iteration variables, conditionals, array declarations, etc.) — none of which were in my original description. But any decent programmer would know how to do that.
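The two-sentence description really is the whole algorithm; here is one way a programmer might expand it, with the bothersome details filled in (a sketch, one of many reasonable expansions):

```python
def bubble_sort(items):
    """Repeatedly go down a list, swapping pairs of items that are
    not in order. Stop when nothing is swapped."""
    items = list(items)  # work on a copy
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:  # pair out of order?
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```

Notice how much of the code — the `while` loop, the index variable, the copy — is exactly the kind of detail the human description compresses away.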

Wouldn’t it be cool if this led to a simpler way to describe algorithms?

Turing test for cities

I had a lovely time spending a week in Toronto. The week before that, I had a nice visit to Stuttgart, Germany. Now I am back in NYC.

The cultural difference between Stuttgart and NY was obvious. It’s not a question of “better” or “worse”, just an observation that the cultures are strikingly different. Even if there were no difference in language or accent, I would be able to very quickly tell which was which.

For example, in Stuttgart people follow rules quite strictly — even in cases where those rules make no logical sense. It seems that the idea of following a rule is more important than the reality of its effect in any particular case. Another way of saying this is that even informal rules appear to operate with the force of law.

Of course NY is completely different. People here will follow a rule only if they think that rule makes sense. In fact, a group of people will often collectively — not just individually — agree to break a rule (or a law) if they think the situation warrants it.

To my mind Toronto feels a lot more like NY than like Stuttgart. People approach situations, well, situationally. As people negotiate the city, a lot of common sense flexibility and reasoning goes on, and I like that.

Of course there are cultural differences between NY and Toronto. So I wonder: how long would you need to hang out in one of the two cities before you could tell which culture you were in (assuming you couldn’t cheat and use clues like famous buildings or differences in spelling or pronunciation)?

Between NY and Stuttgart, this kind of “Turing test for cities” would be very easy to solve. Between NY and Toronto, I suspect it would take somewhat longer.

I wonder what would give it away first.