Future fashion

Humans from the Cro-Magnon age would probably be astonished at the appearance of modern humans walking about in public. Several tens of thousands of years of cultural advancement have resulted in a luxurious array of options for artificial outer skins.

We think nothing of walking down the street with today’s colorful choice of plumage: red or yellow or chartreuse, mixing and matching materials and styles as we like. We think nothing of pulling items out of the closet to choose our avatar for the day.

In on-line fantasy worlds, such as Second Life, people go much further. They walk about as giant cats or lizards, robots or ghostly wraiths, choosing an arbitrary appearance at will.

One would think that this might be a model for how fashion will advance after we are all wearing those cyber-enabled contacts or lens implants. But I am not so sure.

Second Life is, as its name suggests, meant to be an amusing alternative to real life, not a replacement for it. When you physically go about the world, you are always implicitly voting with your one and only body, your most truly precious and irreplaceable asset.

There is less room for fooling around not because of any limitation on technology, but because of the social and cultural implications of how you present your true self — or as true a self as one ever presents in public.

Even in that future time when we see each other as virtual versions of ourselves, there will be limits to how we will appear. These limits will not come from technology, but rather by our need to be taken seriously when it really matters.

Yet in that future reality there will be times during our day when we are really just out to have fun. In those moments, with the flip of a virtual switch, we may choose to slip into a virtual appearance that’s a little more fun.

Show and tell

Today I went to Jaron Lanier’s house, and we traded demos. He showed me his favorite demos on the HoloLens. Since Jaron was one of the key drivers of that project, it was particularly interesting to see which demos he liked the best.

The HoloLens is a magnificent piece of engineering. The way it tracks the world around you to superimpose 3D graphics, using three different technologies in tandem (a depth camera, an edge-tracking vision algorithm and an inertial sensor), is a thing of beauty.

I hadn’t seen the HoloLens since last summer, and I could really see that an entire team of people has been working hard to build demos for it. In general, everything now looks more polished and better thought out, and the interaction is much smoother and more intuitive than it was ten months ago.

In return, I showed Jaron pretty much the opposite demo. Not AR, but VR. No fancy cutting-edge hardware, but just a smartphone (with a pair of Wearality lenses). Not software written by a large team of designers and software experts, but just a VR-ready 3D modeler that I wrote myself in HTML5 in a few hundred lines of JavaScript, running in a web browser.

By analogy, Jaron was showing me his fancy Tesla, and I was showing him the little electric car that I’d built from scratch on weekends in my garage. We both had a great time.

More uses of failure

After building an entire optical simulation system yesterday, which showed me that my theory was wrong, I started to rethink what I was trying to do. Seeing where the rays of light actually went gave me a much better understanding of how such a system really operates.

So today I jumped back in with a new approach to the problem. Fortunately I already had the tools built from yesterday, so the new approach took much less time. And this new approach worked!

Whether it will actually work in the real world is a whole other question. There are issues of materials, manufacturing processes and tolerances to work through, so even if the basic mathematical model itself checks out, the thing might still not be practically buildable.

But now, thanks to the insight I got from trying and failing, I’ve managed to go from “theoretically, this won’t work” to “theoretically this could work just fine.” And that’s progress of a sort.

The uses of failure

As many have noted before, there isn’t a reputable scientific journal where you can report failures. Which is a shame, because failures are an extremely important part of scientific progress.

Today I spent much of the day testing a theory about optics. To do the required experiments, I needed to implement a particular kind of ray tracing program, as well as the math to support ray tracing to various sorts of curved surfaces.
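The blog doesn’t show the actual program, but the core of ray tracing to a curved surface is solving for where a ray hits that surface. As a hypothetical sketch (my own illustration, not the author’s code), here is the classic ray–sphere intersection in JavaScript, the simplest case of the kind of math such a tool would need:

```javascript
// Intersect a ray with a sphere, one simple kind of curved surface.
// o: ray origin [x,y,z], d: unit ray direction, c: sphere center, r: radius.
// Returns the distance t along the ray to the nearest hit, or null on a miss.
function raySphere(o, d, c, r) {
  // Solve |o + t*d - c|^2 = r^2, a quadratic in t.
  const ox = o[0] - c[0], oy = o[1] - c[1], oz = o[2] - c[2];
  const b = ox * d[0] + oy * d[1] + oz * d[2]; // half the linear coefficient
  const cc = ox * ox + oy * oy + oz * oz - r * r; // constant term
  const disc = b * b - cc; // discriminant (d is assumed unit length)
  if (disc < 0) return null; // ray misses the sphere entirely
  const t = -b - Math.sqrt(disc); // nearer of the two roots
  return t > 0 ? t : null; // null if the sphere is behind the ray
}
```

For example, a ray starting at (0, 0, −5) pointing along +z hits a unit sphere at the origin at distance 4. More general curved surfaces (aspheric lenses, for instance) just swap in a different intersection solver, which is presumably where most of the day’s math went.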

After several hours of coding and testing, I realized that my theory was wrong. But I also understood why it was wrong, and quite a bit else besides.

Not only that, but I now have a handy dandy suite of software tools for testing and visualizing all sorts of optical ideas. Which means that the next time I have some crazy theory about optics, I’ll just be able to jump in and test it, without having to build all the machinery first.

Someday one of those theories is going to be correct, and maybe even useful. Whatever that future discovery may be, it’s something I’ll be much more likely to find using my shiny new software laboratory, built on failure.

Grasping the future

Most people are born with two arms and hands, with each hand containing four fingers plus an opposable thumb. Those hands are a remarkably flexible and protean way of interacting with the world.

From the time we are little we develop a powerful sense of proprioception. You can generally reach for something with wonderful accuracy even when your eyes are closed. After all, your brain has spent years and years learning your body, in all of its possible configurations.

But as technology starts to allow physical experience to become more visually virtualized, and we begin to make ever greater use of unseen helper robots, it might be possible to tinker with this basic architecture. Not by changing our physical arms and hands, but by changing our mind’s perception of them.

We might find it convenient, for example, to reach across a room to pick something up. We wouldn’t actually be stretching our arms across the room, but the illusion that we are doing so might be achieved by a combination of cyber-modified vision and helper robots that can pick up distant objects and place them in our grasp.

Today’s technologies allow us to do “impossible” things all the time. We move our bodies across oceans in a matter of hours, chat with people on the other side of the planet, and jot down our thoughts in a way that is potentially readable by billions of people (just as I am doing now). Because we are so used to performing these miraculous feats, we don’t realize how remarkable they are.

Similarly, emerging technologies will eventually allow us to reach across a room and pick up an object fifty feet away. And one day kids will be born into a world where that sort of thing is commonplace. They will think nothing of it.

YouTube comments

In order for comments to show up on this blog, I need to approve the commenter. So if I’ve already approved a comment from you, then your subsequent comments will automatically be approved.

Which means that from time to time I get a comment from someone new. If a comment is not blatant spam, it’s usually a really cool comment, and I happily approve it. But every once in a while, not so much.

Just in the last day I got a comment that was truly offensive. Not so much because it was trying to be offensive, but because the person writing it was attempting to make a joke based on puerile ignorance. It was the sort of nasty uninformed comment whose validity would not have survived a three-second Google search.

And I didn’t approve it. Because this is, after all, a curated space, intended for respectful and informed discussion, and I don’t want that sort of ugly energy here.

That’s what YouTube comments are for.

Ideas and demos

Today I spent much of the day trading quips, ideas and demos with Vi Hart. And maybe a little scotch.

As the words and visuals went back and forth between us, I realized that she and I share a certain technique. We both look with great seriousness upon the concepts and inspirations that we first thought about as children, and use them in our work.

Of course that is not always an easy thing to do when you are a grownup, surrounded daily by the voices and thoughts of grownups. But it’s not impossible.

And I find that talking with Vi makes it a lot easier.

Great quote

I am staying this evening at the lovely home of my dear old friends Ted and Ellen in Palo Alto. When I first met them, long ago, they were living in New York City. Midtown Manhattan, no less.

One day Ted was offered a fabulous job in California that he really couldn’t refuse, and so they needed to leave New York. We were talking this evening about that time years ago when they were about to depart New York, and had marked the occasion by convening one last gathering with their good friends in Manhattan.

This evening I reminded Ellen of something she’d said to me on that day long ago. “How does it feel,” I had asked her then, “to be leaving New York City?”

She’d thought about it, and had responded with one of the most quotable lines I’ve ever heard. “When you leave New York,” she told me, “you’re not going anywhere.”

Diegetic prototyping as design methodology

Today, while attending a SIGCHI session, I learned of the term “diegetic prototype”, originally coined by David A. Kirby. It’s a concept with which I am very familiar, but I hadn’t actually known there was a name for it.

The basic idea is the “working example” of a future technology that you see in a science fiction movie. Some notable examples: the force field in Forbidden Planet, the Star Trek transporter, the robots in Star Wars, the flying skateboard in Back to the Future, the gestural interface in Minority Report and the interactive holographic displays in the recent Iron Man movies.

More than mere fictional constructs, these are aspirational objects meant to inspire audiences, a sort of stake in the cultural ground. They hold out the possibility, however remote, of a brighter and more exciting future here in the real world.

I am amazed that it has taken me so long to learn that there is a name for this method of approaching the future, since I have essentially structured my life around it. Since childhood I have generally thought in terms of imagining some exciting possibility for the future, prototyping it first in my mind as an attainable fantasy, and then going about the task of prototyping some version of it in reality.

I think one question that can be teased out here is the one of “how real is real enough?” From the point of view of Hollywood, it is sufficient that audiences see Tony Stark playing with his holographic display up on the big screen. But to me, diegetic prototyping is just a good first step toward making things happen in the real world.

Teleportation for telepresence robots

Yesterday I wrote about an absurd encounter with telepresence robots. But then today I had an intriguing conversation about those contraptions.

A friend was telling me about an artist she knew who had been urging people at this conference to attend the conference art show this evening. One of the people approached by this artist was “present” via one of those mobile telepresence robots.

As it happens, the gallery housing the art show is across the street from the conference center. So they ended up having a conversation about whether the robot would be capable of crossing the street to get to the gallery. Ostensibly this points to a limitation of the telepresence robots.

But suppose, I told my friend, that there were telepresence robots everywhere — much as we currently have electrical outlets everywhere. In that case, being “present” via telepresence robot would be a sort of superpower.

The telepresent person could instantly jump from the conference center to the gallery. In fact, they could instantly jump from anywhere to anywhere else.

It still wouldn’t be the same as really being there, but it might lead to some interesting new possibilities that we haven’t yet thought of.