Rearranging the furniture

We generally don’t pick out all of the furniture for our new home or apartment before we get there. After all, we might get it wrong.

Sure, there are software tools out there that let you do this sort of thing. You can place computer graphic versions of your bed, your sofa or your nightstand around the room, try different lighting arrangements, and see the result from various angles at different times of day.

Of course that is different from really being in the room. A picture of a room is not the same as being in the room itself.

Yet that might change. At some point you will take for granted the ability to slip on your lightweight hologlasses, which will let you have the experience of being in any room you like. Unlike today’s high end room-scale VR platforms, you will be able to wander around freely, unencumbered by wires or cables.

You might very well find yourself picking out the furniture you like, secure in the knowledge that there will be no unpleasant visual surprises when you walk into the room itself.

In fact, even after you are living somewhere, you might, on occasion, choose to pop on your hologlasses and virtually rearrange the furniture. The next time you come back into that room, unobtrusive robots will have already shifted around the physical objects in the room to match your new preference.

You might even store some favorite room arrangements, to fit your different moods and preferences. One day things are set up for studying, the next day for that dinner party you’ve been meaning to throw.

I can’t wait! 🙂

A timely wrinkle

Today I was with a group of people I’d never met before. One thing we all had in common was a love of science. As we talked, it gradually came out that we all had a particular cultural touchstone in common: Our love for A Wrinkle in Time.

In fact, we had all read it when we were little, and had been drawn by its mysteries into wanting to learn about math, science, physics, and all the various grown-up topics that Madeleine L’Engle ingeniously referenced in what was ostensibly a children’s story. At a moment when people are becoming very interested in all forms of virtual reality, her book seems especially relevant.

To me there was always something iconic about the scene where Mrs. Who and Mrs. Whatsit demonstrate how the “wrinkle in time” of the title actually works — so clear and child-friendly, yet so deep in its implications. Looking now from the perspective of adulthood, I understand that the story contains little actual science or math. It’s pretty much all suggested by metaphor.

But as a young boy, I became completely lost in the mystery and wonder of the Universe, as seen through the eyes of Meg and Charles Wallace Murry. Reading this book made me want to learn about science and math, so I could go on my own journey of mystery and wonder.

Which I think is a pretty good effect for a book to have on a child.

Who does that?

I am so relieved. Somehow I had been worried that during this last presidential debate, some body snatcher might temporarily replace the eighth grade bully on the Republican side of the stage with a convincing semblance of an adult candidate.

But that did not happen. On the right side of our TV screen we saw a calm, articulate, highly competent, well informed grown-up addressing the issues with wit and depth, expressing her views in clear detail and with consistent intellectual force.

On the left side of our TV screen we saw a petulant eighth grade bully. There is something unnerving about seeing a seventy year old would-be statesman whose game is pretty much on the level of a fourteen year old boy accustomed to beating up his classmates for their lunch money.

I felt bad for the millions of Trump supporters who had to watch that performance. I felt even worse for them when their standard bearer engaged in something even more disturbing: Suggesting that he might not accept defeat, and thereby attacking the electoral process itself.

As Hillary Clinton said tonight, in a different context: “I mean, who does that?”

Pure joy

I’ve been working on a math / programming problem the last few days. It has required me to work out lots of different sub-problems, and to implement various image-analysis algorithms.

I’ve needed to figure out how to compute curves from discrete points, do sub-pixel sampling, adapt a marching-squares algorithm, construct optimal histograms and then analyze them, create graphical visualizations of potential sources of deviation from the mean, and employ various other complementary techniques. Since errors are cumulative, every step has needed to be very precise, with very little wiggle room.
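
A couple of these steps can be sketched together. Here is a minimal marching-squares pass in Python — a hypothetical illustration of the general technique, not my actual code — which also uses linear interpolation along each cell edge to get sub-pixel accuracy:

```python
def marching_squares(grid, iso):
    """Return a list of ((x0,y0),(x1,y1)) line segments approximating
    the iso-contour of a 2D scalar grid at threshold `iso`."""
    segments = []
    rows, cols = len(grid), len(grid[0])
    for y in range(rows - 1):
        for x in range(cols - 1):
            # The four cell corners and their scalar values,
            # ordered counterclockwise around the cell.
            corners = [((x, y),         grid[y][x]),
                       ((x + 1, y),     grid[y][x + 1]),
                       ((x + 1, y + 1), grid[y + 1][x + 1]),
                       ((x, y + 1),     grid[y + 1][x])]
            crossings = []
            for i in range(4):
                (p, a), (q, b) = corners[i], corners[(i + 1) % 4]
                if (a > iso) != (b > iso):      # this edge crosses the contour
                    t = (iso - a) / (b - a)     # sub-pixel interpolation
                    crossings.append((p[0] + t * (q[0] - p[0]),
                                      p[1] + t * (q[1] - p[1])))
            # Pair up the crossings into segments. (The ambiguous
            # four-crossing case is paired arbitrarily in this sketch.)
            for i in range(0, len(crossings) - 1, 2):
                segments.append((crossings[i], crossings[i + 1]))
    return segments
```

For example, running this on a 3×3 grid with a single bright center pixel and a threshold of 0.5 yields four short segments forming a little diamond around the center.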

You can sometimes learn a lot about yourself, just by how you respond emotionally to such challenges. What I’ve learned is that I am having the time of my life. In fact, it has all been pure joy.

Which is a good thing to know.

The age of the VPU

Around twenty years ago, a new phrase entered the computer-tech culture: “Graphics Processing Unit”, or GPU. The general currency of this term started when the high end computer graphics of the ’90s (Silicon Graphics in particular) had officially become a dinosaur, and so was duly shoved aside by the warm furry mammals of low end commodity hardware, such as nVidia, ATI, and their competitors.

We now take hardware-accelerated 3D graphics for granted in all our information devices, including our phones. It would be unheard of, in this day and age, for a SmartPhone to not have a GPU.

As we make the transition to wearables, something similar is about to happen with computer vision. A dedicated co-processor — a Vision Processing Unit, if you will, or VPU — entirely devoted to figuring out what you are seeing when you look out into the world, will soon be part of every consumer-level information device.

As soon as we put on our cyber-glasses, we will be able to tell where we are and who we are looking at. Inferences based on real-time object recognition of doors, bottles, furniture, cars, or whatever, will be taken for granted by a young generation that will never have known anything else.

Sixteen years ago, the new Millennium brought with it the age of the GPU: an era of affordable consumer-level high performance 3D computer graphics that is now in every phone and laptop computer. We are now about to enter the age of the VPU. Whatever we look at, our wearable device will recognize it, and will help us figure out what to do about it.

In not so many years, a generation will come of age that will think of this not as something magical, but simply as reality.

Capitalism and virtue

If you start a company, you can say: “We are creating a company, and if it is successful, then we will make money.” That’s a perfectly reasonable thing to say, and it’s probably correct. Yet it misses an essential truth.

Suppose instead you say: “We are creating a company, and if it is successful, then what we are doing will be self-sustaining.” You are still describing the same endeavor, but now you’ve shifted the emphasis to a far more useful description of the process.

After all, the purpose of a company isn’t really to make you money — although it might end up making you a lot of money. The purpose of a company is to generate value in the world in a way that is self-sustainable.

Your company is in the business of providing a product or service. If people find that product or service to be useful, then they will pay for it. The revenue generated by that customer demand allows your company to keep going. The product or service thereby becomes stable and self-sustaining. Everybody wins.

Note that if you get greedy and take too much money out of the company to give to yourself, then the company can stop being self-sustaining. Such a short-sighted strategy might win you more money up front, but in the long term everybody loses, both you and your customers.

We sometimes reflexively associate capitalism with greed. Yet capitalism, when done properly, can create good in the world.

Canadian hypercubes

My recent trip to Canada got me thinking about Canadians and hypercubes. I realize that may sound like an odd association, but I have empirical evidence to back it up.

When I visited Montreal a while back, and had the good fortune to get a tour of the National Film Board, I saw a large mural drawn by Norman McLaren himself — the pioneering and enormously influential animator. In fact, the building I was visiting was named for him.

As part of this mural, McLaren had drawn a line, then a square, then a cube, then a hypercube. Essentially, a progression of “cubes” in successively higher dimensions. And then he continued the visual sequence on to even higher dimensions.
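
The progression McLaren drew follows a simple combinatorial rule, which a few lines of Python can check (an illustrative sketch, nothing taken from the mural itself): the n-dimensional “cube” has 2^n vertices, and an edge joins every pair of vertices that differ in exactly one coordinate.

```python
from itertools import product

def n_cube(n):
    """Vertices and edges of the unit n-cube.

    Vertices are all 0/1 n-tuples; an edge connects two vertices
    that differ in exactly one coordinate."""
    vertices = list(product((0, 1), repeat=n))
    edges = [(u, v) for i, u in enumerate(vertices)
                    for v in vertices[i + 1:]
                    if sum(a != b for a, b in zip(u, v)) == 1]
    return vertices, edges

# The mural's progression: n = 1 (line), 2 (square), 3 (cube), 4 (hypercube)...
for n in range(1, 5):
    v, e = n_cube(n)
    print(n, len(v), len(e))   # 2^n vertices, n * 2^(n-1) edges
```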

I was reminded of this mural when I visited Autodesk in Toronto. One the many brilliant people who works there is Jos Stam — a genial giant, and the genius responsible for some key computer animation techniques you see in movies.

On the wall outside his office, he had drawn a line, then a square, then a cube, then a hypercube … continuing onward to seven dimensions. When I met with him, I meant to ask him whether he had seen McLaren’s work at the NFB. But once we got to talking, our conversation quickly wandered to so many topics that I completely forgot about my question.

Interestingly, I have my own association with Canada and hypercubes. Back in 2010, during a summer I spent at the Banff Centre for the Arts, I made one of the first things I ever created on a 3D printer.

It was a zoetrope of a tumbling hypercube. In a way it contained five dimensions, since it expressed four spatial dimensions plus time. Here is the blog post I wrote about it.
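
One way to sketch such a tumbling hypercube — a hypothetical reconstruction, not the actual zoetrope code — is to rotate the sixteen vertices of a tesseract within a 4D plane, then perspective-project each frame down to three dimensions:

```python
import math
from itertools import product

def rotate_4d(p, i, j, theta):
    """Rotate a 4D point p by angle theta in the plane spanned by axes i, j."""
    q = list(p)
    c, s = math.cos(theta), math.sin(theta)
    q[i] = c * p[i] - s * p[j]
    q[j] = s * p[i] + c * p[j]
    return q

def project_to_3d(p, camera=3.0):
    """Simple perspective projection: divide by distance along the w axis."""
    w = camera - p[3]
    return (p[0] / w, p[1] / w, p[2] / w)

# Tesseract vertices at coordinates (+/-1, +/-1, +/-1, +/-1).
vertices = [[2 * b - 1 for b in bits] for bits in product((0, 1), repeat=4)]

# One "frame" of the tumble: rotate every vertex in the x-w plane,
# then project to 3D. Stepping theta over a full turn gives the
# sequence of frames for a zoetrope.
frame = [project_to_3d(rotate_4d(v, 0, 3, math.pi / 8)) for v in vertices]
```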

Maybe easier access to higher dimensions is just one of those Canadian things, like their easier access to affordable healthcare.

Chalktalk to China

Here in New York City, I gave a talk this evening to a very large audience at NYU Shanghai. To them it was morning, since there is a 12 hour time difference between our respective cities.

The talk went very well, and I used my interactive Chalktalk program throughout. The audience could see my face and hands, and they could also see pictures I was drawing, and watch those pictures come to life as I described various ideas about the future.

What was cool to me about the process was that I was able to use an interactive style of presentation, from halfway around the world, that mimics what ordinary conversation might be like in the future. One day we will simply draw in the air as we converse, and those drawings will spring to animated life, in ways that today would seem completely magical.

I believe that this future way of communicating will help us to bridge the gap between talking in person and talking with people who are far away. As advanced technology enables even casual language to become more visual, we will come to experience over great distances some of the rich and expressive interactions that today we share only with people who are in the same room.

After all, this planet of ours is really very small and fragile. Hopefully these future ways of communicating will help to remind us just how close to each other we really are, and maybe that will help us to view people in other parts of the world with greater kindness and empathy.

Two orthogonal dimensions

As I have conversations with very smart people about the effects of advancing media technology, I am starting to see a pattern to the conclusions we draw. Essentially, there are two orthogonal dimensions.

On one dimension lies the ever-advancing course of technology. Over time, we develop new ways to express ourselves, to communicate and build culture. A while back there was movable type. Then came radio, cinema, television, the Web, and SmartPhones, with wearables just around the corner.

On the other dimension lies everything that is intrinsically human, and that will always be human: Love, hate, loyalty, jealousy, tribalism and betrayal. The entire panoply of human experience has existed for millennia, and nothing about our modern technology can move it by even an inch.

It is an apparent contradiction: The ever evolving landscape of new forms of technology-enabled expression on the one hand, and the unchanging landscape of the essential human condition on the other.

I don’t think of this as something negative. Rather, I find it heartening that we continue to be who and what we have always been.

It tells me something very important: Regardless of where the twists and turns of ever evolving future technologies may lead us, we will always be recognizably human.

The “Lonesome” Rhodes moment

The analogy between the Trump campaign and A Face in the Crowd was never perfect. “Lonesome” Rhodes presented himself to the public as a sweet, benevolent force. Only those who knew him in person were aware that he was a shallow, power seeking narcissist.

Trump’s strength throughout this campaign has always seemed to be that he could, in fact, publicly present himself as a shallow, power seeking narcissist, and many people would still support his candidacy. It was a kind of super power.

But it seems that Trump has finally reached his “Lonesome” Rhodes moment. His recorded private comments in 2005 were so sickening, so out and out hateful and demeaning to women, that a line is finally being drawn.

Now that it is coming out that he enjoyed walking in on naked 15 year old girls against their will — and publicly bragged on the Howard Stern show about his power to do so with impunity — it’s possible that the trend will accelerate. People who were holding their noses until now will flee from the sheer stench of it all, in ever greater numbers.

And not a moment too soon.