Obvious question of the day

Why, in K-12 education, are kids not taught how to draw with the same level of seriousness with which they are taught reading and writing and math and history? Clearly, drawing is a skill that would empower them, if only they could do it well.

Sadly, I have seen so many otherwise incredibly capable young people unable to express themselves visually. Isn’t this a basic educational right for everyone?

A really expressive Martian

I attended a panel discussion at SIGGRAPH today that compared volumetric capture of people with computer synthesis of people. Both are ways of creating the appearance of a person in virtual reality or in a movie from a dynamic point of view.

The first uses an array of cameras to capture a real person from many different angles, so you can later move around a virtual camera to see that person from any chosen vantage point. The second is a computer graphic creation of a virtual person, which can be modified to look like anybody you wish — including people who don’t exist.

I am not sure it is fair to compare the two. It’s kind of like asking somebody to compare live-action movies with animated movies. Each is good at a different thing.

I think it will be a long time, if ever, before synthetic humans are able to express all the emotional subtlety of a real person. On the other hand, volumetric capture is never going to let us create a really expressive Martian — or, for that matter, a really expressive Bugs Bunny.

Real Year Live

We spent months preparing for our SIGGRAPH 2021 Real Time Live (RTL) demo, and then our “show” lasted less than six minutes. Hopefully people out there will respond to our message.

But of equal importance, the compressed message we sent out reflects a larger message that we are communicating to ourselves. For RTL, our lab needed to boil things down to an essential core set of principles, and then show those principles in action.

We now have the mandate of expanding out from those core principles to an entire research agenda. Given that we have the full resources of a lab, with really smart people, what do we want to get done in the coming year?

One way for us to think about this is to envision what we might show in the course of six minutes at the RTL session of SIGGRAPH 2022. What will we show, and what will we be trying to say?

There are worse ways to chart out a year of research.

Real Time Live

I just participated in the SIGGRAPH 2021 Real Time Live session. What a blast!

We had six minutes to get our message across to our community through a live demo. Instead of showing some large software system (which is what most other people did), I decided to mostly show a little procedural animation/modeling/texturing program that I’ve been working on recently, and how it connects with a great research framework created by our students.

I think it was a good choice, because our lab isn’t about building large systems, but rather about building and sharing small and lightweight prototypes of what is possible.

Hopefully the right people will get the message, and we will end up collaborating with them!

Being there

I just finished the Q&A session of our SIGGRAPH 2021 course. The conference this year, not surprisingly, is being held entirely on-line.

There is still something very weird about doing a “live” session on-line, even after a year and a half of this craziness. There is simply not the same level of personal bonding that you get when people are all gathered in a physical room.

I wonder when, if ever, we will get to the point where on-line gatherings have that feeling we cherish, when we are all in the same room, of “being there”.

Blended editor

There are things I like about typing, and things I like about dictating text. The latter has recently become far easier and more accurate, thanks to Google.

Ideally I would like to be able to seamlessly blend together the two modes. Each has complementary advantages, and the right combination of the two could be incredibly productive.

I haven’t seen anybody make a combined system of typing + dictation work truly well. From what I have seen, you are left to simply choose either one or the other — the two modalities don’t really know about each other in a way that lets them work well together.
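As a minimal sketch of what the plumbing for such a blend might look like (everything here is hypothetical, not a description of any existing system): the simplest step beyond "choose one or the other" is to let both modalities write into a single shared buffer, ordered by time, so that each at least sees what the other produced.

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class InputEvent:
    """One chunk of input, from either modality, ordered by timestamp."""
    timestamp: float
    text: str = field(compare=False)
    source: str = field(compare=False)  # "typed" or "dictated"


class BlendedBuffer:
    """Interleave typed and dictated input into one time-ordered text stream."""

    def __init__(self):
        self.events = []

    def add(self, timestamp, text, source):
        heapq.heappush(self.events, InputEvent(timestamp, text, source))

    def render(self):
        # A smarter policy could go further and let each modality correct
        # the other -- e.g. a typed edit overriding an earlier misrecognized
        # dictated word -- but even this shared buffer is more than the
        # "pick one mode at a time" systems described above provide.
        return " ".join(e.text for e in sorted(self.events))


buf = BlendedBuffer()
buf.add(0.0, "Ideally I could", "dictated")
buf.add(1.2, "blend both modes.", "typed")
print(buf.render())  # -> Ideally I could blend both modes.
```

The interesting research questions start where this sketch stops: deciding, when the two streams overlap in time, which modality wins and how each can repair the other's mistakes.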

There is an opportunity here. And after we get it working well for natural language text, let’s see if we can extend it to programming languages.

Different kinds of blended reality

There once was only one kind of reality, before the invention of things like books, telephones, movies, etc. But now we are used to reality being experienced in many different ways.

There is a relatively new concept now, called blended reality, in which you mix together direct perception through your own eyes and ears with perception of the world around you mediated by computer. One relatively primitive example of this is an augmented reality app on your smartphone.

More sophisticated examples are coming down the pike, as fully functional smart glasses start to enter the consumer space over the next few years. And that is going to make blended reality a lot more interesting.

Just as there used to be only one way to talk to somebody and one way to read something, but now there are many ways to do both, our collective perception of reality is going to undergo an interesting set of upheavals.

I don’t know about you, but I am looking forward to whatever comes next.

Life is what happens

Dealing recently with some unexpected bad news, I became newly curious about the phrase “Life is what happens to you while you’re busy making other plans.”

Most people these days know it as a quote from John Lennon. He sings it at one point in his song Beautiful Boy (Darling Boy) on the Double Fantasy album.

Lennon actually got it from Reader’s Digest, which in 1957 attributed the quote to the cartoonist Allen Saunders, in the slightly different form “Life is what happens to us while we are making other plans.” But other than that one mention in Reader’s Digest, there is no record that Saunders actually said this.

The pithiest version I know of this wise thought goes back to the original Yiddish: “Der mentsh trakht un Got lakht.” Roughly translated, it means “Man plans, God laughs.”

I like the original version best.