Passthrough, part 1

The video passthrough feature of the Meta Quest 3 and Apple Vision Pro may seem like a novelty today, but I think there is a very profound principle at work here.

The concept of perceptual passthrough can be generalized in many ways. I think that these first devices are just the tip of the iceberg.

More tomorrow.

Leonardo and the Two Cultures

Today, on what would have been Leonardo da Vinci’s 572nd birthday, is a good time to talk about the two times that I saw the Leicester Codex.

The Leicester Codex is a folio that explains, with beautiful illustrations, various theories that Leonardo had about the physical world around us. Not surprisingly, many of his theories were absolutely correct, such as his surmise that the discovery of seashells on mountaintops suggested that those mountaintops were once at the bottom of the ocean.

Many years ago, soon after Bill Gates purchased the Leicester Codex for more than thirty million dollars, he lent it to the American Museum of Natural History for a public exhibition. The codex was presented together with Microsoft software that let you interactively explore its contents. Not surprisingly, the software was for sale.

That exhibit, which was wonderful, helped the public to experience the greatness of Leonardo the scientist.

Soon after, Gates lent the Leicester Codex to the Metropolitan Museum of Art. The curators of that worthy museum put on a public exhibition in which they placed the codex in the context of many other works of art by Leonardo.

That exhibit, which was wonderful, helped the public to experience the greatness of Leonardo the artist.

I saw both exhibitions, and was struck by the implicit war at play. Two of our city’s greatest temples of culture — facing each other from opposite sides of Central Park — seemed to be fighting over the meaning of Leonardo’s life and work.

I also saw that this is far from a new battle. In the second exhibition, there was a letter written by an important personage of the day, which the museum curators helpfully translated from the Italian. He was complaining that Leonardo was wasting time in the frivolous pursuit of science, when he could have been spending more time on something actually important — making more paintings.

As C.P. Snow observed in his brilliant essay The Two Cultures, none of this should come as a surprise.

Cherish that

One of the wonderful things about live theater is that you are seeing something completely unique in the history of humankind.

No performance that took place before the one you are attending, and no performance that will ever take place again, will be the same as what you are experiencing right now.

Cherish that.

Real-time AI

Right now it takes at least a few seconds from the time you give a text prompt to the time you get an image back from one of the currently available generative A.I. programs. This is a limitation of the current technology.

But Moore’s Law keeps marching onward, in one way or another. With increased computational parallelism and newer semiconductor technologies, response time will gradually trend downward. In another decade or so the response will reliably arrive in a fraction of a second.
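To make that timeline concrete, here is a toy back-of-the-envelope calculation. The numbers are purely illustrative assumptions (a four-second response today, halving every two years), not measurements:

```python
# Hypothetical illustration: if a 4-second response time were to halve
# every two years (a Moore's-Law-style trend, assumed for the sake of
# argument), a decade of progress would bring it well under a quarter
# of a second.
latency = 4.0                 # seconds per image today (illustrative)
for year in range(0, 10, 2):  # five halvings over ten years
    latency /= 2.0

print(latency)                # 0.125 -- an eighth of a second
```

Whatever the exact constants turn out to be, the shape of the curve is the point: a steady exponential improvement turns "a few seconds" into "a fraction of a second" within roughly a decade.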

From the user’s perspective, this means that images and videos and simulated 3D scenes will appear even while you are describing them. But even more important, it will mean that you can edit those images and videos and simulated 3D scenes simply by continuing to talk. As you speak, the changes will happen right before your eyes.

When we get to that point, generative A.I. will truly become a fundamental new mode of human expression.

Drawing in the air

At some point everybody will be able to wear lightweight and affordable high quality XR glasses. A really good version of this is still some years off, but it’s fun to think about it now.

One of the things you will be able to do with those glasses is simply point your finger and draw in the air. Everybody who is in the room with you will be able to see your drawing, and they will be able to make their own drawings as well.

Of course all of this will be tied to artificial intelligence. After you draw something, you will be able to say things like “What would a couch look like right here?” or “Show me what this would look like as a real vase.” Your drawing will then come to life for everyone as something that looks realistic.

I wonder how just this one feature will change communication.

40 Years an Eclipse

Yesterday I posted an animated eclipse implemented as a procedural texture, in honor of that day’s great celestial event. It’s not a movie clip — it’s a live simulation running on your computer or phone.

Interestingly, this eclipse simulation is actually something that I created exactly 40 years ago. In 1984 I introduced to the world what came to be known as procedural shader languages, in the course of which I created lots of examples.

One of those examples was this procedurally generated eclipse. Originally it ran in my own custom shader language. Then around 18 years ago I re-implemented it in Java. More recently I re-implemented it yet again as a WebGL fragment shader.

But the design and the algorithm have never changed. It’s the same procedural eclipse that I created back in April 1984. Except that back then it took 30 minutes a frame to compute. Now it just runs on your phone in real time, thanks to the wonder of Moore’s Law.
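For readers unfamiliar with the idea, a procedural texture computes each pixel’s color from a formula rather than reading it from stored image data. The sketch below is a minimal toy version of an eclipse in that spirit — one occluding disk sliding across another — and is my own illustrative example, not the author’s original shader:

```python
import math

def eclipse_brightness(x, y, t):
    """Toy procedural eclipse. Returns the brightness of the pixel at
    (x, y) as the moon's disk slides across the sun's disk while time
    t goes from 0 to 1. All radii and speeds are made-up constants."""
    sun_r = 0.4                    # radius of the sun's disk
    moon_r = 0.42                  # moon disk, slightly larger
    moon_x = -1.0 + 2.0 * t        # moon center sweeps left to right
    in_sun = math.hypot(x, y) < sun_r
    in_moon = math.hypot(x - moon_x, y) < moon_r
    if in_sun and not in_moon:
        return 1.0                 # visible part of the solar disk
    return 0.0                     # sky, or sun occluded by the moon

# At t = 0 the moon is far to the left, so the sun's center is bright;
# at t = 0.5 the moon sits dead center and totality blacks it out.
```

Because everything is a pure function of position and time, the same few lines of math can run per-pixel in a fragment shader on a GPU — which is why a 1984 design can survive unchanged through a custom shader language, Java, and WebGL.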

In another 40 years, I wonder what simulations will run in real time that now take half an hour to compute. I can hardly wait!

Pianoforte

It is said that when the pianoforte was first invented, many musicians felt threatened by it. Since it was a very expensive and therefore rare instrument, most musicians had no direct experience with it. But what they heard apparently frightened them.

Because of its highly polyphonic nature, musicians were concerned that it would replace the orchestra. As we know, that did not happen.

In fact, the piano became a great stand-in for the orchestra when rehearsing operas and other music written with orchestral accompaniment in mind. So in a way, the adoption of the piano actually helped to promote orchestral music.

Something similar seems to be happening today with A.I. People are worried that it will replace human creativity. But the truth is that today’s A.I. is less like an orchestra and more like a piano.

An A.I. on its own cannot produce anything that is highly creative. It is an instrument, which in the right hands can be used to produce something extraordinary. But during this process, the person in the driver’s seat is the human who is working with the A.I. — not the machine.

In the coming years, A.I. will be a great tool for trying things out, for creating rapid initial prototypes of an artist’s new ideas. But it cannot replace the artist, any more than Adobe Illustrator can replace the creator of a document.

These are not people — they are tools designed to support you in your own creative endeavors. Play them the way you would play a piano.