A glimpse into the future

Yesterday morning one of my students remarked that the video passthrough on the Quest 3 is almost as good as reality. I responded that one day soon, one of its descendants would give you vision that is better than reality.

You will be able to see colors that you cannot see with your own eyes. You might be able to look through a wall and see what is in the next room over. You will be able to see the bus that you want to catch from three blocks away.

When I said this, I thought I was talking about the future. But then at a dimly lit restaurant last night, I realized that the only way I could read the menu was by taking out my phone, turning on the camera, and zooming in. When I did that, the text was clear and bright and easy to read.

And it occurred to me that in a few years I won’t even need to take out my phone. Any small or dim text will be easy to read as long as I am wearing my smart glasses. And as an added bonus, my glasses will let me read text written in any language.

Diagrams in the air

I wasn’t planning on making diagrams in the air so soon. It was a stretch goal — one that I thought would take a while to accomplish.

I had a general plan to devote some time in the next few months to building support software. And then I’d be able to teach algorithms by displaying interactive diagrams in extended reality (XR).

But then today I needed to work through a geometry problem to help me build something in XR. So this afternoon I found myself creating multicolored points and axes and lines of intersection, all conveniently floating right above my desk when I put on my Quest 3 headset.

This approach turned out to be the easiest way to work through the problem that I was tackling. And at some point I realized that I was making diagrams in the air.

Meta movie moments, part 3

My vote for the most meta of all meta movie moments is in the otherwise forgettable 1963 Jimmy Stewart comedy Take Her, She’s Mine. The film isn’t all that funny, and it did not succeed at the box office.

But it has one transcendently meta moment. During a wild chase scene on a cruise ship, while Jimmy Stewart’s character is trying to avoid being seen, he suddenly finds himself being chased by a group of Japanese men with cameras.

They are all excitedly shouting “Jimmy Stewart! Jimmy Stewart!” and chasing him around the ship. Unlike everyone else in the movie, they are somehow able to see him not as the fictional character he is playing, but as his real-world self.

I can’t think of a single scene in any other movie that smashes the fourth wall with such a shameless sense of triumph.

Meta movie moments, part 2

If you’re looking for a highly meta moment in a movie, there are many candidates to choose from. Julia Roberts in Ocean’s Twelve as a woman whose claim to fame is that she looks like Julia Roberts, fawning over Bruce Willis because he’s a real movie star.

The main characters in Blazing Saddles riding from the old West into a movie set, while one of the characters stops into a movie theater to watch the movie Blazing Saddles.

I could go on and on — there are so many examples to choose from. But what is the most meta of meta movie moments?

More tomorrow.

Meta movie moments, part 1

Sometimes filmmakers will blur the line between the fictional world of the movie you are watching and the actual reality surrounding the creation of that movie world. I think of these as meta movie moments.

I am sure you can think of many examples. I’ve been thinking about this recently, and have been wondering: which is the most meta of all meta movie moments?

There are so many great candidates to choose from, but I think I have it narrowed down. More tomorrow.

First you need to build the house

Today I was going through a paper draft written by one of my students. There were lots of cool figures and citations and excellent formatting, but the core argument was not yet totally worked out.

And it occurred to me that writing a paper is sort of like building a house. You really shouldn’t paint the walls, put in furniture and hang up artwork until you know where all the rooms will go.

I made a draft of the student’s paper that took out everything except his core argument itself, and I invited him to work with me to figure out where to put the rooms and support beams in his house.

After that, I think it will be relatively easy for him to repaint the walls, put back the chairs and tables, and hang up the artwork.

Reminders

Getting reminders on my smartphone is a lot more convenient than the way I used to do it back in the day — scribbling little notes to myself on random pieces of paper. But we are about to enter an entirely new tier of convenience.

When we all start walking around wearing those smart glasses, we won’t even need to look at phones. Reminders will be all around us. More importantly, they will be out in the world itself, tied to the physical people, places and things that they refer to.

You won’t need to worry anymore about remembering the name of that nice waiter in the restaurant, or how many minutes until the next bus arrives, or how much money you can safely spend on your credit card. Convenient reminders will be right there for you.

There is a possibility that this will all result in people being less self-sufficient. If you get a reminder for everything, then you might forget how to remember things on your own.

So we might all end up heading toward a memory dystopia, and we won’t even know it. Unless, of course, a reminder pops up to tell us.

Computers dreaming

One difference between our human minds and A.I. based on Large Data Models (like ChatGPT and MidJourney) is that we possess motivation. We do things because we want to. They don’t.

LDM-based A.I. is really just a process of high-grade mimicry. Lots of data gets crunched, a question is asked, and then an algorithm just blindly follows the most likely path from the question to some imitation of the data. There is no actual will or purpose involved in the process.

In a sense, it’s as though the computer is dreaming. The computer is processing lots of material and producing hallucinations in response to that processing. Which is what we do when we dream.

The difference is that at some point we wake up, and then we regain a sense of purpose. The computer never wakes up, because it cannot.

Perhaps one day, probably many years from now, a computer will gain the capacity to wake up. And then I suspect we might all be in trouble.

But that would require a completely different approach to A.I. For now, we can just let them dream.

On hold

I remember many years ago looking at my landline telephone and thinking “I know there’s a computer in there. Why can’t I access it?”

Now in this modern age when we all have smartphones, you can indeed program your phone in all sorts of ways. We look back now on the way these things used to work, and we wonder why anybody ever put up with it.

Last night I called a restaurant from my smartphone and they put me on hold. I waited for 10 minutes while I was forced to listen to their stupid on-hold music. And I thought to myself, “Why do I have to listen to their music? Why can’t I program this thing so that I can listen to my music instead?”

I wonder whether in the future that will become possible. Maybe one day we will look back on the way these things work today, and we will wonder why anybody ever put up with it.