Plugged in at the beach

I distinctly remember the first time I saw someone with a cellphone at the beach. It was 1992 on a lovely beach in northern Brazil.

A man was lying peacefully on the sand, eyes closed, sunning himself, his cellphone right beside him. Which was striking, because in that year cellphones were still rare, at least in the U.S. If you had one back then, it was because you needed it for work.

I remember thinking that this could be terrible or it could be great. Terrible because the man clearly was unable to get away from whatever work and responsibilities were gluing him to that phone. Great because he was able to be at the beach even while he was working.

Today I was looking at my Quest Pro, and thinking that in 5 or 10 years some more advanced version of this will have the form factor of sunglasses. And then, in some form, I will probably see a repeat of exactly the same scene.

A person will be lying on the beach with what look like sunglasses, but which I will know to be functioning smart glasses. And I will be left thinking that this could be terrible or it could be great.

Future sound track

I wonder whether AI will advance to the point where we can each have our own personal soundtrack. Wherever you are, the computer will figure out the right mood to fit your current situation.

It might even compose something original, based on your tastes. Maybe it would be a new way to create music.

Last day of Siggraph

There was a lot of excitement at Siggraph this week, but a surprising lack of vision into the future. I guess that makes sense for a technical conference.

For the most part, people were focused on the next thing, whatever that may be. So they weren’t generally thinking about what computer graphics might be like in another ten years.

Well, ten years from now computers will be a hundred times faster than they are now. So things are indeed going to be qualitatively different.

Computer graphic imagery (CGI) will be completely integrated into our everyday life, as wearables become cheap and ubiquitous. The real and the virtual will be seamlessly intermixed to the point where the distinction between the two will start to become meaningless.

CGI will be continually created by generative AI in response to our casual conversation and gesture. And we won’t even think about it, because it will all just be normal.

But at Siggraph this year, they weren’t really talking about any of that.

Fourth Day of Siggraph

As I was saying, one experience yesterday morning at Siggraph jumped out at me. At the nVidia booth, they were showing how they can use machine learning to turn a single photograph into a 3D model. The interesting part was that the process takes three seconds.

My first thought, after all this week’s talk about Moore’s Law, was that in another ten years this process will take one hundredth as long. This is because every ten years, computers become 100 times faster.

In other words, by 2033, we will be able to turn a single photograph into a 3D model in real time. At that point it will seem instantaneous.
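
That extrapolation is easy to check with a little arithmetic. A minimal sketch in Python, taking the 3-second figure from the demo and the 100x-per-decade rule of thumb at face value (both are rough numbers, not measurements):

```python
# Extrapolating the photo-to-3D demo a decade forward,
# assuming the ~100x speedup per decade holds.
seconds_today = 3.0       # reported time of the demo
speedup_per_decade = 100

seconds_in_ten_years = seconds_today / speedup_per_decade
print(seconds_in_ten_years)  # 0.03
```

At 0.03 seconds, the process fits within a single frame of 30-frames-per-second video, which is why it would feel instantaneous.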

As Turner Whitted said, certain effects go beyond the quantitative, and become qualitative. When we can create 3D worlds instantaneously from single images, that will be a qualitative change in our ability to manipulate reality.

Third day of Siggraph

This was the 50th anniversary of the conference. And therefore there was a lot of talk about Moore’s Law.

Roughly speaking, every 10 years computers get about 100 times faster. Which means that after 50 years, computers have gotten about ten billion times faster.
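
The compounding is simple to verify. A quick check in Python, using the 100x-per-decade figure from the talks (a rule of thumb, not a measured benchmark):

```python
# Compounded speedup under the rule of thumb:
# computers get ~100x faster every decade.
speedup_per_decade = 100
decades = 5  # 50 years of Siggraph

total_speedup = speedup_per_decade ** decades
print(total_speedup)  # 10000000000, i.e. about ten billion
```

Five doublings of the exponent, and the number stops being something you can intuit, which is exactly the point about quantitative effects becoming qualitative.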

Turner Whitted made the observation in a session this week that we shouldn’t be focusing on the quantitative effect of Moore’s Law. Instead, we should be focusing on the qualitative effect.

Just this morning I had an experience that reaffirms this. More tomorrow.

Second day of Siggraph

Second day of the conference, and — not surprisingly — far too many things to wrap my mind around.

But one thing is clear: The people who have been coming here the longest have the best insights about the future. I suspect that this is at least partly because it takes a few decades or more of living with Moore’s Law to be able to use it predictively.

Drunken sensors

Today I read a fascinating article. It seems that the standard process for creating nanosensors, which can electronically detect the presence of many different types of materials, involves bonding extremely thin layers of silicon together.

This is done by “curing” them — subjecting them to very high temperatures for 12 hours. Which is an extremely expensive process.

Apparently, while cleaning one of these sensors prior to curing, a researcher accidentally spilled a little ethanol — that’s regular drinking alcohol — on one of them. And the sensor started performing better than any of the cured sensors.

The research team then figured out that adding just the right amount of ethanol to uncured sensors — not too much, not too little — resulted in sensors that were more effective than the cured ones. And also a lot cheaper to make.

So apparently if you get these sensors a little drunk — but not too drunk — then they become very sensitive.

Which is a principle that Chinese poets knew centuries ago.