I was having dinner this evening with my cousin. He and I are nearly the same age, and we have always been close.

At one point he pointed out that many of the people whom he and I revered when we were little kids are now dying off. I imagine he might have been thinking about Raquel Welch, but I didn’t ask.

And that got me thinking about all of those amazing and talented people we had first encountered when they were still young adults, bursting with promise. But time is what it is, and time does what it does.

Of course, once I started thinking about time, another thought occurred to me, a somewhat darker thought. But I didn’t mention it. Instead I changed the subject to movies — a very safe topic, because movies are timeless.

What you can’t explain to the future

We are living in yet another time before a fundamental shift in how ordinary people interact with computers. The last time that happened was 2007, when the iPhone came out. Another was 1993, when lots of people first started using the Web.

In another decade or two, everybody will be used to simply telling their computer what they want to get done, and the computer will do it. You’ll be able to tell your computer to create an original song that blends together The Beatles’ “Yesterday” and Ed Sheeran’s “Thinking Out Loud”. And that will be a song that you created.

You will be able to tell your computer to write a play based on some new ideas that you’ve dreamt up, and then to conjure up a performance of that play, complete with blocking and appropriate props and lighting. And that will be a play and a performance that you created.

Young people in that future time will understand intellectually what it was like back in the old days, but they won’t really have the emotional sense of it. It will seem amazing to them that we needed to struggle through so much toil to create a new work of art, rather than simply using modern tools.

And try as we might, we won’t really be able to explain it to them.


This week I created a one-page summary of a research proposal to share with my collaborators, so they could share it with industry partners. After I managed to get all the words right, with very helpful suggestions from my colleagues, I thought I was done.

But then I went to print it, and realized that I had issues with the line spacing, margins, and general visual balance. The words were right, but the whole thing didn’t look exactly right.

I ended up spending a lot more time this morning iterating on the format of that one page until it looked perfect. Which in retrospect seems like time very well spent.

Ok with that

ChatGPT and similar Large Language Model engines are not actually intelligent. They are mindless drones that do data-driven pattern matching. Not that there’s anything wrong with that.

Today a colleague told me of a long-term research plan. His group aims to create AI engines that would be much closer to what we think of as intelligent. They would learn and self-correct, and have the ability to develop over time, somewhat analogously to how the mind of a child develops over time.

I was very impressed. I told my colleague, “You know, this may go down in history as the moment when Frankenstein became a documentary.”

I can’t be sure, but I got the feeling that he’s ok with that.

Except you

During a visit today to my doctor, we got to talking about the Super Bowl. Turns out he is a really big Chiefs fan.

He told me that two days before the big game, he’d told a friend over the phone, “I predict the Chiefs will win by three points.” It was, he said, a missed opportunity.

“Do you mean,” I asked, “that you thought you should bet on the game?”

“Well yes, except that I never gamble.”

“Probably just as well,” I said. “You know how it goes. If you had actually bet on the Chiefs, they wouldn’t have won. That’s how the Universe works.”

“Yeah,” he said, “I agree. That is indeed how the Universe works.”

So I said “Just think. If you had gone through with that bet, the Chiefs would probably have lost, and many people would have been sad. And it would have been your fault.”

“Yes,” he agreed, “it would have been my fault.”

“But look at the bright side,” I added, “nobody would ever have realized it was your fault.”

“Except you.”

Talking to my phone

Today I was in the middle of a Zoom meeting, and I needed to email a note to somebody about a follow-up meeting. Instead of typing the email into my computer, I picked up my phone and dictated the email.

The other people on the Zoom meeting thought it was kind of amusing, just watching me talk to my phone like that. I had forgotten that not all of my colleagues do that.

One colleague pointed out that with her Israeli accent, it probably wouldn’t work on her iPhone. Which is rather unfair when you think about it.

People have told me that when I dictate into the phone, my tone of voice changes entirely. Instead of speaking in a friendly way, it’s as though I’m giving orders to an underling.

What is really happening is that I have gradually learned what manner of speaking will produce the fewest errors on the phone. But it is kind of interesting that this replicates an unequal working relationship.

Which makes sense, because people and computers are not equal. Let’s just hope that the inequality continues to be in the same direction.

Personal style

This first generation of Chatbots usually writes in a very generic way, unless told to do otherwise. By default, results are grammatically correct, sentences are well balanced, and paragraphs do a good job of introducing and structuring ideas.

But Chatbots usually don’t sound like any particular person. I suspect that this is deliberate, based on their intended use. If you are going to use a Chatbot for information retrieval, maybe you don’t want it to sound like your Aunt Edna from Brooklyn.

But the uses of Chatbots will inevitably expand, and sooner or later it may become more common to have them imitate personal styles of speech. The result might not be as grammatically correct, but it will seem a lot more human.

The question is not whether this is possible (it certainly is), but how it will be received. It is not clear how people will respond to being spoken to on a regular basis by computers that imitate the real people they know.

Will it be the next Google? Or will it be the next Clippy?

Virtual representation

One measure of the gradual advance of media technology is how we think about the phrase “virtual representation”. There was a time when a book was thought of as a virtual representation of reality — fictional or otherwise.

Then we stopped thinking about books so much, and we started thinking about photographs as virtual representations. Eventually photos became too familiar to think about too much, and we started to think about movies and then television as virtual representations.

A while back, long after we stopped thinking of movies or TV as novelties, we started applying the idea of “virtual representation” to computer simulations of reality — fictional or otherwise. But now we are about to move on to the next phase.

The resynthesis of words or images performed by ChatGPT or Bard or MidJourney or DALL-E is the current focus. “But wait,” people think, encountering those programs for the first time. “They are just a kind of virtual representation, not the real thing.”

Which you could also say about books.

Future bar charts

Nearly every time I see a Powerpoint presentation, it is filled with bar charts. Bar charts seem to be the lingua franca of conveying information to a group in presentation form.

Clearly bar charts appeal to both presenter and audience. They convey essential information in a bold graphical form, they show trends in a way that is easy for the eye to follow, and they abstract away details that neither presenter nor audience cares about in the moment.

There will come a time when the visual equivalent of ChatGPT or Bard will make bar charts for us on the spot. We will be discussing some topic, and one person will say “You know, there’s been an increase in … over the last decade,” or “The number of women enrolling at MIT recently, compared with the number of women at NYU, has been …”

At that moment, a bar chart will be able to pop up in the view of both people, presumably mediated by our smart glasses. We probably won’t even think much about it, other than to wonder how anybody ever explained anything to anybody before we all had conversational bar charts.