Collaborating with yourself

After enough time has gone by, reading your own software takes on an interesting aspect. You are rediscovering things that you created years ago, but now you are looking with fresh eyes.

One odd thing about the process is that even though the person who wrote the code is yourself, there is a part of you that feels separate from them. After all, you are thinking about very different things now, whereas they are completely immersed in the thing they are creating.

So it can feel like a sort of collaboration with yourself. In essence you are doing a code review of somebody’s computer program, except the person whose code you are reviewing is you.

Finding old code

Every once in a while I find some really cool code that I had written years ago. It’s usually something that implements a capability in computer graphics or animation.

When that happens, I have a shiny new toy to play with. Because in all the time since I wrote the original code, I’ve had years to think about questions like “What would I do with something like that?”

So when the code shows up, I usually put it right to work solving problems that I hadn’t even thought of yet when the code was new. And that makes me very happy.

What Zoom is better at

When meetings can’t be face to face
We are stuck using Zoom in their place
    For things academic
    This awful pandemic
Can feel like an obstacle race

So while in our homes we are sittin’
There is a good rule (though unwritten)
    You can help break the ice
    And make things more nice
If you show your new puppy or kitten

Editing an email after sending it

Using most current email client software, I can easily send you an email. But I can’t change it once it has been sent.

Alternatively, I can choose to do things in a much more formal way, sending you a link to a Web page or a Google doc. In that case I can continue to modify my message after the fact.

To me it feels as though we are being forced to one of two extremes, when there really should be something in between. Email clients should provide an easy way of creating a casual modifiable document just for the purposes of our email communication.

You and I would both understand that I am sending you a message that contains evolving content. You can then choose to reply with your own message which contains evolving content.

But we would not need to go to the formal step of declaring “here is a document on the Web.” Instead, we would just understand that our respective messages continue to be editable — or perhaps contain designated sections that remain editable.

Is that asking too much?
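As a sketch of the in-between I have in mind (a toy data model of my own invention, not any real email client’s API), a message could mix frozen text with designated sections that stay editable after sending:

```python
from dataclasses import dataclass

@dataclass
class Section:
    text: str
    editable: bool = False  # frozen by default, like ordinary email


@dataclass
class Message:
    sections: list

    def edit(self, index, new_text):
        """Revise a section after sending -- only if it was marked editable."""
        if not self.sections[index].editable:
            raise ValueError("this section was frozen when the message was sent")
        self.sections[index].text = new_text

    def render(self):
        return "\n".join(s.text for s in self.sections)


note = Message([
    Section("Hi -- here is the plan so far:"),
    Section("Draft agenda: TBD", editable=True),
])

# The sender can keep refining the evolving section after the fact,
# while the frozen part stays exactly as sent.
note.edit(1, "Draft agenda: demos at 3pm, discussion at 4pm")
```

Both sides would simply understand which parts of the conversation are living documents and which are fixed, with no need to step out to the Web.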

Fonts with personality

People have distinct personalities. So do text fonts. I wonder whether we could combine these two concepts.

When we look at certain text fonts, we usually agree that they look happy, or elegant, or frivolous, or foreboding, or that they convey any number of other qualities that we usually associate with people. We could probably use supervised machine learning to train a font generator from labeled examples.

We could then, for any well described human personality, develop the perfectly descriptive font.
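Purely as a toy illustration of that supervised setup (none of this is a real font model, and the traits and parameters are invented for the example), one could learn a map from labeled personality traits to font parameters:

```python
import numpy as np

# Hypothetical labeled examples:
#   personality traits (warmth, formality, energy)
#   -> font parameters (stroke weight, slant in degrees, roundness)
personalities = np.array([
    [0.9, 0.1, 0.8],   # cheerful
    [0.2, 0.9, 0.1],   # austere
    [0.5, 0.5, 0.5],   # neutral
    [0.8, 0.3, 0.9],   # playful
])
font_params = np.array([
    [0.7, 5.0, 0.90],
    [0.3, 0.0, 0.10],
    [0.5, 2.0, 0.50],
    [0.8, 8.0, 0.95],
])

# Least-squares fit of a linear map from traits to font parameters --
# the same supervised pattern a real generator would follow at scale.
W, *_ = np.linalg.lstsq(personalities, font_params, rcond=None)

def font_for(personality):
    """Predict font parameters for a described personality."""
    weight, slant, roundness = np.asarray(personality) @ W
    return {"stroke_weight": weight, "slant_deg": slant, "roundness": roundness}
```

A real system would of course generate whole glyph sets rather than three numbers, but the mapping from labeled personality to typographic style is the same idea.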

This might be a very useful thing to have. For example, in online text chats, we could immediately know what kind of person we are dealing with.

We could apply the same principle to written stories with narrative voices. As each person talks, the text font of their speech can be made to reflect deep aspects of their personality.

This was actually done a long time ago in the world of comic books. My uncle Abe Kanegson was a letterer for Will Eisner’s The Spirit series of comics back in the day. He would invent new fonts to match particular scenes, situations and personalities.

And now, all these years later, we can finally do that with computers!


Today I received an email from a company from which I had previously ordered stuff on-line. I got a kick out of the engaging subject line of the email, reproduced here verbatim:

Hey { first_name|title|default:’there’ }}, ready to reorder?

There is something vaguely sweet and clueless about this: A company tries, yet spectacularly fails, to provide a human touch.

They attempted to reach out to me in a personal way, but relied entirely on buggy software to do the deed. So I ended up getting this weird garbled message from an unambiguously non-human bot.
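The placeholder looks like Django-style template syntax. As a rough illustration (this is my own toy renderer, not the vendor’s software), here is how such a tag is supposed to behave, and why the single opening brace defeated it:

```python
import re

def render(template, context):
    """Expand Django-style {{ name|filter|... }} tags; supports just the
    'title' and 'default:' filters, which is all this subject line needs."""
    def repl(match):
        name, *filters = match.group(1).strip().split("|")
        value = context.get(name.strip())
        for f in filters:
            f = f.strip()
            if f == "title" and value:
                value = value.title()
            elif f.startswith("default:"):
                value = value or f.split(":", 1)[1].strip("'\"")
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", repl, template)

# A correctly double-braced tag degrades gracefully when no name is known...
good = "Hey {{ first_name|title|default:'there' }}, ready to reorder?"
# ...but the tag in the email I received opened with a single brace,
# so no template engine would ever recognize it, and the raw text leaked out.
bad = "Hey { first_name|title|default:'there' }}, ready to reorder?"

print(render(good, {}))                     # Hey there, ready to reorder?
print(render(good, {"first_name": "ken"}))  # Hey Ken, ready to reorder?
print(render(bad, {}))                      # the placeholder leaks through verbatim
```

One missing brace, and the safety net of `default:'there'` never even gets a chance to catch the fall.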

You can cut the irony with a knife.

And it makes me wonder: How many so-called “human interactions” with vendors involve no human at all in the loop?

How far along are we toward a dystopian future that many of us fear: A world in which all too many of our so-called “personal” interactions do not involve any other human being at all?

Big Bang binge books

I have been bingeing The Big Bang Theory, planning on going through all 12 seasons, every episode in order. It is great fun!

There is so much real physics in this show that it occurs to me it would be a great launching point for an educational opportunity. Somebody should build a series of science books around this show, with each new theory mentioned in the series being an entry point for a real introduction to the actual underlying science.

It seems to me that with such a friendly introduction, many people might be interested in learning more, especially if the books were well written and made very accessible. I wonder whether somebody has already done something like that.

If not, maybe somebody should.

Asymmetric interfaces

Just because you can be in VR doesn’t mean you should be in VR. As our lab is getting serious about working together in virtual worlds, we are appreciating the power of asymmetric interfaces.

Some things work incredibly well in virtual reality. For example, it’s far easier to walk around a 3D environment while selecting and gathering things if you are completely immersed in a virtual world.

But some things are simply easier with an old-fashioned screen and keyboard. Typing, for example. Why try to reimagine typing text when there is already an incredibly efficient way of doing it?

So we are starting to think in terms of team members who are seeing and interacting with the same virtual world through very different lenses. Some might be wearing VR glasses, others might be sitting at a laptop computer and typing away. Still others might be walking around holding an iPad and using multitouch gestures to make things happen.
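A minimal sketch of that idea (all class and method names here are hypothetical, not our lab’s actual software): one shared world state, with each kind of client acting on it through the interaction that suits its hardware:

```python
class SharedWorld:
    """One source of truth that every client, whatever its lens, observes."""
    def __init__(self):
        self.objects = {}  # object id -> {"pos": (x, y, z), "label": str}

    def apply(self, action):
        kind, obj_id, value = action
        obj = self.objects.setdefault(obj_id, {"pos": (0, 0, 0), "label": ""})
        if kind == "move":     # repositioning: natural from a VR controller
            obj["pos"] = value
        elif kind == "label":  # naming things: natural from a keyboard
            obj["label"] = value


class VRClient:
    """Immersive view: grab-and-drag gestures emit move actions."""
    def drag(self, world, obj_id, new_pos):
        world.apply(("move", obj_id, new_pos))


class LaptopClient:
    """Screen-and-keyboard view: typing emits label actions."""
    def type_label(self, world, obj_id, text):
        world.apply(("label", obj_id, text))


world = SharedWorld()
VRClient().drag(world, "cube", (1.0, 0.5, 2.0))       # the immersed user places it
LaptopClient().type_label(world, "cube", "prototype")  # the typist names it
```

The point is that neither client is a second-class citizen: both are first-class editors of the same world, each doing what its interface does best.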

The whole paradigm of “one size fits all” is inherently broken. After all, nobody is arguing which is better — an airplane, a bus, a car, a bicycle or walking with your own two feet. Each is best for some transportation tasks, and really bad for others.

Let’s embrace socially shared VR, but also embrace diversity. The future belongs to asymmetric interfaces.

Meetings in real and virtual space

In a few months we may be emerging from this pandemic, and people will be able to go back to meeting in person. But I seriously doubt that things will go back to exactly the way they were.

We now know that some things work better on-line. Certain kinds of collaboration and information sharing are best when people are meeting over Zoom, or in VR, or possibly on platforms that are still in development.

So I suspect that in the post-COVID world we will end up with a different mix of real and virtual. It won’t be the same as what we have been forced to go through this past year, but it won’t be exactly the same as what we had before that.

Just as people have learned to mix real life with the Web and with smartphones, I suspect we will end up mixing real life with new kinds of virtual meeting spaces. Whatever the mix might be, I hope it ends up bringing people closer together in the ways that really matter.

Learning and VR

Since yesterday’s post I have been trying to imagine in my head how I would build a software tool/experience for learning the periodic table of the elements in virtual reality. It’s not so much that I want that particular tool, but rather I am trying to figure out whether it would confer unique advantages to the learning process.

It would be wonderful if it turned out that a well designed learning tool in immersive VR is fundamentally better than what we have had until now. I am thinking that this particular learning task is sufficiently challenging, yet sufficiently contained, that we could do proper controlled experiments to figure out how well it works.

In a future post I might come back with some ideas about what such a learning experience would be like, and how we might instrument it so that we can properly assess its effectiveness relative to other approaches.