Working backward

If you want to explain the benefits of electricity, you don’t start with the design of an electrical outlet, or the way electrical networks are organized. Instead, you might talk about the benefits of refrigeration, air conditioning or electric lights.

We have a similar obligation when talking about a future technology. We should not start with how it works, but rather with the impacts it will have.

This is generally not easy, since a technology can change lots of things, particularly when combined with other technologies.

For example, Uber and Lyft required both smartphones and affordable geolocation. To make a proper prediction about those services, you would have needed to anticipate several different technologies.

But the principle remains: in order to properly talk about the impact of future technologies, you need to work backwards: first understand the potential impact, and only then move on to details about how the thing works.


Change is almost never bad
But losing old words can be sad
These days we say “fresh” or “rad”
For things we once called “groovy”

Yet for the new we must make room
And leave the old ways to their doom
The day will come we’ll think of Zoom
As like a silent movie

Dog gossip

Have you ever noticed that when people take a dog on a walk, the dog usually gets really interested in any poop it finds from other dogs? A dog will nearly always stop and sniff with great seriousness, as though the poop contains important information.

It’s as though there is a secret language, transmitted by smell, which only dogs know. A way for them to communicate which seems useless to anyone else, but is apparently of great importance to them.

So maybe smelling poop is how dogs gossip.

Or it could be the other way around. Maybe when we humans gossip, it’s our way of leaving poop for other people to find.


I wonder whether there will be a backlash from the right against the current temporary resident of the White House. This whole “sore loser” thing is doing the one thing that people never tolerate, whatever their political affiliation — making them look bad.

We might end up getting something healthy out of this — a conservative wing that is not in thrall to a wacky cult of narcissism. I think that’s a great thing, because any democracy needs a balance of power.

But when one side of that power balance loses its dignity, and is held hostage by a character straight out of a Looney Tunes cartoon, things don’t go well for anybody. Maybe now things will be better.

Forbin 66

I’ve been bingeing on Endeavour, the great British TV prequel to the Inspector Morse stories. Last night, during Season 4, Episode 1, somebody made a reference to the programming language “Forbin 66”.

On one level, this was clearly a reference to Fortran 66 — the first industry-standard version of the Fortran programming language. This fictional episode takes place in 1968, and it would be reasonable for a computer of that time to be programmed in Fortran 66. But there never was a programming language called “Forbin 66”.

As it happens, the plot of the episode features a kind of “man versus machine” story — in particular, a chess playing computer that promises to dethrone the best current human grandmaster, who happened to be Russian (this was all taking place during the Cold War). But when I heard “Forbin 66”, I knew it was an Easter Egg pointing to another story from that era.

Colossus: The Forbin Project was a great 1970 sci-fi movie (one of my favorites) that was also decidedly “man versus machine”. Charles Forbin is a genius who designs a secret computer defense system guaranteed to protect the U.S. from those pesky Russians.

But the Russians have built their own secret computer defense system. The two rival computer systems end up reaching out to each other and deciding they know how to run things better than humans do. Things do not turn out well for the humans.

This makes me wonder — how many other Easter Eggs do writers put into these TV shows just for fun? I suspect there might be an awful lot of them out there.

Virtual assistants

I wonder whether, after VR and AR really take off as part of everyday life, we will have virtual assistants. I have my doubts, because virtual assistants have failed spectacularly in the past.

Some of us remember Microsoft’s “Clippy”, introduced with Office 97. Clippy was a cute little animated paper clip that would pop up from time to time to offer helpful advice. Most people found it annoying, and Microsoft removed the feature a few years later.

There are indeed invisible assistants like Apple’s Siri, Amazon’s Alexa and Samsung’s Bixby. But they are voice-only — you never see them.

So I am somehow doubtful that visible assistants will gain a lot of traction in a forthcoming virtual and mixed reality world. Somehow I think the issue is not so much about technology, but rather about human nature.

Rules and tools

Rule-based systems on computers have an interesting connection with the tools you might find in a machine shop. Superficially they look different, but they have a lot in common.

A router, lathe, planer, saw, grinder and milling machine each serves a particular purpose. And each is an evolution of years or even centuries of invention and refinement.

The same is true of a rule in a computer software system. Whether the rule is to align two objects, detect similarities or differences, enforce symmetries, or combine properties in new ways, each tool has been refined and made easy to use over many iterations.

More importantly, the individual tools are meant to work together with each other. You wouldn’t have a rule system that did nothing but align two objects, any more than you would have a machine shop that contained nothing but a milling machine. It is the ability to go back and forth between different tools that gives the system its power.
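The idea of small rules working together can be sketched in code. This is a purely hypothetical illustration, not any particular rule system: each rule is a small function that transforms a scene of named 2D points, and the power comes from chaining rules rather than from any single one.

```python
# Hypothetical sketch: rules as composable tools. All names are invented
# for illustration.

def align_x(scene, a, b):
    """Rule: move point b so it shares point a's x coordinate."""
    ax, _ = scene[a]
    _, by = scene[b]
    return {**scene, b: (ax, by)}

def mirror_y(scene, a, b):
    """Rule: enforce symmetry by placing b at a's reflection across the x axis."""
    ax, ay = scene[a]
    return {**scene, b: (ax, -ay)}

def apply_rules(scene, rules):
    """Go back and forth between tools: apply each rule in sequence."""
    for rule in rules:
        scene = rule(scene)
    return scene

scene = {"p": (1.0, 2.0), "q": (4.0, 5.0)}
result = apply_rules(scene, [
    lambda s: align_x(s, "p", "q"),   # first tool: q moves to x = 1.0
    lambda s: mirror_y(s, "p", "q"),  # second tool: q becomes p's reflection
])
print(result["q"])  # (1.0, -2.0)
```

Note that each rule leaves the original scene untouched and returns a new one, which is one way of letting the tools combine freely without interfering with each other.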

I suspect there is a semantics of tools — and sets of tools — that transcends the question of what problem domain you are working in.