Personal principles

I was in a meeting recently at which we were deciding which projects to fund. Each of the proposed projects, according to the strict definition of the call for proposals, was worthwhile.

One of those projects offended my ethical principles. I couldn’t in good conscience vote for it, so I didn’t. And so another project, which I did not find to be ethically objectionable, was funded instead.

But here’s the thing: I didn’t tell the other people in the room why I wasn’t voting for that project. I was certainly under no obligation to do so. Still, I could have.

But then they would have had the opportunity to object, to say they didn’t share my principles, and on that basis to come to the defense of the project in question. And so by telling them more than I needed to, I might have helped a project to be funded that I objected to on ethical principles.

I realize that everyone has their own personal principles. I may never agree with yours, and you may never agree with mine. We are all different.

So in that moment I was faced with my own ethical crisis: Should I attempt, within a few minutes, to influence a group of people to agree with my view of what is ethical, or should I instead assert my ethics directly on the world itself?

I chose the latter course, and I still don’t know whether it was the right decision. But on balance, I am glad about the outcome.

Heartsick

I am heartsick at the horrific murders in Brussels. It is hard to put together a coherent set of thoughts in the face of so much cruelty and contempt for the sanctity of human life.

I hope that the United States will have the sense this coming November, in the face of such terrible monstrosity, to elect a sane, level-headed and competent grown-up as our next President.

Momentary Utopias

I received a phone call today from a colleague who is exploring the relationship between new technologies and ideas of Utopia. It was a wide-ranging and fun conversation.

The conversation had been prompted by my colleague’s interest in that immediate rush people felt when they tried out our Holojam system, and realized that they were able to enter a virtual world where they could draw in the air together. She said that this might be a feeling of encountering a kind of Utopia.

At some point I told her my view (which I mentioned in a blog post some years back) that you can’t live in the future for more than five minutes. In other words, we experience a feeling of awe and excitement when something is new, but that feeling goes away once we become used to the new way of things.

For example, we don’t stare in astonishment and wonder when the ceiling light goes on after we flip a light switch, even though the underlying technology of modern electrical power distribution is, in fact, pretty amazing. We don’t even stare in wonder when somebody stands on a street corner in NYC holding a conversation with a friend in California, even though mobile phone technology is even more amazing.

In short, my view is that you cannot actually live in techno-Utopia — you can only feel it during brief moments of technological transition. Utopia can never be a place you are living, but only a doorway you are walking through.

Some things never change

I am watching Halt and Catch Fire on Netflix, a series about computer entrepreneurs in the 1980s. I very much appreciate the fact that the heroes are mostly computer programmers or hardware hackers.

The technology is all absolutely spot-on. Every detail, no matter how arcane or nerdy, is completely correct and chronologically accurate. Clearly somebody on the writing or advisory team was actually there.

But what really intrigues me is that feeling of heady possibility, of creating an astonishing future that you know is just around the corner. It’s exactly what being in computer graphics felt like to me when I was just starting out.

And it’s exactly what it feels like now.

General knowledge

I participated yesterday in a workshop filled with extremely smart and exceptional people. In general the entire experience was wonderful and inspiring, and I learned a lot from everyone. But there was one odd moment.

One of the talks, you see, involved a bit of back and forth. From time to time the speaker would show something on the screen and solicit a response from the room. At one point he showed an image of the Mona Lisa sporting a mustache. Next to this he showed a photograph of a man’s face. He then asked “Who is the man in the photograph?”

I shouted out the obvious answer, expecting that a chorus of us would give the same answer: “Marcel Duchamp!” Yet in that entire room, only one other person spoke up. I realized then that nobody else knew about Duchamp’s iconic work L.H.O.O.Q. Either that, or they had suddenly all become strangely shy.

I’m certainly no art historian, and my knowledge of 20th Century art has huge gaps. But it seems to me that some things, like iconic works by pioneering artists, should be part of the general knowledge base of our populace. Yet clearly they are not, which tells me that something is screwy with the way education works in this country.

OK, maybe this isn’t the most important problem with our education system. After all, our high schools also manage to carefully avoid teaching mathematics, or even letting kids know how amazingly creative and fun math is. Instead they mostly teach a sequence of rote exercises and formulae that they mislabel as “mathematics”. Believe it or not, in most parts of this country you can get all the way through high school without ever learning the beauty of Euclid’s proof of the infinity of prime numbers.
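For anyone who never got to see it in school, the proof mentioned above really does fit in a few lines. Here is a sketch:

```latex
\textbf{Theorem (Euclid).} There are infinitely many primes.

\emph{Proof sketch.} Suppose $p_1, p_2, \ldots, p_n$ were all the primes,
and let $N = p_1 p_2 \cdots p_n + 1$. Dividing $N$ by any $p_i$ leaves
remainder $1$, so no $p_i$ divides $N$. But every integer greater than $1$
has some prime factor, so $N$ has a prime factor that is missing from the
list, contradicting the assumption that the list was complete. $\qed$
```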

So maybe in a way our education system is indeed teaching absurdism to our children. Except instead of painting a silly mustache on Leonardo da Vinci paintings, they are painting a silly mustache on rational thought itself. I wonder if many kids get the joke.

Silly Putty and a knife

I saw a wonderful talk today about machine learning. Most of the time when people talk about machine learning they deal in abstractions. They write down some math, they wave their hands, they mutter vaguely about neural networks, and in general they say things that are completely mysterious to most of the populace.

But the talk today, by Chris Olah, was anything but mysterious. He pretty much laid it out for us, in terms that anybody could understand.

Essentially, machine learning algorithms are like Silly Putty. They take the space of all of the variables that go into whatever an algorithm is trying to recognize, and they stretch and distort that space in all sorts of interesting ways.

After all that distortion, whatever it is the algorithm is supposed to recognize ends up on one side of some plane, and everything else ends up on the other side. For example, if the machine learning algorithm is trying to recognize pictures with dogs in them, then after all the Silly Putty distortion, all the pictures containing dogs will end up on one side of the plane, and all of the pictures without dogs will end up on the other side.

Then it’s just a matter of using a mathematical knife to slice the space along that plane. On one side will be all the dog pictures, on the other side will be the non-dog pictures.
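The idea can be seen in miniature with a toy example (my own illustration, not from the talk): two classes that no straight line can separate in two dimensions become separable by a flat cut once we stretch the space by adding one new coordinate.

```python
import numpy as np

# Two classes no straight line can separate in 2D:
# "dogs" on a small circle, "non-dogs" on a larger circle around them.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
inner = np.stack([0.5 * np.cos(angles[:100]), 0.5 * np.sin(angles[:100])], axis=1)
outer = np.stack([1.5 * np.cos(angles[100:]), 1.5 * np.sin(angles[100:])], axis=1)

def stretch(points):
    # The "Silly Putty" step: distort the space by adding a third
    # coordinate, the squared distance from the origin.
    return np.column_stack([points, (points ** 2).sum(axis=1)])

inner3 = stretch(inner)
outer3 = stretch(outer)

# The "knife": in the stretched space, the flat plane z = 1.0
# cleanly separates the two classes.
assert (inner3[:, 2] < 1.0).all()
assert (outer3[:, 2] > 1.0).all()
```

A trained neural network does essentially this, except that it learns the stretching from data instead of having it handed over by the programmer.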

And that my friends, in a nutshell, is what machine learning is all about. I had no idea, until today, that Silly Putty could be so useful.

Event for Marvin

Today I went to a large event at the MIT Media Lab in honor of Marvin Minsky. This was very different from the much smaller event I went to shortly after he passed away, which was just for family and a few friends.

Many wonderful things were said, and I took notes. Looking at those notes now, one of my favorites is from Pat Winston, who summed up Marvin’s contribution to A.I. like this: “Alan Turing told us we could make computers intelligent, and Marvin Minsky told us how to do it.”

Another is from Brian Silverman. He recalled that when Marvin was working with Seymour Papert on developing programming languages for kids in the early 1970s, and there was no computer that could do what Seymour needed, Marvin just designed a new kind of computer and built it himself.

I particularly like the way Brian said it: “Research required a particular thing. If that thing didn’t exist, Marvin just invented it.”

They also passed out fortune cookies at the event, each with a quote from Marvin. At the end of the evening I saw somebody carrying out a large bag of left-over fortune cookies. “Careful,” I told her, “if you eat too many of those at once, everything will start to make sense.”

Future non-verbal communication

Humans are very good at picking up on subtle non-verbal cues. We can generally tell when somebody is nervous, or excited, or joyful or confused, without needing to hear a single word.

There is, reasonably enough, much worry about whether these sorts of important interpersonal cues will be preserved when people are having extended conversations in a shared virtual world. But I am not worried. In fact, quite the opposite.

When you and I speak on the phone, we don’t feel that our inability to see each other destroys our ability to communicate. Instead, we both understand quite well that the only channel we have is voice, so we pay more attention to vocal cues. When communicating with each other, people are very good at sussing out where the good quality information is, and focusing their attention accordingly.

I think something similar will happen for face to face communication in virtual worlds. At first, the body cues will be a strict subset of those in real life. We will be able to see each other’s head movements, and then perhaps hand movements, but it will take a little while longer to transmit all of the subtleties of full body motion.

At every step of this evolution, we will instinctively know where the quality information is coming from, because that’s what people are good at. Once we are used to any particular mode of future face to face communication, we won’t think of it as odd, or off-putting, any more than we currently think that way about talking to each other on the phone.

But what if it’s even better than that? What if it turns out that our brains are more evolved and capable at supporting face to face communication than our bodies are?

If that is so, then there might come a point when computationally enhanced body language actually lets us convey and apprehend subtle cues of body language that are not possible in physical reality. When that happens, we might find that body language and facial expression, suitably enhanced by computer intermediation, will allow us to communicate with each other more effectively than was ever before possible in the history of humanity.

After a generation or two of living with such advanced support for non-verbal communication, people might wonder how the human race ever got along without it.

Meeting on an alien planet

Today at our lab we held part of our weekly production meeting in VR. We were all in the same physical room, and we could have seen each other in person, but we opted to put on wireless headsets, using our Holojam technology, and hold our conversation “in world”.

Because we could move around the room freely, it felt as though we had all been transported together to that alternate world. We could continue to talk with each other, but while inhabiting alien bodies on another planet.

This was just an early experiment. We still haven’t added enough things to do on that planet to make it a place we would prefer to spend a lot of time hanging out together. But we are going to keep adding. Every week, our alternate meeting room on a friendly alien world is going to become an ever more interesting place to hang out.

Still, to quote the last line of a great movie, there’s no place like home.

Happy Ides of March! 🙂