Archive for April, 2018

Transparent process

Friday, April 20th, 2018

I was having a conversation with a colleague and the phrase “transparent process” came up. It’s a great phrase, and it strikes to the heart of some interesting cultural questions.

For example, why is there a rich general shared culture of music, or of cooking, or of gardening or acting or writing, but not so much of computer programming or architecture? There are many answers to this question, but I suspect at least part of it has to do with transparent process.

The process of getting into music or cooking or gardening or acting or writing — and of many other crafts and skills as well — is quite transparent. Even a beginning musician or writer understands the basic process, and is able to perceive and absorb the ideas of advanced practitioners.

Yet many fields — particularly those we think of as the “technical fields” — don’t seem to offer this level of transparency. Most people can pick out a melody on a musical keyboard, yet very few can write even the simplest of computer programs.

This is not for lack of trying. There have been many attempts to create a transparent onboarding process for budding programmers. And yet it is arguable that these efforts have failed, at least in comparison with efforts to show that “anybody can cook” or “anybody can play the piano”.

I wonder whether this is due to an inherent opacity somewhere in the process of learning the so-called “technical fields”, or to cultural bias. Or perhaps it is due to something else entirely.

Procedure versus data, part 3

Thursday, April 19th, 2018

In particular, we’ve had a long running split in the computer world between “compute it” and “capture it”. In my own work in texturing, it has often come down to “generate a procedural texture” or “scan a texture image”.

Yet like most dichotomies, that turns out to be a simplification. In practice, people will scan a texture image and then use that image as source material for a procedure.

For example, you might use Photoshop to paint an image of “here is where the forest should go.” Then the places where you painted green will be used by a computer program to grow synthetic trees.
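That forest example can be sketched in a few lines. This is only a toy illustration, with a NumPy array and invented dimensions standing in for the Photoshop image:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the painted map: a 64x64 RGB image where a block of
# green pixels marks "here is where the forest should go."
painted = np.zeros((64, 64, 3))
painted[20:40, 10:50, 1] = 1.0  # paint a green rectangle

# The "procedure using data" step: find every painted pixel, then
# grow synthetic trees at ten random spots inside the painted region.
ys, xs = np.nonzero(painted[:, :, 1] > 0.5)
picks = rng.choice(len(ys), size=10, replace=False)
trees = [(int(xs[i]), int(ys[i])) for i in picks]
```

Every tree ends up somewhere inside the region that was painted green: the data guides the procedure without dictating its exact output.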

So in the best cases it’s not really “procedure versus data,” but more “procedure using data.” Now we are just entering a new regime where this partnership is really taking off.

That’s because of recent rapid advances in machine learning. The beauty of machine learning is that it builds a procedure from data. The more examples of existing data you give it, the better will be the procedure that it can build.

Machine learning isn’t a panacea — it can only give answers for new things that are similar to the things you’ve already shown it. But it’s a lot better than anything we’ve had before.
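A minimal illustration of a procedure built from data is a toy nearest-neighbor classifier (the example points below are invented for the demo). It answers any query with the label of the most similar thing it has already been shown — which is exactly both its strength and its limit:

```python
import numpy as np

# Toy training data: a few examples of two kinds of "thing" in 2D.
examples = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
labels = np.array([0, 0, 1, 1])

def learned_procedure(x):
    """A procedure built from data: return the label of the nearest
    example we have already been shown."""
    distances = np.linalg.norm(examples - np.asarray(x), axis=1)
    return int(labels[np.argmin(distances)])
```

For a query near the examples this works well; for a query unlike anything in the data it still answers, but the answer means little.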

For solving completely new problems, we still need human brains creating procedures. Computers don’t know how to do that yet. And maybe they never will.

Which may not be a bad thing. :-)

Procedure versus data, part 2

Wednesday, April 18th, 2018

This whole argument about “procedure versus data” is perhaps a bit of a red herring. Long before computers, the two modes of operation formed a complementary set.

For example, you probably know a musician who has an encyclopedic memory for songs. You name pretty much any song, and he or she will remember that song on the spot and play it for you.

And you may know a musician who is a great improviser. You name a musical style, and he or she will be able to immediately riff in that style and create something new, something that has never been heard before.

In my experience, one rarely finds a high level of development of these two complementary skills within the same individual. And that makes sense, since each kind of skill takes not only native talent but many hours of time and practice to learn and develop.

But why should these be seen as two separate skills? Isn’t there some place where they meet, and build upon each other? More on this tomorrow.

Procedure versus data, part 1

Tuesday, April 17th, 2018

Many years ago I learned about what I thought of as the “synthesizer wars”. Back then, the Roland keyboard synthesizer worked by creating an instrument’s audio waveform entirely by procedural methods. This is more or less the musical equivalent of the way procedural textures work in computer graphics.

In contrast, the Yamaha synthesizer worked by having lots and lots of different recorded samples of instrument sounds. To create variations in tone it would blend samples together.

Since I am a big fan of procedural textures (for obvious reasons), I really liked the Roland approach. Alas, the Yamaha did better in the marketplace, because it was easier to create sounds for.

The Roland required somebody with real skill to write the procedure that synthesizes a given sound. The Yamaha just required lots of sound samples. That’s a problem you can solve without a lot of skill, if you’re willing to throw enough money at it.
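The two approaches can be caricatured in a few lines of Python. This is only a sketch: the generated tones below stand in for both the Roland-style procedure and the Yamaha-style recorded samples, since no real instrument data is involved:

```python
import numpy as np

sr = 8000                 # sample rate (Hz)
t = np.arange(sr) / sr    # one second of time values

# The "Roland" approach: synthesize a tone entirely procedurally,
# here as a fundamental plus two harmonics.
def procedural_tone(freq):
    return (np.sin(2 * np.pi * freq * t)
            + 0.5 * np.sin(2 * np.pi * 2 * freq * t)
            + 0.25 * np.sin(2 * np.pi * 3 * freq * t))

# The "Yamaha" approach: store samples and blend between them to get
# variations in tone. (These "recordings" are stand-ins generated above.)
sample_soft = procedural_tone(220)
sample_bright = procedural_tone(220) + 0.4 * np.sin(2 * np.pi * 880 * t)

def blended_tone(mix):
    return (1 - mix) * sample_soft + mix * sample_bright
```

The procedural path needs someone to design `procedural_tone` for every instrument; the sample path just needs more recordings and a mixing knob.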

This dichotomy has repeated itself in many computer fields. Should you try to build a procedure that describes something algorithmically, or do you find an actual sample of the thing out in the world and then modify that? Each approach has advantages and disadvantages.

More tomorrow.

You can’t make this stuff up

Monday, April 16th, 2018

I just read that a few days ago in a Starbucks in Philadelphia two businessmen were waiting for a colleague to discuss a real estate deal. Like many people (me, for example), they decided to be polite and wait until their colleague arrived before ordering.

The manager told them that they couldn’t wait for their colleague without first ordering something. When they didn’t order anything, the manager called the cops.

Six police officers arrived and told the men they needed to leave, so the men explained to the police that they were waiting for a colleague. Their colleague arrived just in time to see his two associates, who bystanders said had been very polite to the police throughout the entire incident, being cuffed and carted away.

The two businessmen were taken to the police station, arrested, fingerprinted, and kept in custody for about eight or nine hours before being released. The reason for their release, according to the Philadelphia district attorney, was that there was no evidence that any crime had been committed.

Philadelphia Police Commissioner Richard Ross praised the police officers, saying that “they behaved properly and followed procedure.”

You can’t make this stuff up.

Punnishingly descriptive

Sunday, April 15th, 2018

Today, in a very silly musical pun-off with Jaron Lanier, I said “violinists are high strung, but they never fret.” I am happy to report that my moment of egregiously low humor received the groan that it so richly deserved.

I wonder whether anyone has looked at this form of punnishingly descriptive language as an art form in its own right. Would it be possible to create an entire on-line dictionary of such wickedly painful descriptions?

I see such a thing as a community effort. Perhaps we can start a Wickipedia to put all these things together in one place. Am I the only one who thinks this would be a good idea?

Probably. :-)

2 x 50

Saturday, April 14th, 2018

Today I visited the Computer History Museum in Mountain View. It’s a marvelous place, and there were many things there that delighted me.

But two in particular jumped out. Both were invented exactly fifty years ago, and both have managed to change the way we look at reality.

One was Alan Kay’s original mock-up of the Dynabook — his vision for what a computer might one day look like. This radical concept influenced everything to come, informing the design of the notebook computer, the smartphone and the data tablet.

It’s remarkable to realize that Alan introduced such a design in the Paleolithic age of computation. Back then, when most people thought of a “computer” they pictured a mainframe consisting of row after row of giant, room-filling cabinets.

The other was Ivan Sutherland’s original “Sword of Damocles”: the very first working virtual reality headset. How astonishing that Ivan could see the future from such a long distance away. It takes a special kind of vision to see that far.


I wonder what visions someone might be having now that will have that kind of impact in another half a century. Maybe we will just need to wait to find out.

Sketchtext

Friday, April 13th, 2018

Here is the diagram I started to draw on the whiteboard at Google yesterday, although on the whiteboard I drew only the surrounding square and the 26 letters.

In the above image, you can see me in the process of drawing a letter “b”: I draw a stroke first to the upper left (where the “b” is located in the alphabet) and then veer to the right (where the “b” is located within its little cluster of letters).

Some very frequently occurring characters, like a, e, i, l, o and t, are just a single straight stroke. The others are all bent shapes.

The space character is just a click, and the capital letters you see in the dictionary are special characters: D for Delete, C for Caps, A for Alt keyboard (e.g., most punctuation) and E for Enter.

Sketchtext is a variant of the Quikwriting system I wrote over twenty years ago, but with a particular emphasis on being able to sketch text in VR, in situations where you want to be able to spell things out without needing to look at your pen.

The view you are seeing is the tutorial view. It’s a very easy system to learn, because everything goes around the circle in alphabetical order, so when you’re using it, you don’t really need to see the dictionary.
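A rough sketch of the decoding idea, assuming a hypothetical cluster layout (the actual Sketchtext arrangement of letters around the square is not the one below): each gesture is a pair of compass-direction zones, the first picking a cluster of letters and the second picking a letter within that cluster, with a bare click standing for space:

```python
# Hypothetical eight-cluster layout, alphabetical around the square.
# Invented for illustration; the real Sketchtext layout differs.
clusters = ["abc", "def", "ghi", "jkl", "mno", "pqr", "stu", "vwxyz"]

def decode(stroke):
    """Decode one gesture. None means a bare click (the space
    character); otherwise stroke is a (start_zone, end_zone) pair
    of compass directions 0..7."""
    if stroke is None:
        return " "
    start, end = stroke
    cluster = clusters[start]            # first segment picks the cluster
    return cluster[end % len(cluster)]   # second segment picks the letter
```

Because the letters run around the square in alphabetical order, a user can eventually produce the strokes from memory, without needing to see the layout at all.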

I wonder if it will be better than Morse Code. :-)

Starting a meeting at Google

Thursday, April 12th, 2018

Today I was visiting Google, and at the very start of the meeting one of the Googlers — a young man I had never before met — started talking about the problem of writing text in VR.

As it happens, on the flight on the way over, I had implemented yet another gestural text entry system. Implementing these has been a hobby of mine for many years.

So I jumped up and scribbled my system on the whiteboard. Then he jumped up and scribbled his system on the whiteboard. Excited conversation and much scribbling ensued.

After a few minutes, we both sat down, feeling quite pleased with the exchange. Then we looked around the table and realized that everyone else had just been watching us.

My host — the person who had actually invited me — politely suggested that we make introductions before starting the meeting. “OK,” I said, feeling somewhat sheepish, “now that we’ve finished nerding out.”

Writing in VR

Wednesday, April 11th, 2018

I am not convinced that for creating text we will want to use a keyboard, either real or virtual, in a future reality where millions of people wander around together in shared virtual and augmented reality. Perhaps we will simply move away from the use of text altogether.

After all, speech-to-text is now quite reliable, and faster than typing in many cases. Still, there is something appealing about using our hands rather than our mouths to create text. It allows us to work with text while continuing our conversation with other humans, which is very useful for collaboration.

Because of the recent emergence of VR at the consumer level, a lot of people are now thinking about the text input question. But what properties should a “virtual VR/AR keyboard” have?

One of the great things about using your hands to type on a QWERTY keyboard is that you don’t need to look at your hands. You can keep talking with other people, maintain eye contact, and absorb their body language, all while typing away.

I suspect that we will continue to value those two constraints: (1) the ability to continue talking with people while creating text, and (2) not needing to look at your hands while you are creating text. Exactly what form that will take, as VR and AR continue to go mainstream, only time will tell.