Private AR and ethics, continued

In our discussion about what would constitute “reasonable” privacy in future augmented reality, my colleagues and I settled on some core principles. To review, the question on the table was the following: Suppose you allow yourself to see, through your augmented reality glasses, a synthetic overlay that others would find socially unacceptable. Under what circumstances is that OK?

The conclusion we reached was that it all comes down to whether you’ve taken reasonable steps to lock down your private info. An example of not taking such steps would be leaving your AR glasses lying around on the table, just waiting for somebody to slip them on and see your overlay content.

The key here is the word “reasonable”. For example, if you lock the door to your apartment, and somebody still breaks in, the law generally acknowledges that you were not negligent. The “bad actor” who broke into your apartment was violating well established norms — both cultural and legal.

The same principle will apply to the question of “What can I see privately in my AR glasses?” If you’ve taken reasonable steps to protect your privacy, then it is not your fault if somebody else violates the law.

Private AR and ethics

I think most people would agree that hate speech is problematic. The vandal who spray paints a swastika on a synagogue door is engaging the entire community in an ethical challenge.

These days, most communities would respond to such a challenge in a very negative way. Such an act would be labeled hate speech, and there would be consequences for the perpetrator.

But what about “speech” that is intended for nobody but oneself? Suppose, for example, you enhance your (slightly futuristic) augmented reality glasses to draw a virtual swastika on the front door of every synagogue you can see — an intervention intended for your eyes only.

Have you committed an ethical violation of community standards? Have you, in fact, even engaged your community in any way?

Today I had a rousing debate with some colleagues about these very questions. We didn’t come up with any simple solutions, but we did work out some basic principles.

More tomorrow.

Transparent process

I was having a conversation with a colleague and the phrase “transparent process” came up. It’s a great phrase, and it strikes to the heart of some interesting cultural questions.

For example, why is there a rich general shared culture of music, or of cooking, or of gardening or acting or writing, but not so much of computer programming or architecture? There are many answers to this question, but I suspect at least part of it has to do with transparent process.

The process of getting into music or cooking or gardening or acting or writing — and of many other crafts and skills as well — is quite transparent. Even a beginning musician or writer understands the basic process, and is able to perceive and absorb the ideas of advanced practitioners.

Yet many fields — particularly those we think of as the “technical fields” — don’t seem to offer this level of transparency. Most people can pick out a melody on a musical keyboard, yet most people cannot write even the simplest of computer programs.

This is not for lack of trying. There have been many attempts to create a transparent onboarding process for budding programmers. And yet it is arguable that these efforts have failed, at least in comparison with efforts to show that “anybody can cook” or “anybody can play the piano”.

I wonder whether this is due to an inherent opacity somewhere in the process of learning the so-called “technical fields”, or to cultural bias. Or perhaps it is due to something else entirely.

Procedure versus data, part 3

In particular, we’ve had a long-running split in the computer world between “compute it” and “capture it”. In my own work in texturing, it has often come down to “generate a procedural texture” or “scan a texture image”.

Yet like most dichotomies, that turns out to be a simplification. In practice, people will scan a texture image and then use that image as source material for a procedure.

For example, you might use Photoshop to paint an image of “here is where the forest should go.” Then the places where you painted green will be used by a computer program to grow synthetic trees.
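This mask-driven partnership can be sketched in a few lines of Python. The mask, function names, and grid here are all hypothetical stand-ins, not any particular production tool — the point is just that painted data steers a generative procedure:

```python
import random

# A tiny stand-in for a painted mask: 1 marks "forest here", 0 marks empty.
# (In practice this would come from a scanned or painted image.)
forest_mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
]

def grow_trees(mask, seed=0):
    """Use painted data to drive a procedure: drop one tree, with a
    little random jitter, at every cell the artist painted green."""
    rng = random.Random(seed)
    trees = []
    for row, cells in enumerate(mask):
        for col, painted in enumerate(cells):
            if painted:
                # Jitter each position so the forest doesn't look like a grid.
                trees.append((col + rng.random(), row + rng.random()))
    return trees

trees = grow_trees(forest_mask)
print(len(trees))  # one tree per painted cell
```

The artist never places an individual tree; the painting supplies the data and the procedure supplies the detail.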

So in the best cases it’s not really “procedure versus data,” but more “procedure using data.” Now we are just entering a new regime where this partnership is really taking off.

That’s because of recent rapid advances in machine learning. The beauty of machine learning is that it builds a procedure from data. The more examples of existing data you give it, the better the procedure it can build.

Machine learning isn’t a panacea — it will only give good answers for new things that are similar to the things you’ve already shown it. But it’s a lot better than anything we’ve had before.
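The “build a procedure from data” idea can be seen even in a toy example that is far simpler than real machine learning — ordinary least-squares fitting. The data here is generated from an exact line just to keep the sketch deterministic:

```python
import numpy as np

# Data: samples of some underlying process (here, exactly y = 2x + 1,
# to keep the sketch deterministic).
xs = np.linspace(0.0, 5.0, 20)
ys = 2.0 * xs + 1.0

# "Learning": build a procedure (a fitted polynomial) from the data.
coeffs = np.polyfit(xs, ys, deg=1)
procedure = np.poly1d(coeffs)

# The fitted procedure now answers questions about inputs it never saw,
# as long as those inputs resemble the training data.
print(round(float(procedure(2.5)), 3))  # ≈ 6.0, i.e. 2 * 2.5 + 1
```

Feed it data, get back a procedure — and, true to the caveat above, ask it about inputs far outside the data and the answers degrade.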

For solving completely new problems, we still need human brains creating procedures. Computers don’t know how to do that yet. And maybe they never will.

Which may not be a bad thing. 🙂

Procedure versus data, part 2

This whole argument about “procedure versus data” is perhaps a bit of a red herring. Long before computers, the two modes of operation formed a complementary set.

For example, you probably know a musician who has an encyclopedic memory for songs. You name pretty much any song, and he or she will remember that song on the spot and play it for you.

And you may know a musician who is a great improviser. You name a musical style, and he or she will be able to immediately riff in that style and create something new, something that has never been heard before.

In my experience, one rarely finds a high level of development of these two complementary skills within the same individual. And that makes sense, since each kind of skill takes not only native talent but also many hours of time and practice to learn and develop.

But why should these be seen as two separate skills? Isn’t there some place where they meet, and build upon each other? More on this tomorrow.

Procedure versus data, part 1

Many years ago I learned about what I thought of as the “synthesizer wars”. Back then, the Roland keyboard synthesizer worked by creating an instrument’s audio waveform entirely by procedural methods. This is more or less the musical equivalent of the way procedural textures work in computer graphics.

In contrast, the Yamaha synthesizer worked by having lots and lots of different recorded samples of instrument sounds. To create variations in tone it would blend samples together.

Since I am a big fan of procedural textures (for obvious reasons), I really liked the Roland approach. Alas, the Yamaha did better in the marketplace, because it was easier to create sounds for.

The Roland required somebody with real skill to write the procedure that synthesizes a given sound. The Yamaha just required lots of sound samples. That’s a problem you can solve without a lot of skill, if you’re willing to throw enough money at it.
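The two approaches can be caricatured in a few lines of Python. This is a deliberately crude sketch — the real instruments’ synthesis methods were far more sophisticated — but it captures the contrast: one path computes a waveform from a formula, the other crossfades stored recordings:

```python
import math

SAMPLE_RATE = 8000  # samples per second, kept small for the sketch

def procedural_tone(freq_hz, n_samples):
    """The 'write a procedure' style: compute the waveform directly
    from a formula, with no recordings needed."""
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n_samples)]

def blend_samples(sample_a, sample_b, mix):
    """The 'collect samples' style: make new tones by crossfading
    between stored recordings (here, two stand-in lists)."""
    return [(1 - mix) * a + mix * b for a, b in zip(sample_a, sample_b)]

# The "recordings" are just data; where they came from doesn't matter.
flute = procedural_tone(440.0, 64)
brass = procedural_tone(660.0, 64)
hybrid = blend_samples(flute, brass, 0.25)  # 75% flute, 25% brass
```

Writing `procedural_tone` well for a convincing instrument takes real skill; filling a library of recordings for `blend_samples` mostly takes money.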

This dichotomy has repeated itself in a lot of computer fields. Should you try to build a procedure to describe something algorithmically, or do you find an actual sample of the thing out in the world and then modify that? Each approach has advantages and disadvantages.

More tomorrow.

You can’t make this stuff up

I just read that, a few days ago, two businessmen in a Philadelphia Starbucks were waiting for a colleague to discuss a real estate deal. Like many people (me, for example), they decided to be polite and wait until their colleague arrived before ordering.

The manager told them that they couldn’t wait for their colleague without first ordering something. When they didn’t order anything, the manager called the cops.

Six police officers arrived and told the men they needed to leave, so the men explained to the police that they were waiting for a colleague. Their colleague arrived just in time to see his two associates, who bystanders said had been very polite to the police throughout the entire incident, being cuffed and carted away.

The two businessmen were taken to the police station, arrested, fingerprinted, and kept in custody for about eight or nine hours before being released. The reason for their release, according to the Philadelphia district attorney, was that there was no evidence that any crime had been committed.

Philadelphia Police Commissioner Richard Ross praised the police officers, saying that “they behaved properly and followed procedure.”

You can’t make this stuff up.

Punnishingly descriptive

Today, in a very silly musical pun-off with Jaron Lanier, I said “violinists are high strung, but they never fret.” I am happy to report that my moment of egregiously low humor received the groan that it so richly deserved.

I wonder whether anyone has looked at this form of punnishingly descriptive language as an art-form in its own right. Would it be possible to create an entire on-line dictionary of such wickedly painful descriptions?

I see such a thing as a community effort. Perhaps we can start a Wickipedia to put all these things together in one place. Am I the only one who thinks this would be a good idea?

Probably. 🙂

2 x 50

Today I visited the Computer History Museum in Mountain View. It’s a marvelous place, and there were many things there that delighted me.

But two in particular jumped out. Both were invented exactly fifty years ago, and both have managed to change the way we look at reality.

One was Alan Kay’s original mock-up of the Dynabook — his vision for what a computer might one day look like. This radical concept influenced everything to come, informing the design of the notebook computer, the smartphone and the data tablet.

It’s remarkable to realize that Alan introduced such a design in the Paleolithic age of computation. Back then, when most people thought of a “computer” they pictured a mainframe consisting of row after row of giant room-filling cabinets.

The other was Ivan Sutherland’s original “Sword of Damocles”: the very first working virtual reality headset. How astonishing that Ivan could see the future from such a long distance away. It takes a special kind of vision to see that far.

I wonder what visions someone might be having now that will have that kind of impact in another half a century. Maybe we will just need to wait to find out.