Getting to know your robot

I’ve been giving a lot of thought to the “programming without math” question, and my views have been shifting since my earlier posts on the subject. I think now that I underestimated the significance of the comment by Andras:

“As computing is drastically transforming our society, I think great minds need to look at transforming ‘programming’ to be more engageable and useful to a wider audience.”

The blocks world with snap-together tiles that I was playing with earlier is really quite close to the standard “procedural” paradigm for programming, in which you must explicitly tell the computer what to do, and in what order. I now think the hurdle for that is still too high, because it fails my fifth-of-vodka test. In a nutshell: for 90% of the population to embrace something, it has to still be fun to use after you’ve imbibed a fifth of vodka. That’s true of the Apple iPhone and most popular TV shows, but not of any existing programming language, since programming languages are notoriously intolerant of errors.

It would be great to have the sort of interface that science fiction writers fantasize about when they create imaginary robots. George Jetson doesn’t need to type in code to get Rosie the Robot to know what he’s talking about, and Luke Skywalker doesn’t require some long-ago-and-far-away Jedi version of Java to give instructions to R2-D2. In both cases, they talk to their robots. This may be a fantasy, but it also might contain seeds of a necessary truth.

Certainly, anything that will be used by most people will need to be very error tolerant. We need to give people an environment for talking to robots that lets them make mistakes and yet still more or less works. People are quite good at learning to find their way through fuzzy systems that respond with some level of consistency (that is, in fact, a high-level description of every toddler’s experience of the world).

And that means the system will need a strong element of what programmers call “declarative programming”. You, the user, are allowed to give general rules for what you think your robot should do, and those rules don’t need to be arranged in a rigid order. This is more in line with the way people usually think. If you say “I like my songs arranged with the sad songs first,” then your robot should generally know to put the sad songs first on your song list. You’re not giving it explicit instructions for how to do this. Rather, you’re giving it a general rule to influence its behavior.
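To make the distinction concrete, here is a minimal sketch in Python. All of the names here are hypothetical, invented for illustration; the point is only that the user contributes a rule, not a procedure, and the system decides how to satisfy it:

```python
# A minimal sketch of a declarative rule (all names are hypothetical):
# the user states a preference once, and the system, not the user,
# chooses the mechanism for honoring it.

songs = [
    {"title": "Walking on Sunshine", "mood": "happy"},
    {"title": "Hurt",                "mood": "sad"},
    {"title": "Happy",               "mood": "happy"},
    {"title": "Tears in Heaven",     "mood": "sad"},
]

# The user's rule: sad songs score lower, so they come first.
# More rules could be added later, in any order.
rules = [lambda song: 0 if song["mood"] == "sad" else 1]

def build_playlist(songs, rules):
    # Here the "mechanism" is just a stable sort over rule scores;
    # the user never sees or specifies this step.
    return sorted(songs, key=lambda s: tuple(r(s) for r in rules))

for song in build_playlist(songs, rules):
    print(song["title"])
# Hurt, Tears in Heaven, Walking on Sunshine, Happy
```

Notice that adding a second rule later requires no rearranging of steps; the system simply folds it into the same mechanism.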

Generally this means that there will be some kind of software running inside the robot that does “constraint solving”: given constraining rules to work with, the robot comes up with solutions that fit those constraints. There is already an entire subfield of computer science concerned with declarative, constraint-based programming, but the available languages, such as CLIPS, Soar and Prolog, are considered tools for A.I. researchers, and generally require an expert user.
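For a taste of what “solving” means here, the toy below (again Python, again hypothetical; real systems like CLIPS or Prolog look nothing like this) searches for any ordering of a song list that satisfies every user-supplied constraint at once:

```python
from itertools import permutations

# A toy constraint solver: constraints are predicates over a candidate
# ordering, and the solver searches for an ordering that satisfies all
# of them. Everything here is invented for illustration.

songs = ["Hurt", "Happy", "Tears in Heaven"]
mood = {"Hurt": "sad", "Happy": "happy", "Tears in Heaven": "sad"}

constraints = [
    # "Sad songs come before happy ones."
    lambda order: all(
        order.index(a) < order.index(b)
        for a in order if mood[a] == "sad"
        for b in order if mood[b] == "happy"
    ),
]

def solve(items, constraints):
    # Exhaustive search: fine for a toy, hopeless at scale, which is
    # one reason constraint solving is still a research subfield.
    for candidate in permutations(items):
        if all(c(candidate) for c in constraints):
            return candidate
    return None

print(solve(songs, constraints))
# ('Hurt', 'Tears in Heaven', 'Happy')
```

The design choice that matters is in the user's half: the constraint says what a good answer looks like, and stays silent on how to find one.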

While we’re on the subject of A.I., it is important to reiterate that computers are not people. As Ben Shneiderman is fond of pointing out, a computer is closer to a pencil than it is to a person. Our robots might one day develop the sort of “reasoning” process that we associate with humans (many brilliant people have been valiantly trying to climb that mountain for decades now), but there is no guarantee this goal will ever be achieved, and certainly no assurance it will happen in our lifetimes.

Even the software “robots” that the folks at Google incorporate into their Wave project (software agents that lurk behind the scenes to interactively modify and update your screen widgets) are very literal-minded, and are generally programmed the old-fashioned way, through a procedural AppBuilder language that is essentially a gloss on such “expert” languages as Java.

In order to create robots that are accessible enough that most people can explain things to them, I think we will need to go back to some of the ideas I discussed two years ago when talking about Theory of Mind, and the great work in this area by Lisa Zunshine and others (about which there was a lovely article the other day in The New York Times). In other words, we will need to develop a Theory of Mind about what robots can and can’t do.

So this is going to be a two-way street. Yes, we need to make future robots more accessible to the 95% of the population that is now left out, by adding natural language interfaces that allow people to talk to their robots declaratively (i.e., “I like my songs arranged with the sad songs first”). But we will also need to gradually teach people a Theory of Mind about robots, so that we humans properly understand the peculiar nature of this strange new species that we will be learning how to talk to in the years to come.
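To suggest how shallow a first version of such an interface might be, here is a deliberately crude sketch: a single pattern, purely hypothetical and far short of real natural language understanding, that turns one family of sentences into a declarative rule of the kind described above.

```python
import re

# A hypothetical natural-language front end, reduced to one regex.
# A real system would need vastly more than pattern matching; this
# only illustrates the pipeline: sentence -> rule -> behavior.

def parse_preference(sentence):
    # Matches sentences like "... with the sad songs first".
    m = re.search(r"with the (\w+) songs first", sentence)
    if m:
        mood = m.group(1)
        return lambda song: 0 if song["mood"] == mood else 1
    return None

rule = parse_preference("I like my songs arranged with the sad songs first")
songs = [{"title": "Happy", "mood": "happy"},
         {"title": "Hurt",  "mood": "sad"}]
print([s["title"] for s in sorted(songs, key=rule)])
# ['Hurt', 'Happy']
```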

3 thoughts on “Getting to know your robot”

  1. This brings up all kinds of issues, but I’ll boil them down to two: 1) whether the strong AI envisioned here is possible; and 2) whether it would be good.

    To say that you like sad songs first, and to expect that a player would start doing that, seems to me like a pretty bad idea. Maybe I like sad songs first when I’m sad, or only when I’m happy. Or maybe it was a joke. Or maybe it is true in the context in which I said it, but not in other contexts. The reality is that there are a huge number of things I say that I do *not* want the world to respond to by changing the way it does things.

    Part of the magic of computer programming is that the programmer is in charge. Making autonomous programs that do what they want, possibly informed by a person, is definitely not the kind of programming I want to do.

    Even “put that there” is pretty hard for a human to do (um, do you mean put it next to the phone or the bulletin board???)

    If you really want a computer to do something for you, don’t you want to be able to have pretty high confidence in what it is going to do? There is some other kind of “agent”-based interaction with computers which might be interesting in some contexts – but it isn’t programming. It is interacting.

  2. Yes, Ben, I agree that it isn’t programming. So we shouldn’t call it programming. Your description of the various contexts that could vary the intent of “put sad songs first” is an example of a situation where rule-based systems tend to do well.

    Assuming we can’t get everyone using ALGOL-like languages (which I suspect is the case), it would still be good to empower people to be able to do more sophisticated types of interactions with computers. My hypothesis is that we can use a well-chosen subset of natural language as a front end to a constraint system, in the context of an interactive simulation. It’s still an untested hypothesis, but I plan to test it. 🙂
