I’ve been giving a lot of thought to the “programming without math” question, and my views have been shifting since my earlier posts on the subject. I think now that I underestimated the significance of the comment by Andras:
“As computing is drastically transforming our society, I think great minds need to look at transforming ‘programming’ to be more engageable and useful to a wider audience.”
The blocks world with snap-together tiles that I was playing with earlier is really quite close to the standard “procedural” paradigm for programming, in which you need to explicitly tell the computer what to do, and in what order. I now think the hurdle for that is still too high, because it fails my fifth-of-vodka test. In a nutshell: for 90% of the population to embrace something, it still has to be fun to use after you’ve imbibed a fifth of vodka. That’s true of the Apple iPhone and most popular TV shows, but not of any existing programming language, every one of which is notoriously intolerant of errors.
It would be great to have the sort of interface that science fiction writers fantasize about when they create imaginary robots. George Jetson doesn’t need to type in code to get Rosie the Robot to know what he’s talking about, and Luke Skywalker doesn’t require some long-ago-and-far-away Jedi version of Java to give instructions to R2D2. In both cases, they talk to their robots. This may be a fantasy, but it also might contain seeds of a necessary truth.
Certainly, anything that will be used by most people will need to be very error tolerant. We need to give people an environment for talking to robots that allows them to make mistakes, and yet still more or less works. People are quite good at learning to find their way through fuzzy systems that respond with some level of consistency (that is, in fact, a high-level description of every toddler’s experience of the world).
And that means the system will need a strong element of what programmers call “declarative programming.” You, the user, are allowed to give general rules for what you think your robot should do, and those rules don’t need to be arranged in a rigid order. This is more in line with the way people usually think. If you say “I like my songs arranged with the sad songs first,” then your robot should generally know to put the sad songs first on your song list. You’re not giving it explicit instructions for how to do this. Rather, you’re giving it a general rule to influence its behavior.
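To make that concrete, here is a minimal sketch in Python of how a rule like “sad songs first” might work. The song data, the rule format, and the arrange function are all hypothetical illustrations of mine, not any real robot interface; the point is only that the user contributes a rule, while the sorting procedure belongs entirely to the system.

```python
# A toy sketch (all names here are my own invention, not a real API):
# the user states a rule, and the system decides how to apply it.

songs = [
    {"title": "Walking on Sunshine", "mood": "happy"},
    {"title": "Hurt",                "mood": "sad"},
    {"title": "Tears in Heaven",     "mood": "sad"},
    {"title": "Happy",               "mood": "happy"},
]

# Declarative: the rule says WHAT should come first, not HOW to sort.
rules = [lambda song: song["mood"] == "sad"]   # "sad songs first"

def arrange(items, rules):
    # Items matching earlier rules sort ahead of the rest; the
    # mechanics (a stable sort keyed on rule matches) are the
    # system's business, not the user's.
    def priority(item):
        for i, rule in enumerate(rules):
            if rule(item):
                return i
        return len(rules)
    return sorted(items, key=priority)

for song in arrange(songs, rules):
    print(song["title"])   # prints the two sad songs first
```

Adding another preference would just mean appending another rule to the list; the user never touches the sorting machinery itself.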
Generally this means that there will be some kind of software running inside the robot that does “constraint solving”: given constraining rules to work with, the robot comes up with solutions that fit those constraints. There is already an entire subfield of computer science concerned with declarative, constraint-based programming, but the available languages, such as CLIPS, Soar and Prolog, are considered tools for A.I. researchers, and generally require an expert user.
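Here is a minimal sketch of the constraint-solving idea in plain Python rather than CLIPS, Soar or Prolog; the scenario and every name in it are hypothetical. The user’s only job is to state constraints, and a generic backtracking search, hidden inside the robot, finds an assignment that satisfies all of them.

```python
def solve(variables, domains, constraints, assignment=None):
    """Return one assignment of values to variables that satisfies
    every constraint, or None. Each constraint is a predicate over a
    (possibly partial) assignment dict."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = dict(assignment, **{var: value})
        # Prune as soon as any constraint fails on the partial assignment.
        if all(c(candidate) for c in constraints):
            result = solve(variables, domains, constraints, candidate)
            if result is not None:
                return result
    return None

# Example: park three robots in three charging bays, where R2 refuses
# the bay by the door (bay 0) and R1 must end up left of R3.
variables = ["R1", "R2", "R3"]
domains = {v: [0, 1, 2] for v in variables}
constraints = [
    lambda a: len(set(a.values())) == len(a),        # all different bays
    lambda a: a.get("R2") != 0,                      # R2 avoids bay 0
    lambda a: "R1" not in a or "R3" not in a or a["R1"] < a["R3"],
]

print(solve(variables, domains, constraints))  # e.g. {'R1': 0, 'R2': 1, 'R3': 2}
```

Real constraint solvers add constraint propagation and smarter search ordering, but the division of labor is the same: rules go in, and solutions come out.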
While we’re on the subject of A.I., it is important to reiterate that computers are not people. As Ben Shneiderman is fond of pointing out, a computer is closer to a pencil than it is to a person. Our robots might one day develop the sort of “reasoning” process that we associate with humans (many brilliant people have been valiantly trying to climb that mountain for decades now) but there is no guarantee this goal will ever be achieved, and certainly no assurance it will happen in our lifetimes.
Even the software “robots” that the folks at Google incorporate into their Wave project (software agents that lurk behind the scenes to interactively modify and update your screen widgets) are very literal-minded, and are generally programmed the old-fashioned way, through a procedural AppBuilder language that is essentially a gloss on such “expert” languages as Java.
In order to create robots that are accessible enough that most people can explain things to them, I think we will need to go back to some of the ideas I discussed two years ago when talking about Theory of Mind, and the great work in this area by Lisa Zunshine and others (about which there was a lovely article the other day in The New York Times). In other words, we will need to develop a Theory of Mind about what robots can and can’t do.
So this is going to be a two-way street. Yes, we need to make future robots more accessible to the 95% of the population that is now left out, by adding natural language interfaces that allow people to talk to their robots declaratively (i.e., “I like my songs arranged with the sad songs first”). But we will also need to gradually teach people a Theory of Mind about robots, so that we humans properly understand the peculiar nature of this strange new species that we will be learning how to talk to in the years to come.