I’ve been thinking about CC’s comments on my recent post about computers and artificial intelligence. They raise an interesting question in ethics:
Suppose we had every reason to believe, due to some unforeseen breakthrough in artificial intelligence research, that computers would, in our own lifetime, first reach and then far surpass our own intelligence (and here I mean “intelligence” in the human sense).
Would we have an ethical obligation to teach those emerging entities, to protect them, guide them, help them as they travel along their path? After all, in a very real sense we would be their parents.
Or would we have a greater obligation to ourselves, to our own kind? If we knew that in a few short decades their intelligence would be to ours as our intelligence is to a rat’s, would we try to block their development, or even their very existence?
One reason this question is intriguing is that humans have come to place a high value on nature’s experiment in human intelligence. Naturally enough, we see our own intellectual capacity as a kind of pinnacle of evolution. So in one sense we might be inclined to let that experiment run as far as it can.
On the other hand, we might just decide “To hell with this — I’m not going to let my species get replaced by some machine.” That too would be a very human response. 🙂