The other day I was invited to a party where almost everyone was a philosopher. I don’t mean amateur philosopher, armchair philosopher, or reflective soul with a philosophical bent. I mean they were professional philosophers – people who do this for a living. Many of them were connected with the NYU Philosophy Department (one of the top philosophy departments in the world, as it turns out) and others were colleagues and collaborators of these folks from other institutions of higher learning around the world.
I found out, in the course of conversation, that a rather high percentage of these people focus on questions surrounding “theory of mind” – questions on the order of: what is a human mind, what is consciousness, what is thought, what is self?
The friend/colleague who invited me to the party is something else – a psychologist. He therefore looks at theory of mind questions from a different angle, one more related to the sorts of questions we ask in computer science: how the mind operates from a somewhat cybernetic perspective, as an extremely advanced sort of computational device. If I understand correctly, an essential difference between the philosophical and psychological views of humanity comes down to the question “can we build one?”
I don’t mean can we build one now. Enough is already known about how the human brain functions to make it clear that in 2009 there is simply not enough computational power in all the world’s silicon chips to replicate the functioning of even a single brain. But of course that might not always be true. So psychologists are tempted to look at a time in the future – perhaps 50 years from now, perhaps 500 years from now – when something on the order of the brain’s level of functional complexity can be replicated in silico.
Philosophers, unlike psychologists, are not exactly interested in the mechanism itself, but rather in what that would mean. Would we be replicating the essential nature of the brain, the aspect that we think of as humanity, and if so, would that mean we can codify humanity the way we currently codify computer software?
I also found that both psychologists and philosophers ponder the future implications of this question in a very specific way: If human brain functioning – “thought”, if you will – could one day be replicated in computer circuitry, then could those future electronic humans make their own cyber-progeny, second generation artificial thought machines? And would their progeny then go on to make third, fourth, fifth generation machines, ad infinitum?
And if so, at what point would the descendants no longer be recognizably human? At what point would such creatures cease to feel any need to keep us silly humans around, even as quaint biological specimens of an outdated ancestral brain?
Here’s the kicker: On the above subject, it seems that there are “optimists” and “pessimists”. The optimists believe that it is indeed possible to create such generative species of artificially intelligent creatures. The pessimists believe that it is highly unlikely such a thing will happen in the foreseeable future.
The friend who invited me to the party is an optimist, and so he is quite morose on the subject. He believes it may be only a matter of time before our human species is replaced by an uncaring cyber-progeny that has evolved beyond our limited powers of recognition, a meta-species that will ultimately cast us aside altogether, once we no longer serve its unfathomable purposes.
I, on the other hand, find that I am a pessimist on the subject. And so I remain quite happy and carefree, fascinated as I may be by the gloomy and dire predictions of my sad friends, the optimists.