The other day I was invited to a party where almost everyone was a philosopher. I don’t mean amateur philosopher, armchair philosopher, or reflective soul with a philosophical bent. I mean they were professional philosophers – people who do this for a living. Many of them were connected with the NYU Philosophy Department (one of the top philosophy departments in the world, as it turns out) and others were colleagues and collaborators of these folks from other institutions of higher learning around the world.
I found out, in the course of conversation, that a rather high percentage of these people focus on "theory of mind" – the branch of inquiry that asks questions on the order of: what is a human mind, what is consciousness, what is thought, what is self?
The friend/colleague who invited me to the party is something else – a psychologist. He therefore looks at theory of mind questions from a different angle, one more related to the sorts of questions we ask in computer science: how the mind operates from a somewhat cybernetic perspective, as an extremely advanced sort of computational device. If I understand correctly, an essential difference between the philosophical and psychological views of humanity comes down to the question of "can we build one?"
I don’t mean can we build one now. Enough is already known about how the human brain functions to make it clear that in 2009 there is simply not enough computational power in all the world’s silicon chips to replicate the functioning of even a single brain. But of course that might not always be true. So psychologists are tempted to look at a time in the future – perhaps 50 years from now, perhaps 500 years from now – when something on the order of the brain’s level of functional complexity can be replicated in silico.
Philosophers, unlike psychologists, are not so much interested in the mechanism itself as in what such a replication would mean. Would we be replicating the essential nature of the brain, the aspect that we think of as humanity, and if so, would that mean we can codify humanity the way we currently codify computer software?
I also found that both psychologists and philosophers ponder the future implications of this question in a very specific way: If human brain functioning – "thought", if you will – could one day be replicated in computer circuitry, then could those future electronic humans make their own cyber-progeny, second-generation artificial thought machines? And would their progeny then go on to make third, fourth, fifth generation machines, ad infinitum?
And if so, at what point would the descendants no longer be recognizably human? At what point would such creatures cease to feel any need to keep us silly humans around, even as quaint biological specimens of an outdated ancestral brain?
Here’s the kicker: On the above subject, it seems that there are “optimists” and “pessimists”. The optimists believe that it is indeed possible to create such generative species of artificially intelligent creatures. The pessimists believe that it is highly unlikely such a thing will happen in the foreseeable future.
The friend who invited me to the party is an optimist, and so he is quite morose on the subject. He believes it may be only a matter of time before our human species is replaced by an uncaring cyber-progeny that has evolved beyond our limited powers of recognition, a meta-species that will ultimately cast us aside altogether, once we no longer serve its unfathomable purposes.
I, on the other hand, find that I am a pessimist on the subject. And so I remain quite happy and carefree, fascinated as I may be by the gloomy and dire predictions of my sad friends, the optimists.
Your joke made me smile, but I’m also curious why you believe as you do. Is it because, like John Searle, you believe that brains are the right sort of stuff to cause subjective, conscious thought (and silicon isn’t) or because you feel, like Douglas Hofstadter, that others are vastly underestimating the subtlety and complexity of the brain and overestimating the pace of progress on AI? Or perhaps some other reason?
Definitely more Hofstadter than Searle. Even if we could duplicate a brain's functionality precisely in silicon by mimicking its individual synapses, that still would not give us a model of what is going on. We are, as of now, very far from being able to reproduce the sorts of operations that the brain performs. We get spectacular results in such areas as statistical machine learning, but those sorts of data-driven "black box" approaches can never lead to the kind of context-free model building at which the brain excels. Attempts to build semantic inference engines that can deal with arbitrary new contexts have not succeeded so far, in spite of decades of work. When somebody shows real success in that direction, I'll revise my views and become a sad optimist.
OK – just a personal anecdote. Last night I had an extremely vivid dream of my father (deceased) in our family home (since sold). He explained he planned to move out of the home in December. Every detail of the home was vivid, and my father was precisely himself in later life (though rather healthier). I wondered – what is my "brain" that it could create such a marvelous, utterly detailed sim, with an application (my father) running so flawlessly and interactively in real time? The way the "dream" performed a responsive application – my father – thrilled me. I would love to externalize this somehow… speaking as an artist from embodied experience, I am a strangely happy optimist about making things that are brain-like…