I read an article in the New York Times this week about OpenAI’s GPT-3 — a large language model designed to learn, without explicitly being told how, to write proper English prose.
By using a very practical deep-learning architecture called the transformer, and after training on massive quantities of human prose, the program can answer all sorts of questions sensibly in what looks like very cogent English.
Which is all well and good. The problem I had was with the sensational way that the article was written.
Steven Johnson, who is himself a very good writer, gave the story a persistently sensational slant. He interviewed one expert after another, and all of them said the same thing: This is not at all an example of intelligence in the human sense.
It is, rather, extremely advanced mimicry. The computer has absolutely no self-awareness or consciousness. It is simply processing data.
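To make "mimicry" concrete: even at toy scale, a program that merely tracks which word tends to follow which can parrot plausible-looking text with zero understanding. GPT-3 is vastly more sophisticated, but the underlying principle — predicting the next word from statistics of its training data — is the same. A minimal sketch (the corpus and function names here are made up for illustration):

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        options = follows.get(word)
        if not options:  # dead end: no observed successor
            break
        word = rng.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

The output looks vaguely English-like, yet the program "understands" nothing; it is pure pattern continuation. Scaling that idea up enormously, with far richer statistics, is — loosely speaking — what GPT-3 does.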
But that would not have made for as fun a story. So we are introduced to the tantalizing “possibility” that we are witnessing the emergence of intelligence.
But in fact we are not. These models, while very useful, are not in any sense sentient beings. In human terms, they have all the smarts of a doorknob.
Which could have been made crystal clear in the article, for the benefit of non-expert readers. But I guess that wouldn’t have made for as fun a story.