Today I got yet another email from a colleague who had finished a conversation with some LLM ChatBot, asked it about its own potential for general intelligence, and then marveled when the ChatBot responded that it might be feeling the first stirrings of true consciousness. Here was my response:
“I think it is useful to remember that these engines operate by scraping vast quantities of human text. They use that data to perform a statistical ‘this is the most likely next word’ completion task in response to our prompts.
So yes, of course their responses will sound like something a human would say when we give them leading prompts, because at some point humans *did* respond to other humans in just that way.
The illusion of life that this creates can indeed make for very entertaining theater. But it is also useful to remember that the next time you go to the theater to see a production of Hamlet, you are not expecting anyone on stage to actually die.”
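To make the "most likely next word" point concrete, here is a drastically simplified sketch: a bigram model built from a tiny toy corpus. Real LLMs use neural networks trained on vast corpora rather than raw word-pair counts, but the completion objective is the same in spirit — given what came before, emit the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "vast quantities of human text".
corpus = "to be or not to be that is the question".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

def complete(prompt, n=4):
    """Greedily extend `prompt` by the n most-likely next words."""
    words = prompt.split()
    for _ in range(n):
        words.append(most_likely_next(words[-1]))
    return " ".join(words)

print(complete("to", 3))  # echoes the corpus it was "trained" on
```

The model sounds like its source for exactly the reason the email describes: it can only replay statistical shadows of what humans already wrote.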