AI systems based on Large Language Models (LLMs), such as ChatGPT, are far more different from humans than they appear. In a sense, LLMs are aliens that live with us here on Earth. One reason humans and LLMs can seem so similar is that a great deal of money is being poured into training LLMs on truly massive amounts of data.
The human brain can convert a relatively modest amount of learned knowledge into a sophisticated and truly flexible model of the world. The result is not mere recombinational imitation but an actual model, one capable of recognizing and dealing with novel situations that can be very different from anything we have ever encountered.
In contrast, an LLM requires a truly enormous amount of training data before it can effectively mimic human responses. The resulting network can indeed present a compelling illusion of intelligence.
But even then, an LLM generally lacks common sense. It will often be stumped by simple real-world challenges that would be easy even for a small child.
More tomorrow.