A lot of people are confused about the nature of today’s machine learning systems. There seems to be a widespread belief that these giant pattern-matching and mimicry systems are actually intelligent, in the way that people or dogs or sea slugs are intelligent.
But the more you learn about how such systems work, the more you understand that they are not at all intelligent, in the sense in which the word “intelligent” describes a living being. They are very impressive, but then a lot of things are impressive.
For example, an automobile can outrun a human in a race, but we don’t put automobiles and humans in the same category. We understand that they are operating by completely different means, and so we know to keep them separate in our heads.
But most people are not familiar with how machine learning systems actually work. So an ML system’s apparently magical powers of mimicry can easily be misconstrued as a sign of emerging sentience.
To be fair, what is going on under the hood is very technical, so it’s not easy to understand how these systems really work. It would be unreasonable to ask everyone to take an advanced course in computer science.
But once you realize that there is no actual intelligence at work here — just a very impressive feat of pattern matching and generative imitation — you start to count your blessings. It would be much worse for us humans if these things could actually think, wouldn’t it?