If you ask ChatGPT these days to write a computer program, it often does a remarkably good job. Not only will it implement an algorithm for you nearly instantly, but it will also give you a good explanation of how that algorithm works and why it took the approach it did.
But given that all programs written by generative AI are pastiches — bits and pieces from a vast training set of existing human implementations cobbled together — I wonder whether there are specific limits to what genAI can do.
In particular, are there prompts that will always fail? Are there particular kinds of computer programs that a generative AI simply cannot write, either because they lie outside that training set or because they call for a form of reasoning that is uniquely human?
Yes: novel problems, where the LLM has no source to ‘steal and launder’ from. I recently needed an algorithm for bucketing geospatial data (I can’t remember the specifics), and GPT-4 was absolutely clueless, in a not-even-wrong way, like an extremely junior developer asked to do a task they don’t even understand.
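For concreteness, here is a minimal sketch of one common flavor of geo bucketing. The comment above doesn’t say which variant was actually needed, so the function name, the grid approach, and the cell size are all illustrative assumptions, not the commenter’s real task: snap latitude/longitude points onto a fixed-size grid and group them by cell.

```python
from collections import defaultdict
import math

def bucket_points(points, cell_deg=0.5):
    """Group (lat, lon) pairs into square grid cells of cell_deg degrees.

    A hypothetical illustration of "bucketing geo data"; real tasks often
    use geohashes or hierarchical indexes (e.g. S2, H3) instead.
    """
    buckets = defaultdict(list)
    for lat, lon in points:
        # Floor each coordinate to its grid cell; the (row, col) pair
        # identifies the bucket the point falls into.
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        buckets[key].append((lat, lon))
    return buckets

# Two nearby London points land in the same cell; Paris lands in another.
points = [(51.50, -0.12), (51.51, -0.10), (48.85, 2.35)]
for cell, members in bucket_points(points).items():
    print(cell, members)
```

The point of the anecdote is precisely that routine variations like this are well covered in training data, while whatever the actual requirement was evidently wasn’t.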
Out-of-distribution data.