Currently, since only a relatively small percentage of the populace can program computers, it is considered “normal” to not be able to program.
Let us suppose, for the sake of argument, that this will change over the course of the next, say, five to eight years. For example, if there were a successful shift in 9-12 education in this decade, then computer programming could become a skill possessed by all high school graduates.
This would not come about by “teaching them to program”. Rather, it would come about because programming comes to be seen as an integral part of how all subjects are taught in high school, from literature to math and science to social studies to music. In this scenario, it would simply be assumed that programming is part of the normal course of literate development, just as, in the U.S. today, we expect a steady progression in proficiency in reading and writing English.
We are not talking, of course, about the way programming is generally taught now, but rather about a much more integrated and user-friendly approach, one better aligned with students’ interests and ways of learning.
In this scenario, it would then become “normal” to be able to program, and “not normal” to not be able to program. Of course, this could create a generational rift, since many older people might still not program, although in the case of earlier technological shifts, such as online social networks and the smartphone, older people have often followed the lead of younger early adopters.
In any case, at some point as norms shift, a person who continues to avoid learning to program might come to be seen as suffering from a recognized psychological abnormality, perhaps called “codephobia”.