Building on a comment by Douglas, one approach to automating generalized fonts could be to create them by example via statistical machine learning.
Imagine if, back when I was walking by those chess stores in Greenwich Village that started this line of thinking, I had taken the pawn from every chess set and fed all those shapes into a computer program with the instruction “this is a pawn”. Now imagine I had done the same for all the bishops, etc.
The software would have two kinds of labeling information to work from: (1) these are all pawns, and (2) this group of pieces (knight, bishop, etc.) all come from the same set.
It would be interesting to see whether statistical machine learning could make use of that labeling. Not just to be able to assert, given a new piece, “this is a bishop”, or “these two pieces belong to the same set”, but to make the more interesting assertion: “Here is a chess set that nobody has ever seen before, which is a weighted mix of these other example sets”.
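To make those two kinds of labels and the “weighted mix” idea concrete, here is a minimal sketch in Python. It assumes each piece has already been reduced to a fixed-length shape descriptor vector; the descriptors, the mean-based “style” vector, and the simple linear blend are all illustrative stand-ins, not the actual statistical machinery a real system would learn.

```python
# Toy sketch: pieces labeled by type ("pawn") and by set ("staunton"),
# then a hypothetical new set built as a weighted mix of existing sets.
import numpy as np

# Each example carries both labels. In practice the "shape" vector might
# encode a lathed profile curve, a mesh embedding, etc. (assumed here).
examples = [
    {"shape": np.random.rand(16), "piece": "pawn",   "set": "staunton"},
    {"shape": np.random.rand(16), "piece": "pawn",   "set": "regency"},
    {"shape": np.random.rand(16), "piece": "bishop", "set": "staunton"},
    {"shape": np.random.rand(16), "piece": "bishop", "set": "regency"},
]

def set_style(examples, set_name):
    """Crude stand-in for a learned style vector: the mean descriptor
    of every piece belonging to one set."""
    shapes = [e["shape"] for e in examples if e["set"] == set_name]
    return np.mean(shapes, axis=0)

def blend_sets(examples, weights):
    """A descriptor for a set nobody has seen, as a normalized weighted
    mix of existing sets' style vectors. weights: {set_name: weight}."""
    total = sum(weights.values())
    return sum((w / total) * set_style(examples, s)
               for s, w in weights.items())

# For instance: a new set that is 70% Staunton, 30% Regency.
new_style = blend_sets(examples, {"staunton": 0.7, "regency": 0.3})
```

A real system would of course learn something far richer than a mean vector, but even this toy version shows how the two labelings play different roles: the piece label groups shapes that share a function, while the set label groups shapes that share a style.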
And it would be interesting to compare this automated approach to the old-fashioned one of manually analyzing the structural parts of a chess piece (as I have been doing) in order to build new variations.
Each approach would have its advantages. The statistical machine learning approach could get a lot further, faster, but it would be unlikely ever to be able to tell us how and why it made its choices. For actual insight, I believe the manual analysis/synthesis approach will win hands down.
Here’s an example of someone taking the automatic approach:
http://graphics.stanford.edu/~kalo/papers/ShapeSynthesis/index.html
It’s a great paper — I am looking forward to the presentation at SIGGRAPH this summer — and it also illustrates the odd tradeoff with statistical machine learning. It gives you the ability to generate lots of things, which is incredibly useful, but you are left without an understandable model of what is going on.