Yesterday Stephan’s comment made a valid point about the sense in which mathematical notation is one dimensional. I think the disagreement, if there is one, is about which semantic level to focus on.

There are statements that are certainly true, yet not at all useful in particular contexts. For example, it is certainly true that humans are made of atoms, but that fact doesn’t provide very much insight about why *Romeo and Juliet* was a tragedy.

Similarly, I think the statement “all mathematical statements can be expressed as a one dimensional string,” while certainly true, is not useful in most contexts. Clearly it *is* useful at the meta-level, where Gödel’s incompleteness theorem resides.

But when you are using mathematical notation to communicate some concept or relationship to a fellow human being, you are rarely operating on that meta-level. In such cases, which are by far the majority, you want to optimize for readability and clarity of thought, and your math notation should ideally express how multiple dimensions of ideas interact with each other.

After all, it is certainly true that if I send you a digital photograph of my cat, the transmitted data can be represented by a one dimensional array of pixel values. And as Stephan points out, such a representation is perfectly adequate for performing a Fourier Transform, digital convolution, or various other mathematical operations.

Yet if we insist on keeping things at that level of interpretation, you may never realize that you are looking at a picture of my cat.
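The point can be made concrete with a toy sketch: the grid and pixel values below are made up for illustration. Flattening a 2D “image” into a 1D list loses no information, yet the spatial relationships become invisible without out-of-band knowledge of the shape.

```python
# A minimal sketch: a tiny 2x3 "image" stored as a 2D grid,
# flattened row-major into a 1D list and recovered losslessly.
grid = [
    [10, 20, 30],
    [40, 50, 60],
]
rows, cols = len(grid), len(grid[0])

# Flatten: every pixel survives, but vertical neighbours
# (e.g. 10 and 40) are now `cols` positions apart.
flat = [v for row in grid for v in row]

# Unflatten: the 2D structure is fully recoverable...
restored = [flat[r * cols:(r + 1) * cols] for r in range(rows)]
assert restored == grid
# ...but only if you know `cols`; the 1D list alone doesn't say
# whether it came from a 2x3, 3x2, or 1x6 grid.
```

The round trip succeeds, but only because the shape travels alongside the data; the 1D array by itself is exactly the “adequate yet uninformative” representation described above.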

I can’t resist plugging a project: I recently played around with making a tree-structured expression editor and wound up with this: https://github.com/hcs64/tiramisu

I don’t know if it is any more comprehensible; it’s just one step away from the 1D cursor-oriented text editor, running on hardware (touch screens) that enables a more direct interaction with the representation.
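To illustrate what “tree structured” means here (a hand-rolled sketch, not the tiramisu implementation), an expression tree makes the operator/operand relationships explicit, while printing it collapses everything back into one dimension:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Num:
    value: int

@dataclass
class Op:
    symbol: str
    left: "Expr"
    right: "Expr"

Expr = Union[Num, Op]

def to_string(e: Expr) -> str:
    """Serialize the tree into a 1D string, parenthesizing to keep it unambiguous."""
    if isinstance(e, Num):
        return str(e.value)
    return f"({to_string(e.left)} {e.symbol} {to_string(e.right)})"

# (1 + 2) * 3 as a tree: the grouping lives in the structure itself.
expr = Op("*", Op("+", Num(1), Num(2)), Num(3))
print(to_string(expr))  # → ((1 + 2) * 3)
```

An editor operating on `expr` directly can select and move whole subtrees; a cursor in the printed string has to recover that structure from parentheses.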