Demion’s comment on yesterday’s post was very thought-provoking. What if, with our future augmented reality glasses or contact lenses, we could automatically correct the world around us? Bad jokes on signs would be “promoted” to good ones, visual and architectural design in questionable taste would be replaced by something more satisfying, and so on.
I suspect we won’t actually do this, and my reasoning is by analogy with what we have chosen to do in the past. As human beings, we have exactly one unquestionable super-power: our shared ability to learn natural languages built on generative grammar. This super-power allows us to communicate with each other in incredibly powerful and subtle ways.
Consequently, the one thing we really care about is accurately interpreting the real intentions behind the words and actions of other humans. We don’t always succeed at this task (far from it). Yet it is, nonetheless, the thing we care most about getting right.
After all, if we fail to understand and properly interpret the intentions of others, we find ourselves effectively cut off from other humans, and therefore from our own greatest super-power. Which is why, I posit, we will always resist technological “assistance” that could artificially reduce our ability to accurately interpret the human world around us, as flawed as that world may be.
To give an analogous example, using technology that is already familiar: When you author a document using modern word processing software, you are given the option to turn on auto-correct. If you exercise this option, your errors in spelling are automatically fixed. Also, word processing programs generally highlight questionable or awkward grammatical constructions. You then have the option to reword what you have just written.
But what we never see is document software that shows you the writing of other people with automatic corrections applied to their errors in spelling and grammar. People don’t want to see the errors of others “fixed”. They would rather see what other people actually wrote, with all the idiosyncrasies in place.
Fundamentally, we trust other people’s mistakes more than we trust software that might shield us from those mistakes, because what we really want to know is what was going on in the brain of that other human being.
Since we will continue to be human beings in the decades to come, with brains wired more or less the same way they have been for tens of thousands of years, I don’t see that this will change, no matter how far augmented reality technology may develop.