The other day I touched on the difference between Google Glass and the (highly conjectural) concept of “Matrix”-style direct-to-brain knowledge downloading. The ensuing discussion has been very interesting.
One thing that has come out of that discussion is the question of how best to use Glass-style augmentation in the context of an ongoing conversation. It might very well turn out that as augmentation technology matures, the user experience will evolve into a kind of continuous background process, never quite requiring our attention, but rather feeding unobtrusively into all our conversations.
In this kind of scenario, the true power-up technology will not be the display itself, but the coordinated pipeline of automatic speech recognition, contextual look-up, and semantic inference that will listen to our conversations and provide immediate information in response.
In essence, our augmentation devices will “whisper in our ear”, feeding us new information that is informed by what was just discussed, and that is optimized for suggesting new and fruitful conversational directions. If such technology is working properly, discussants need never focus on the existence of this back-channel.
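The loop sketched above — listen, look up, suggest — can be illustrated in miniature. This is purely a toy sketch of the idea, not an implementation: the function names, the tiny keyword "knowledge base", and the string-matching stand-ins for speech recognition and semantic inference are all invented for illustration.

```python
# A toy sketch of the "whisper in the ear" pipeline: listen, look up, suggest.
# Everything here is a hypothetical stand-in for real ASR and semantic search.

KNOWLEDGE_BASE = {
    "glass": "Google Glass overlays information on a head-mounted display.",
    "matrix": "'The Matrix' popularized direct-to-brain knowledge downloading.",
}

def transcribe(audio_chunk: str) -> str:
    """Stand-in for automatic speech recognition (input is already text here)."""
    return audio_chunk.lower()

def contextual_lookup(transcript: str) -> list[str]:
    """Stand-in for contextual look-up and semantic inference: naive keyword match."""
    return [fact for key, fact in KNOWLEDGE_BASE.items() if key in transcript]

def whisper(audio_chunk: str) -> list[str]:
    """The full background pipeline: listen, look up, suggest."""
    return contextual_lookup(transcribe(audio_chunk))

suggestions = whisper("Have you tried Google Glass yet?")
```

A real system would replace each stage with far heavier machinery, but the shape of the pipeline — a continuous background process that never demands attention — would be the same.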
Such a real-time knowledge supplement stream would be the conversational equivalent of contact lenses, a hearing aid, or air conditioning. For if such augmentation does its job properly, it will become as clear as glass, and we will forget it is there. We will simply become better and more informed conversationalists, never noticing that any technology is involved.