It’s 2032. You and I are walking to a restaurant together in Manhattan. Mobile Google Maps is on our eyeglasses, which means we see it out in the world, on the sides of buildings, on street signs, wherever it is convenient for both of us to look.
Because we don’t need to look at our phones to see the route, we can stay focused on our conversation with one another, without worrying that we’ll take a wrong turn. This is fundamentally different from the experience of mobile Google Maps today, which requires you to pay at least some attention to your phone.
When we get to the restaurant, there is no menu, no QR code, no other artifact from the past. We both see the array of food choices laid out for us on the table, and we use natural speech and hand gestures to customize our choices.
By the time we finish ordering, the food is already on the table, just the way we like it. We just can’t eat it yet.
A few minutes later, our food has arrived. It looks just the same as it did before, only now we can eat it. Which is nice. 🙂