After the XR/AI singularity, when everyone is wearing those future glasses, your personal AI assistant will know a lot about all of the physical objects that you can see. I wonder how that knowledge will be used in practice.
For example, will you choose to add annotations to your morning coffee? When you look at your coffee cup, a neat little text might pop up just above it, giving you all sorts of useful information.
How fresh were the beans? What is the actual blend, and how caffeinated is it? How hot is your coffee at the moment? And how much more is in the pot?
All of the objects in your life could be similarly annotated. Where is that screwdriver, the one that fits the particular screw you are looking at? How much would it cost to add another shelf to that wall-mounted bookshelf, and how soon could it be installed?
Will your relationship to your physical objects change once you are able to query them? Or will you just get tired of the whole thing and take off those darned glasses?