When you and I are having a face-to-face conversation, the spatial dynamics are fairly simple. We are facing each other directly, and at any point in the conversation it’s clear where the focus of attention is.
If we were to use advanced extended reality technology to visually place virtual objects into the scene, we could pretty much always place them halfway between you and me, and everything would make perfect sense.
But when four people are sitting around a square table, things are a little more complex. The two people at one corner might be engaged in their own conversation, or three people might be sharing a conversation while the fourth is checking their notes.
So where should those virtual objects go? Does the system need to actively interpret what is going on within our conversation — perhaps who is paying attention to whom — and then make dynamic decisions based on that?
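As a thought experiment, here is a minimal sketch of one such dynamic decision: place a shared virtual object at the attention-weighted centroid of whoever is currently engaged. Everything in it (the names, the table positions, the attention scores) is hypothetical, standing in for whatever gaze or attention estimates a real system might actually produce.

```python
import numpy as np

# Hypothetical seating around a square table (positions in meters).
positions = {
    "Alan":  np.array([0.0, 0.0]),
    "Bill":  np.array([1.0, 0.0]),
    "Carol": np.array([1.0, 1.0]),
    "Dana":  np.array([0.0, 1.0]),
}

# Hypothetical attention estimates: attention[(i, j)] is how strongly
# person i is attending to person j, perhaps derived from gaze tracking.
attention = {
    ("Alan", "Bill"): 0.9,   # Alan and Bill are in their own exchange
    ("Bill", "Alan"): 0.8,
    ("Carol", "Alan"): 0.6,  # Carol is listening in
    # Dana is checking her notes, so she contributes no attention.
}

def placement_point(positions, attention):
    """Place a shared virtual object at the attention-weighted
    centroid of everyone engaged in the conversation."""
    weights = {name: 0.0 for name in positions}
    for (src, dst), w in attention.items():
        weights[src] += w  # paying attention pulls the object toward you
        weights[dst] += w  # so does being attended to
    total = sum(weights.values())
    if total == 0.0:
        # Nobody is engaged: fall back to the table's geometric center.
        return sum(positions.values()) / len(positions)
    return sum(weights[n] * positions[n] for n in positions) / total

print(placement_point(positions, attention))
```

With this attention pattern the object settles near Alan and Bill’s side of the table, drifts toward Carol as her engagement grows, and falls back to the center of the table when nobody is engaged.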
And what if you are talking to me, but I am ignoring you because I am listening to somebody else around the table? What are the best visuals for that situation?
I suspect that much of this will become clear sometime in the future, when multi-participant extended reality has been seamlessly integrated into our everyday conversations.
So many possibilities. I could present myself as “very attentive” to Alan, while actually listening intently to Bill. And for all I’d know, Alan could be representing himself to me with an AI storyteller, while actually talking to Carol.