VR bike

I bought a VirZOOM VR exercise bicycle for the lab — just ordered it on Amazon and it arrived two days later. You ride it like any exercise bike, but it also functions as a VR game controller.

It works just fine with Vive or Oculus or pretty much any VR system, since to your computer’s software it just looks like a game controller. Here it is, in its new place in our Future Reality Lab.

[image: vr_bike]

The bike was easy to set up, right out of the box. Everybody in the lab wanted a turn, and all agreed that the experience was great.

Interestingly, everyone who tried it worked up a sweat without even noticing. It’s so much fun that you don’t quite realize you are getting aerobic exercise.

Such a device raises interesting questions. It could be seen as part of a dystopian trend: one more step away from a true experience of reality, à la Ready Player One.

After all, why are you riding your bike around in a virtual world when you could be riding a bike in the real world? Isn’t this just one more step in our technologically enabled retreat from reality?

Or this device could be seen as something positive: a more healthy and body-friendly way to experience immersive entertainment. I favor the latter interpretation, for the following reason.

If movies or theater or reading books were new and unfamiliar, someone could easily argue that they are insidiously removing us from the real world. When you watch a movie or a play, or when you read a book, your true reality is being replaced by a fake one.

Yet people generally don’t make that argument these days. The reason is that we now understand the cultural utility of books and theater and cinema. Through these media we do not really abandon our reality, but rather broaden it through temporary immersion in other worlds — worlds that at their best are created by great storytellers.

Imagine if you could have all those cultural benefits while getting in your 30 minutes a day of aerobic workout. To me that sounds like a pretty sweet deal.

The Edge, part 8

Let’s think about a wearable-based cyber experience in which all three levels work together: the Far Edge, the Near Edge and the Cloud. But first a quick review.

Your wearable takes care of all of the sensing and display of what you see and hear. The PC that is just a short 5G wireless hop away takes care of making sense of that information in the short term.

Meanwhile, the Cloud is building and maintaining the long term narrative. This narrative can take many forms, and is dependent on what you are doing and why.

So here is a scenario (just one of many): Suppose you are interacting with a virtual character on your desk. Your wearable allows you to see and hear that character. Accurate sensing and low latency display lets you experience the character as though it were really there.

Meanwhile your PC is wirelessly updating the low level intelligence of the character multiple times per second. Is the character acting angry, or happy, or confused? Does it approach or avoid objects on your desk, or show interest in other virtual characters?

Behind all this, the Cloud is maintaining a consistent personality and story. Its massive processing resources manage the more semantically challenging questions of why the character is behaving as it does, and assessing what the character might choose to do an hour from now, or next week — or in response to changing world events.
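The division of labor between those three tiers can be sketched in code. This is just an illustrative toy, not anything specified in this post: every class and method name here is invented, and the "intelligence" at each level is stubbed out with trivial rules.

```python
# Hypothetical sketch of the three-tier split described above.
# All names are invented for illustration; the logic at each tier
# is a trivial stand-in for the real sensing, ML, and narrative systems.

class FarEdgeWearable:
    """Senses and displays -- runs at frame rate on the headset."""
    def render(self, pose):
        # Low latency matters most here: just draw the latest pose.
        return f"character rendered at {pose}"

class NearEdgePC:
    """Short-term intelligence -- updated several times per second over 5G."""
    def react(self, mood, objects_on_desk):
        # Moment-to-moment behavior: approach or avoid, pick a focus.
        action = "approach" if mood == "happy" else "avoid"
        focus = objects_on_desk[0] if objects_on_desk else None
        return {"action": action, "focus": focus}

class CloudNarrative:
    """Long-term personality and story -- consulted only occasionally."""
    def plan_next_scene(self, interaction_history):
        # Semantically heavy planning, run far less often than the tiers above.
        return "introduce a surprise" if len(interaction_history) > 3 else "build rapport"
```

The point of the sketch is the timescales: the wearable runs every frame, the PC every few hundred milliseconds, and the Cloud only when the story needs to move forward.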

From your perspective, there is only the lifelike character that feels real to you. You develop an emotional attachment to this character both because it is part of your sensory world and because it has the power to surprise you.

Intellectually, you may know that the virtual character on your desk is the product of multiple complementary cybernetic systems in communication with each other. You may even understand much of the architecture that makes this possible.

But none of that will matter if the end result is an emotionally satisfying experience. After all, if you are watching a movie and your mind drifts to wondering what lens they used to film each shot, you’re probably not having the audience experience that the filmmakers were hoping for. 🙂

The Edge, part 7

Think about the way your finger acts when you accidentally touch a burning hot skillet: It pulls away immediately, even before your brain has time to register a sensation of pain.

The part of your nervous system that makes this happen doesn’t know why it is pulling away, or even what a skillet is. Its only concern is immediate survival, and protecting your body from harm.

After you’ve had a moment to get your wits back, you most likely run your finger under cold water, fetch some ointment, and put on a protective bandage. Sometime later, reflecting on this incident, you might purchase a nice pair of oven mitts.

Notice that there are three types of thinking involved here: (1) How you reflexively respond in the immediate moment, (2) What you do with conscious awareness immediately after that, and (3) The plans you make for the long term.

Yesterday we talked about the computational equivalent of the first two modes. The Cloud represents the third.

It’s really the combination of all three, working together, that will allow your future wearable to function as an experiential supercomputer.

More tomorrow.

The Edge, part 6

The Cloud is a vast supercomputer that you only get to access in short bursts. That’s because you are sharing this supercomputer with everybody else on the planet.

But because the people who run the Cloud are trying to maximize their profit, they are interested in providing the greatest service for the lowest cost to themselves. This leads them to put a lot of effort into load balancing, and making sure that no single request can tie up too many of their resources.

From your perspective as a user of the Cloud, the result is that if you pose a relatively small and contained problem, the Cloud can afford to throw a large amount of computational power into solving your problem, but only for a very short amount of time.

The way this will work with the future of edge computing is that your Near Edge computer will act as a go-between, negotiating between the enormous potential power of the Cloud and the instantaneous needs of your Far Edge wearable.

This is all going to translate into a gradation in the level of semantic functioning across the system — low level semantics at the Far Edge, all the way up to high level semantics in the Cloud.

More on that tomorrow.

The Edge, part 5

Here is the image I have in my mind of what an early version of “two edge computing” might look like:

[image: two_edge_computing]

Let’s say you’re wearing some brand or other of SmartGlasses (not quite on sale yet, but coming soon). On the far side of the Edge, your wearable device isn’t going to have enough battery power for super-duper graphics.

So it will use the equivalent of a Snapdragon processor like the one that’s probably in your current SmartPhone. Such processors are specifically designed to work in the low power environment of tiny portable computers.

Somewhere nearby, within easy reach of your 5G wireless connection, will be the Near Edge, in the form of a honking big computer, such as a high end PC. This computer will have a powerful — and power-hungry — co-processor, perhaps an Nvidia processor, which can crunch machine learning computations far faster than anything your wearable could do.

If you hold your hand up in front of your face, your Far Edge wearable device will have enough computational power to realize it is looking at a 3D object. But all it will really be able to do with that information is find outlines and contours, which it will send as bursts of highly compressed data to your Near Edge computer.

That’s where your Near Edge computer’s co-processor will get to work: It will recognize that the object being seen is your hand, figure out the pose and the underlying skeleton, and send that data back to your wearable, also in the form of bursts of highly compressed data.
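That round trip can be sketched as a tiny pipeline. This is a toy, assuming invented function names and a crude depth threshold in place of real contour extraction, and a stub in place of the Near Edge’s machine learning model:

```python
# Hypothetical sketch of the Far Edge / Near Edge round trip described above.
# The "recognition" step is a stand-in for a real ML model on the PC.
import json
import zlib

def far_edge_extract_contours(depth_frame):
    """On the wearable: cheap work only -- find foreground points, compress, send."""
    contours = [p for p in depth_frame if p["depth"] < 1.0]  # crude foreground cut
    return zlib.compress(json.dumps(contours).encode())

def near_edge_recognize(packet):
    """On the PC: heavy ML (stubbed) recognizes the hand and fits a skeleton."""
    contours = json.loads(zlib.decompress(packet))
    skeleton = {"joints": len(contours), "pose": "open_palm"}  # stand-in result
    return zlib.compress(json.dumps(skeleton).encode())

def far_edge_apply(packet):
    """Back on the wearable: unpack the pose and use it to drive interaction."""
    return json.loads(zlib.decompress(packet))

# One frame's round trip: wearable -> PC -> wearable.
frame = [{"depth": 0.4}, {"depth": 0.6}, {"depth": 2.5}]
pose = far_edge_apply(near_edge_recognize(far_edge_extract_contours(frame)))
```

Note that both legs of the trip carry compressed data, and the expensive step lives entirely on the Near Edge machine.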

To you the process will appear seamless: Your hand is now a super-powered controller, able to interact with the augmented world around you in precise and intricate ways.

But that’s just half of the story — the Far Edge and the Near Edge working together. What about the Cloud itself? More on that tomorrow.

The Edge, part 4

The reason for having edge computing in the first place is an inherent asymmetry. In the case of classic edge computing it is an asymmetry in the balance between nearness to sensors and computational power.

For example, the computer attached to a surveillance camera has a high quality connection to a sensor (in this case, a video camera), but a relatively small amount of computational power. On the other hand, the central server network to which that computer is connected has an enormous computational capacity, but a relatively poor connection to the sensor.

And so the two subsystems split up the work of surveillance: The local computer can do initial movement analysis and image compression — tasks that both require relatively little compute power. Then it hands that selected and compressed result to the central network, which has the resources to perform more sophisticated tasks such as comparing a suspicious face against a huge database.
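The split described above can be sketched in a few lines. This is purely illustrative, with invented function names; frames are modeled as flat lists and the central "face match" is a stand-in lookup:

```python
# Hypothetical sketch of the classic edge split described above.
# Frames are toy lists of pixel values; the database match is a stub.

def local_camera_node(prev_frame, frame):
    """Cheap on-camera work: detect movement and send only the changed region."""
    changed = [i for i, (a, b) in enumerate(zip(prev_frame, frame)) if a != b]
    if not changed:
        return None  # nothing moved, nothing goes upstream
    return frame[min(changed):max(changed) + 1]  # cropped, "compressed" region

def central_server(region, face_database):
    """Expensive central work (stubbed): compare the region against a database."""
    return [name for name, signature in face_database.items() if signature in region]
```

The asymmetry is visible in the data flow: most frames never leave the camera at all, and the server only ever sees small, pre-selected regions.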

But in the next few years, the inexorable march of Moore’s Law will enable a refinement of this paradigm. As people start to wear computational devices in the form of eyewear in their daily lives, different opportunities and constraints will soon arise.

This new paradigm won’t replace existing edge computing. Rather, a second edge will emerge, one which will complement and enhance the one that already exists.

More tomorrow.

The Edge, part 3

Since this is a discussion about edge computing, I’m not focusing so much on what Nero Wolfe does to solve the crime, but rather what Archie Goodwin does to make that possible. In other words, as Moore’s Law continues to up the game, how do we best use our limited resources at the Edge to make the best use of that ever more powerful Cloud which is just a little too far away for instant access?

Even as Moore’s Law keeps changing things up, the laws of physics remain immutable, which means that some things never change. For example, if you plug your fancy PC into the wall, you have hundreds of watts of power to play with, and heat dissipation isn’t a major problem.

But anything you carry in your pocket or wear on your head is not going to be able to draw more than a few watts of power. And even if it could, heat dissipation would quickly make things very uncomfortable.

This means that the computational power of that fancy PC under your desk is always going to be about 10 years ahead of anything you can carry with you. And the computational power you can draw from the Cloud will easily be 10 years beyond that.

If we look at all this in terms of Moore’s Law, we’re asking different parts of our computational infrastructure, from the Edge to the Cloud, to work together across different eras of the computer age. It’s as though we’re asking H. G. Wells’ Time Traveller to collaborate with Neo from The Matrix.

The Edge, part 2

Thanks for the comments on yesterday’s post! I’m hoping my comment in response helped to clarify things, and I will just start from there.

I think of the “edge” part of edge computing as analogous to the character of Archie Goodwin in the Nero Wolfe stories, or the tail on a Stegosaurus. It’s a second brain which lets the system respond right away, at the very moment some response is needed. But it doesn’t replace the main brain — it just accommodates the fact that the main brain is further away, and so might not be able to respond immediately.

Yet what exactly constitutes the edge of a computing system is a moving target. After all, looming over our entire Age of Computers is the shadow of Gordon Moore.

His formulation of “Moore’s Law” in 1965 has proven to be eerily prescient. Many cybernetic innovations enter the world not because we’re getting smarter over time, but because computers grow about a thousand times more powerful every fifteen years or so.

Which means that local processing capability within the context of a larger connected network of powerful computers is a moving target. Over time, our definition of both “local processing capability” and “powerful computers” continues to evolve.

After all, as a great Jedi knight once observed about the hazards of trying to predict forthcoming events: “Difficult to see. Always in motion is the future.”

The Edge, part 1

One of the terms thrown around a lot these days in computing circles is “edge computing”. You experience edge computing every time you talk into your SmartPhone and Google converts what you’ve just said into text.

In that case, the audio of your voice streams to a Google server, where an extremely powerful computer uses complex algorithms to convert that audio into meaningful written sentences. The interesting part of this is that the level of processing done on that server is far greater than anything your phone could do on its own.

Essentially, the computer in your phone is acting as a gateway to a vastly more powerful computing network. Because your phone is on the “edge” of that powerful network, in short bursts you can get access to far more computational power than would be possible using just that little box in your pocket.
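The gateway pattern above can be sketched in miniature. This is a toy with invented names: the phone side does only capture and compression, and the cloud “recognizer” is a lookup table standing in for a real speech model:

```python
# Hypothetical sketch of the phone-as-gateway pattern described above.
# The "model" is a stub lookup, not real speech recognition.
import zlib

def phone_side(audio_samples):
    """On the edge: capture raw audio and compress it for the uplink."""
    return zlib.compress(bytes(audio_samples))

def server_side(compressed_audio):
    """In the cloud: decompress and run heavyweight recognition (stubbed)."""
    raw = zlib.decompress(compressed_audio)
    fake_model = {bytes([1, 2, 3]): "turn on the lights"}
    return fake_model.get(raw, "<unrecognized>")
```

All the phone contributes is sensing and a short burst of compressed data; the hard computation happens entirely on the far side of the network.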

As edge computing advances in the next few years, the experience of reality itself will be fundamentally altered for many millions of people. More tomorrow.