The Edge, part 7

March 21st, 2018

Think about the way your finger acts when you accidentally touch a burning hot skillet: It pulls away immediately, even before your brain has time to register a sensation of pain.

The part of your nervous system that makes this happen doesn’t know why it is pulling away, or even what a skillet is. Its only concern is immediate survival, and protecting your body from harm.

After you’ve had a moment to get your wits back, you most likely run your finger under cold water, fetch some ointment, and put on a protective bandage. Sometime later, reflecting on this incident, you might purchase a nice pair of oven mitts.

Notice that there are three types of thinking involved here: (1) How you reflexively respond in the immediate moment, (2) What you do with conscious awareness immediately after that, and (3) The plans you make for the long term.

Yesterday we talked about the computational equivalent of the first two modes. The Cloud represents the third.

It’s really the combination of all three, working together, that will allow your future wearable to function as an experiential super-computer.

More tomorrow.

The Edge, part 6

March 20th, 2018

The Cloud is a vast supercomputer that you only get to access in short bursts. That’s because you are sharing this supercomputer with everybody else on the planet.

And because the people who run the Cloud are trying to maximize their profit, they are interested in providing the greatest service at the lowest cost to themselves. This leads them to put a lot of effort into load balancing, and making sure that no single request can tie up too many of their resources.

From your perspective as a user of the Cloud, the result is that if you pose a relatively small and contained problem, the Cloud can afford to throw a large amount of computational power into solving your problem, but only for a very short amount of time.

The way this will work in the future of edge computing is that your Near Edge computer will act as a go-between, negotiating between the enormous potential power of the Cloud and the instantaneous needs of your Far Edge wearable.

This is all going to translate into a gradation in the level of semantic functioning across the system — low-level semantics at the Far Edge, all the way up to high-level semantics in the Cloud.
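To make that gradation concrete, here is a toy sketch, with every function name and "semantic level" invented for illustration. Each tier turns its input into a slightly higher-level description before passing it along:

```python
def far_edge(raw_pixels):
    """Lowest-level semantics: raw signal -> geometric features (toy version)."""
    return {"edges": sum(raw_pixels) // 10}

def near_edge(features):
    """Mid-level semantics: features -> a recognized object (toy version)."""
    return {"object": "hand" if features["edges"] > 2 else "unknown"}

def cloud(recognition):
    """Highest-level semantics: object -> meaning or intent (toy version)."""
    return {"intent": "grab" if recognition["object"] == "hand" else "none"}

# The pipeline runs across all three tiers, each raising the semantic level.
result = cloud(near_edge(far_edge([9, 9, 9, 9])))
```

Nothing here resembles real vision or machine learning; the point is only the shape of the system, in which each hop away from the sensor works with a more abstract description of the world.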

More on that tomorrow.

The Edge, part 5

March 19th, 2018

Here is the image I have in my mind of what an early version of “two edge computing” might look like:


Let’s say you’re wearing some brand or other of SmartGlasses (not quite on sale yet, but coming soon). On the far side of the Edge, your wearable device isn’t going to have enough battery power for super-duper graphics.

So it will use the equivalent of a Snapdragon processor like the one that’s probably in your current SmartPhone. Such processors are specifically designed to work in the low-power environment of tiny portable computers.

Somewhere nearby, within easy reach of your 5G wireless connection, will be the Near Edge, in the form of a honking big computer, such as a high end PC. This computer will have a powerful — and power-hungry — co-processor, perhaps an Nvidia processor, which can crunch machine learning computations far faster than anything your wearable could do.

If you hold your hand up in front of your face, your Far Edge wearable device will have enough computational power to realize it is looking at a 3D object. But all it will really be able to do with that information is find outlines and contours, which it will send as bursts of highly compressed data to your Near Edge computer.

That’s where your Near Edge computer’s co-processor will get to work: It will recognize that the object being seen is your hand, figure out the pose and the underlying skeleton, and send that data back to your wearable, also in the form of bursts of highly compressed data.

To you the process will appear seamless: Your hand is now a super-powered controller, able to interact with the augmented world around you in precise and intricate ways.
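Here is a minimal sketch of that round trip. The function names, the bit-grid "frame", and the naive "skeleton" are all invented stand-ins for real contour detection and pose estimation; only the shape of the pipeline (detect, compress, send, analyze, compress, send back) comes from the description above:

```python
import json
import zlib

def far_edge_extract(frame):
    """Far Edge: find rough outlines only, cheap enough for a wearable.
    A 'frame' here is a 2D grid of 0/1 pixels; a real device would run a
    lightweight contour detector on camera data."""
    outline = [(x, y) for y, row in enumerate(frame)
                      for x, v in enumerate(row) if v]
    # Compress before sending the burst over the wireless link.
    return zlib.compress(json.dumps(outline).encode())

def near_edge_estimate_pose(burst):
    """Near Edge: the heavy lifting. A toy 'skeleton' stands in for the
    machine-learning inference the co-processor would actually run."""
    outline = json.loads(zlib.decompress(burst))
    xs = [p[0] for p in outline]
    ys = [p[1] for p in outline]
    skeleton = {"is_hand": len(outline) > 3,
                "wrist": [min(xs), max(ys)],
                "center": [sum(xs) // len(xs), sum(ys) // len(ys)]}
    return zlib.compress(json.dumps(skeleton).encode())

def far_edge_apply(burst):
    """Back on the wearable: decompress the pose and use it as a controller."""
    return json.loads(zlib.decompress(burst))

frame = [[0, 1, 1, 0],
         [1, 1, 1, 1],
         [0, 1, 1, 0]]
pose = far_edge_apply(near_edge_estimate_pose(far_edge_extract(frame)))
```

The design point is that only small compressed bursts cross the wireless link in either direction, while the expensive computation stays on the Near Edge machine.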

But that’s just half of the story — the Far Edge and the Near Edge working together. What about the Cloud itself? More on that tomorrow.

The Edge, part 4

March 18th, 2018

The reason for having edge computing in the first place is an inherent asymmetry. In the case of classic edge computing it is an asymmetry in the balance between nearness to sensors and computational power.

For example, the computer attached to a surveillance camera has a high quality connection to a sensor (in this case, a video camera), but a relatively small amount of computational power. On the other hand, the central server network to which that computer is connected has an enormous computational capacity, but a relatively poor connection to the sensor.

And so the two subsystems split up the work of surveillance: The local computer can do initial movement analysis and image compression — tasks that both require relatively little compute power. Then it hands that selected and compressed result to the central network, which has the resources to perform more sophisticated tasks such as comparing a suspicious face against a huge database.
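A toy sketch of that division of labor, with simple frame differencing standing in for motion analysis and a lookup standing in for the face-matching database. Every name and data structure here is hypothetical:

```python
import zlib

def local_motion_filter(prev, curr, threshold=3):
    """Edge node: cheap frame differencing. Frames are flat lists of pixel
    intensities; only frames with enough change get forwarded upstream."""
    changed = sum(1 for a, b in zip(prev, curr) if a != b)
    if changed < threshold:
        return None                       # nothing moving: send nothing
    return zlib.compress(bytes(curr))     # compress before the long hop

def central_analyze(payload, watchlist):
    """Central server: the expensive task. A toy 'signature match' stands in
    for comparing a face against a huge database."""
    frame = list(zlib.decompress(payload))
    return [name for name, signature in watchlist.items()
            if all(p in frame for p in signature)]

prev = [0] * 16
curr = [0] * 12 + [7, 7, 9, 9]            # something moved into view
payload = local_motion_filter(prev, curr)
matches = (central_analyze(payload, {"suspect_a": [7, 9], "suspect_b": [5]})
           if payload else [])
```

The asymmetry the post describes is visible in the code: the local side does only counting and compression, while anything that needs real computational capacity happens after the handoff.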

But over the next few years, the inexorable march of Moore’s Law will enable a refinement of this paradigm. As people start to wear computational devices in the form of eyewear in their daily lives, different opportunities and constraints will arise.

This new paradigm won’t replace existing edge computing. Rather, a second edge will emerge, one which will complement and enhance the one that already exists.

More tomorrow.

The Edge, part 3

March 17th, 2018

Since this is a discussion about edge computing, I’m not focusing so much on what Nero Wolfe does to solve the crime, but rather on what Archie Goodwin does to make that possible. In other words, as Moore’s Law continues to up the game, how do we best use our limited resources at the Edge to take advantage of that ever more powerful Cloud which is just a little too far away for instant access?

Even as Moore’s Law keeps changing things up, the laws of physics remain immutable, which means that some things never change. For example, if you plug your fancy PC into the wall, you have hundreds of watts of power to play with, and heat dissipation isn’t a major problem.

But anything you carry in your pocket or wear on your head is not going to be able to draw more than a few watts of power. And even if it could, heat dissipation would quickly make things very uncomfortable.

This means that the computational power of that fancy PC under your desk is always going to be about 10 years ahead of anything you can carry with you. And the computational power you can draw from the Cloud will easily be 10 years beyond that.

If we look at all this in terms of Moore’s Law, we’re asking different parts of our computational infrastructure, from the Edge to the Cloud, to work together across different eras of the computer age. It’s as though we’re asking H. G. Wells’ Time Traveller to collaborate with Neo from The Matrix.

The Edge, part 2

March 16th, 2018

Thanks for the comments on yesterday’s post! I’m hoping my comment in response helped to clarify things, and I will just start from there.

I think of the “edge” part of edge computing as analogous to the character of Archie Goodwin in the Nero Wolfe mysteries, or the tail on a Stegosaurus. It’s a second brain which lets the system respond right away, at the very moment some response is needed. But it doesn’t replace the main brain — it just accommodates the fact that the main brain is further away, and so might not be able to respond immediately.

Yet what exactly constitutes the edge of a computing system is a moving target. After all, looming over our entire Age of Computers is the shadow of Gordon Moore.

His formulation of “Moore’s Law” in 1965 has proven to be eerily prescient. Many cybernetic innovations enter the world not because we’re getting smarter over time, but because computers grow about a thousand times more powerful every fifteen years or so.
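The arithmetic behind that thousandfold figure, assuming one doubling roughly every eighteen months:

```python
# Moore's Law, back of the envelope: one doubling about every 18 months.
years = 15
doublings = years / 1.5        # 10 doublings in 15 years
growth = 2 ** doublings        # 2^10 = 1024, i.e. about a thousandfold
```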

Which means that local processing capability within the context of a larger connected network of powerful computers is a moving target. Over time, our definition of both “local processing capability” and “powerful computers” continues to evolve.

After all, as a great Jedi knight once observed about the hazards of trying to predict forthcoming events: “Difficult to see. Always in motion is the future.”

The Edge, part 1

March 15th, 2018

One of the terms thrown around a lot these days in computing circles is “edge computing”. You experience edge computing every time you talk into your SmartPhone and Google converts what you’ve just said into text.

In that case, the audio of your voice streams to a Google server, where an extremely powerful computer uses complex algorithms to convert that audio into meaningful written sentences. The interesting part of this is that the level of processing done on that server is far greater than anything your phone could do on its own.

Essentially, the computer in your phone is acting as a gateway to a vastly more powerful computing network. Because your phone is on the “edge” of that powerful network, in short bursts you can get access to far more computational power than would be possible using just that little box in your pocket.
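The pattern is easy to sketch. In this toy version (both "models" are fake stand-ins, and no real speech recognition is happening), the phone handles the few utterances its tiny local model knows and bursts everything else up to the far more powerful server:

```python
def on_device_transcribe(audio):
    """Tiny local model: only knows a handful of command words."""
    vocab = {"on": "turn on", "off": "turn off"}
    return vocab.get(audio)

def cloud_transcribe(audio):
    """Server side: a vastly more capable model reached over the network,
    simulated here by a placeholder that handles everything."""
    return f"[cloud transcript of {len(audio)}-byte utterance]"

def transcribe(audio):
    """The edge pattern: handle what you can locally, burst to the Cloud
    for anything beyond the local device's capacity."""
    return on_device_transcribe(audio) or cloud_transcribe(audio)
```

The gateway logic is the whole point: the phone never pretends to be the big computer, it just decides when to borrow one.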

As edge computing advances in the next few years, the experience of reality itself will be fundamentally altered for many millions of people. More tomorrow.

Great idea for a TV comedy

March 14th, 2018

Recent events have given me an intriguing idea for a TV comedy series. Imagine, if you will, that ambitious celestial beings get the opportunity to work for Lucifer — the big honcho Himself.

But it isn’t easy for a fallen Angel to keep old Beelzebub happy. You’ve got to be Evil. Not merely evil with a small “e” — that won’t cut it in the Satanic administrative order.

Of course it’s ok to be incompetent and corrupt. That’s where a lot of the comedy comes in.

In fact, our audience quickly learns that in the Devil’s court, double-dealing, nepotism, shameless self-promotion, sexual shenanigans and outrageous vices in general are not only tolerated, but celebrated. If you weren’t all of those things, why the hell would you be serving the Dark Lord in the first place?

But an unwillingness to be truly Evil will get you tossed out of the Underworld faster than you can say “news cycle”.

One great thing about this idea for a TV comedy is the opportunity for various spin-off shows. After all, we all know by now that American audiences have a soft spot in their hearts for deeply flawed characters.

So for the Seraphs who get kicked out of Hell, for those poor unfortunate souls who have failed to demonstrate sufficiently pure Evilness, we can create an alternate goal: Now they are trying to get back to the Other Place.

But that turns out to be much more difficult. Which is awesome, because it means these spinoff shows can keep going for years.

Fortunately, all we’ll need is a seven-year run to make it into syndication.


March 13th, 2018

There is an old and very profound expression, which I have invariably found to be true: “You never learn from your successes.”

A lot of wisdom is packed into that deceptively simple sentence. It’s not our successes that help us get better, but our failures.

Case in point: A few weeks ago my colleagues and I gave a demo of some software we are working on. We were really hoping to impress the people we were showing it to. And in my opinion, we fell flat on our faces.

Our demo wasn’t a total failure. Some of it was actually ok. But merely ok wasn’t good enough. Merely ok wasn’t going to cut it.

So for the last few weeks we’ve been working very hard, using that experience as a guide to help us know what not to do. Then today we gave another demo of our software.

And it was awesome — totally, knock out of the ballpark awesome. The best part is, we’re not even done yet. Now that we have momentum, the demo is getting better every day, by leaps and bounds.

That’s the great thing about failure. Any time you hit bottom, you might find an opportunity to bounce.

Morning me / evening me

March 12th, 2018

Everybody seems to have a particular time of day when they can get the most work done.

In the morning, right after my cup of coffee, I’m at my most productive. I can take on daring new software tasks, and sometimes polish off a day’s work by noon.

In the evening, not so much. I can certainly put one foot in front of the other and write some code, but I can’t usually do the sort of acrobatic algorithm jockeying that morning me can do.

Still, deadlines are deadlines, and you’ve got to keep pushing forward. Which is why, in the morning, I make lists of stuff to do in the evening.

I’m not talking about the hard stuff, but the stupid stuff. The evening is when I get around to all those rote tasks that have to get done, yet don’t require much in the way of imagination.

When it comes to developing software and new algorithms, morning me is the one who designs the great recipes. Evening me is happy just to get the stove working and make sure the refrigerator is fully stocked.