Archive for the ‘Uncategorized’ Category

The Edge, part 5

Monday, March 19th, 2018

Here is the image I have in my mind of what an early version of “two-edge computing” might look like:


Let’s say you’re wearing some brand or other of SmartGlasses (not quite on sale yet, but coming soon). On the far side of the Edge, your wearable device isn’t going to have enough battery power for super-duper graphics.

So it will use the equivalent of a Snapdragon processor like the one that’s probably in your current SmartPhone. Such processors are specifically designed to work in the low-power environment of tiny portable computers.

Somewhere nearby, within easy reach of your 5G wireless connection, will be the Near Edge, in the form of a honking big computer, such as a high-end PC. This computer will have a powerful — and power-hungry — co-processor, perhaps an Nvidia processor, which can crunch machine learning computations far faster than anything your wearable could do.

If you hold your hand up in front of your face, your Far Edge wearable device will have enough computational power to realize it is looking at a 3D object. But all it will really be able to do with that information is find outlines and contours, which it will send as bursts of highly compressed data to your Near Edge computer.

That’s where your Near Edge computer’s co-processor will get to work: It will recognize that the object being seen is your hand, figure out the pose and the underlying skeleton, and send that data back to your wearable, also in the form of bursts of highly compressed data.
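In spirit, that round trip might look something like the little sketch below. Everything here is invented for illustration — the function names, the pose format, and especially the recognition step, which in reality would be a heavyweight machine learning model running on the Near Edge co-processor:

```python
import json
import zlib

def far_edge_process(outline_points):
    """Far Edge (wearable): compress detected contour points into a small burst."""
    payload = json.dumps(outline_points).encode("utf-8")
    return zlib.compress(payload)

def near_edge_process(burst):
    """Near Edge (nearby PC): decompress, recognize, send back a compact pose.
    The hand recognition and skeleton fit are stubbed out here."""
    points = json.loads(zlib.decompress(burst))
    pose = {"object": "hand", "num_points": len(points)}  # stand-in for a skeleton
    return zlib.compress(json.dumps(pose).encode("utf-8"))

def wearable_apply(burst):
    """Back on the wearable: decompress the pose and treat it as controller input."""
    return json.loads(zlib.decompress(burst))

outline = [[0, 0], [1, 2], [2, 3], [3, 1]]  # contour points found on-device
pose = wearable_apply(near_edge_process(far_edge_process(outline)))
```

The point of the sketch is simply that only small compressed bursts ever cross the wireless link, in both directions, while the expensive work happens in the middle.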

To you the process will appear seamless: Your hand is now a super-powered controller, able to interact with the augmented world around you in precise and intricate ways.

But that’s just half of the story — the Far Edge and the Near Edge working together. What about the Cloud itself? More on that tomorrow.

The Edge, part 4

Sunday, March 18th, 2018

The reason for having edge computing in the first place is an inherent asymmetry. In the case of classic edge computing it is an asymmetry in the balance between nearness to sensors and computational power.

For example, the computer attached to a surveillance camera has a high quality connection to a sensor (in this case, a video camera), but a relatively small amount of computational power. On the other hand, the central server network to which that computer is connected has an enormous computational capacity, but a relatively poor connection to the sensor.

And so the two subsystems split up the work of surveillance: The local computer can do initial movement analysis and image compression — tasks that both require relatively little compute power. Then it hands that selected and compressed result to the central network, which has the resources to perform more sophisticated tasks such as comparing a suspicious face against a huge database.
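As a toy sketch of that division of labor (everything below is invented for illustration — real movement analysis works on pixel data, not the single brightness number per frame used here, and real face matching is vastly more involved than a set lookup):

```python
def local_motion_filter(brightness, threshold=10):
    """Edge computer: cheap frame differencing flags frames worth sending upstream.
    Each frame is reduced to one brightness number to keep the sketch tiny."""
    flagged = []
    for prev, cur in zip(brightness, brightness[1:]):
        if abs(cur - prev) > threshold:  # crude stand-in for movement analysis
            flagged.append(cur)
    return flagged

def central_match(flagged, database):
    """Central server: the expensive comparison against a huge database (stubbed)."""
    return [frame for frame in flagged if frame in database]

frames = [100, 101, 150, 151, 90]  # only the big jumps get transmitted
matches = central_match(local_motion_filter(frames), database={150, 90})
```

The asymmetry shows up in the shape of the code: the edge function is cheap and runs on everything, while the central function is expensive and only ever sees the filtered trickle.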

But in the next few years the inexorable march of Moore’s Law will enable a refinement of this paradigm. As people start to wear computational devices in the form of eyewear in their daily lives, different opportunities and constraints will arise.

This new paradigm won’t replace existing edge computing. Rather, a second edge will emerge, one which will complement and enhance the one that already exists.

More tomorrow.

The Edge, part 3

Saturday, March 17th, 2018

Since this is a discussion about edge computing, I’m not focusing so much on what Nero Wolfe does to solve the crime, but rather on what Archie Goodwin does to make that possible. In other words, as Moore’s Law continues to up the game, how do we best use our limited resources at the Edge to take advantage of that ever more powerful Cloud, which is just a little too far away for instant access?

Even as Moore’s Law keeps changing things up, the laws of physics remain immutable, which means that some things never change. For example, if you plug your fancy PC into the wall, you have hundreds of watts of power to play with, and heat dissipation isn’t a major problem.

But anything you carry in your pocket or wear on your head is not going to be able to draw more than a few watts of power. And even if it could, heat dissipation would quickly make things very uncomfortable.

This means that the computational power of that fancy PC under your desk is always going to be about 10 years ahead of anything you can carry with you. And the computational power you can draw from the Cloud will easily be 10 years beyond that.

If we look at all this in terms of Moore’s Law, we’re asking different parts of our computational infrastructure, from the Edge to the Cloud, to work together across different eras of the computer age. It’s as though we’re asking H. G. Wells’ Time Traveller to collaborate with Neo from The Matrix.

The Edge, part 2

Friday, March 16th, 2018

Thanks for the comments on yesterday’s post! I’m hoping my comment in response helped to clarify things, and I will just start from there.

I think of the “edge” part of edge computing as analogous to the character of Archie Goodwin in Nero Wolfe, or the tail on a Stegosaurus. It’s a second brain which lets the system respond right away, at the very moment some response is needed. But it doesn’t replace the main brain — it just accommodates the fact that the main brain is further away, and so might not be able to respond immediately.

Yet what exactly constitutes the edge of a computing system is a moving target. After all, looming over our entire Age of Computers is the shadow of Gordon Moore.

His formulation of “Moore’s Law” in 1965 has proven to be eerily prescient. Many cybernetic innovations enter the world not because we’re getting smarter over time, but because computers, doubling in power roughly every eighteen months, grow about a thousand times more powerful every fifteen years or so.

Which means that local processing capability within the context of a larger connected network of powerful computers is a moving target. Over time, our definition of both “local processing capability” and “powerful computers” continues to evolve.

After all, as a great Jedi knight once observed about the hazards of trying to predict forthcoming events: “Difficult to see. Always in motion is the future.”

The Edge, part 1

Thursday, March 15th, 2018

One of the terms thrown around a lot these days in computing circles is “edge computing”. You experience edge computing every time you talk into your SmartPhone and Google converts what you’ve just said into text.

In that case, the audio of your voice streams to a Google server, where an extremely powerful computer uses complex algorithms to convert that audio into meaningful written sentences. The interesting part of this is that the level of processing done on that server is far greater than anything your phone could do on its own.

Essentially, the computer in your phone is acting as a gateway to a vastly more powerful computing network. Because your phone is on the “edge” of that powerful network, in short bursts you can get access to far more computational power than would be possible using just that little box in your pocket.
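The phone’s role in that exchange really is this thin, at least in spirit. In the sketch below the transcription “model” is just a stand-in lookup table (a real speech service involves streaming protocols and large neural networks, none of which could fit in your pocket):

```python
def phone_capture(chunks):
    """On the phone: buffer short bursts of audio and forward them as one payload."""
    return b"".join(chunks)

def cloud_transcribe(audio, model):
    """On the server: heavyweight processing turns audio into text (stubbed lookup)."""
    return model.get(audio, "<unrecognized>")

model = {b"hello-audio": "hello"}  # pretend acoustic model on the server
text = cloud_transcribe(phone_capture([b"hello", b"-", b"audio"]), model)
```

The edge device contributes only capture and forwarding; all of the intelligence lives on the far side of the connection.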

As edge computing advances in the next few years, the experience of reality itself will be fundamentally altered for many millions of people. More tomorrow.

Great idea for a TV comedy

Wednesday, March 14th, 2018

Recent events have given me an intriguing idea for a TV comedy series. Imagine, if you will, that ambitious celestial beings get the opportunity to work for Lucifer — the big honcho Himself.

But it isn’t easy for a fallen Angel to keep old Beelzebub happy. You’ve got to be Evil. Not merely evil with a small “e” — that won’t cut it in the Satanic administrative order.

Of course it’s ok to be incompetent and corrupt. That’s where a lot of the comedy comes in.

In fact, our audience quickly learns that in the Devil’s court, double-dealing, nepotism, shameless self-promotion, sexual shenanigans and outrageous vices in general are not only tolerated, but celebrated. If you weren’t all of those things, why the hell would you be serving the Dark Lord in the first place?

But failing to be truly Evil will get you tossed out of the Underworld faster than you can say “news cycle”.

One great thing about this idea for a TV comedy is the opportunity for various spin-off shows. After all, we all know by now that American audiences have a soft spot in their hearts for deeply flawed characters.

So for the Seraphs who get kicked out of Hell, for those poor unfortunate souls who have failed to demonstrate sufficiently pure Evilness, we can create an alternate goal: Now they are trying to get back to the Other Place.

But that turns out to be much more difficult. Which is awesome, because it means these spinoff shows can keep going for years.

Fortunately, all we’ll need is a seven-year run to make it into syndication.


Tuesday, March 13th, 2018

There is an old and very profound expression, which I have invariably found to be true: “You never learn from your successes.”

A lot of wisdom is packed into that deceptively simple sentence. It’s not our successes that help us get better, but our failures.

Case in point: A few weeks ago my colleagues and I gave a demo of some software we are working on. We were really hoping to impress the people we were showing it to. And in my opinion, we fell flat on our faces.

Our demo wasn’t a total failure. Some of it was actually ok. But merely ok wasn’t good enough. Merely ok wasn’t going to cut it.

So for the last few weeks we’ve been working very hard, using that experience as a guide to help us know what not to do. Then today we gave another demo of our software.

And it was awesome — totally, knock out of the ballpark awesome. The best part is, we’re not even done yet. Now that we have momentum, the demo is getting better every day, by leaps and bounds.

That’s the great thing about failure. Any time you hit bottom, you might find an opportunity to bounce.

Morning me / evening me

Monday, March 12th, 2018

Everybody seems to have a particular time of day when they can get the most work done.

In the morning, right after my cup of coffee, I’m at my most productive. I can take on daring new software tasks, and sometimes polish off a day’s work by noon.

In the evening, not so much. I can certainly put one foot in front of the other and write some code, but I can’t usually do the sort of acrobatic algorithm jockeying that morning me can do.

Still, deadlines are deadlines, and you’ve got to keep pushing forward. Which is why, in the morning, I make lists of stuff to do in the evening.

I’m not talking about the hard stuff, but the stupid stuff. The evening is when I get around to all those rote tasks that have to get done, yet don’t require much in the way of imagination.

When it comes to developing software and new algorithms, morning me is the one who designs the great recipes. Evening me is happy just to get the stove working and make sure the refrigerator is fully stocked.

Inter-tribal auto-corrected reality

Sunday, March 11th, 2018

Demian makes a very good point in his comment on yesterday’s post. When we look at today’s world, we indeed see many examples of people “tuning out” what they don’t want to know about.

I think the apparent contradiction between our need to connect with others through information and our need to block information from others can be addressed by considering tribalism.

Tribalism is certainly older than our species itself. After all, we see a very similar behavior pattern in our fellow great apes, which strongly suggests it was inherited from our common evolutionary ancestors.

As we all know, there are downsides to grouping people by tribe. Yet given that it has survived intact for so long, I suspect tribalism is one of those instincts, like love for one’s own progeny and a tendency to believe in supernatural gods, that has, one way or another, been helpful in preventing our species from dying out.

Which means that tribalism is baked into our brains. In other words, for better or worse we are stuck with it.

We all belong to many tribes. I personally belong to various overlapping tribes defined by many factors, including professional, cultural, ethnic, familial, political, geographic, aesthetic and metaphysical.

When we wish to bond with others that we perceive as belonging to a common tribe, we draw on our super-power of human communication. But when we wish to indicate that someone is not of our tribe, we tend to narrow or even shut down the paths of communication.

I suspect that this is the likely pattern we will see in the coming age of wearables. We will use this communication technology, as we have used all communication technologies that came before, to connect with those with whom we feel a kinship. Which means we won’t be motivated to “auto-correct” communication that comes from those people.

But we will also use these new emerging tools to help us tune out those other pesky humans — the ones we perceive as belonging to a rival tribe, the ones we don’t think have anything useful to tell us.

Alas, I suspect we will apply all sorts of filters to block those people. The tragedy is that by doing so, we will create a self-fulfilling prophecy.

Auto-correcting reality

Saturday, March 10th, 2018

Demian’s comment on yesterday’s post was very thought-provoking. Suppose, with our future augmented reality glasses or contact lenses, we could auto-correct the world around us? Bad jokes on signs would be “promoted” to good ones, visual and architectural design in questionable taste would be replaced by something more satisfying, and so on.

I suspect we won’t actually do this, and my reasoning is by analogy with what we have chosen to do in the past. As human beings, we have exactly one unquestionable super-power: Our shared ability to learn natural languages that support generative grammar. This super-power allows us to communicate with each other in incredibly powerful and subtle ways.

Consequently, the one thing we really care about is to accurately interpret the real intention behind the words and actions of other humans. We don’t always succeed at this task (far from it). Yet it is, nonetheless, the thing we care most about getting right.

After all, if we fail to understand and properly interpret the intentions of others, we find ourselves effectively cut off from other humans, and therefore from our own greatest super-power. Which is why, I posit, we will always resist technological “assistance” that could artificially reduce our ability to accurately interpret the human world around us, as flawed as that world may be.

To give an analogous example, using technology that is already familiar: When you author a document using modern word processing software, you are given the option to turn on auto-correct. If you exercise this option, your errors in spelling are automatically fixed. Also, word processing programs generally highlight questionable or awkward grammatical constructions. You then have the option to reword what you have just written.

But what we never see is document software that shows you the writing of other people with automatic corrections applied to their errors in spelling and grammar. People don’t want to see the errors of others “fixed”. They would rather see what other people actually wrote, with all the idiosyncrasies in place.

Fundamentally, we trust other people’s mistakes more than we trust software that might shield us from those mistakes, because what we really want to know is what was going on in the brain of that other human being.

Since we will continue to be human beings in the decades to come, with brains wired more or less the same way they have been for tens of thousands of years, I don’t see that this will change, no matter how far augmented reality technology may develop.