The Edge, part 1

One of the terms thrown around a lot these days in computing circles is “edge computing”. You experience edge computing every time you talk into your SmartPhone and Google converts what you’ve just said into text.

In that case, the audio of your voice streams to a Google server, where an extremely powerful computer uses complex algorithms to convert that audio into meaningful written sentences. The interesting part of this is that the level of processing done on that server is far greater than anything your phone could do on its own.

Essentially, the computer in your phone is acting as a gateway to a vastly more powerful computing network. Because your phone is on the “edge” of that powerful network, in short bursts you can get access to far more computational power than would be possible using just that little box in your pocket.

As edge computing advances in the next few years, the experience of reality itself will be fundamentally altered for many millions of people. More tomorrow.

3 thoughts on “The Edge, part 1”

  1. This is the first time I’ve heard the term “edge computing.” I would expect it to be an antonym of “cloud computing,” but the scenario you used to explain “edge computing” is also a perfectly cromulent example of “cloud computing”: The computation happens in the cloud (data center) on behalf of a device at the edge of the network. So I guess the terms are synonyms? Strange.

    If edge computing is computing in a data center, what do you call the computing that actually happens in the device at the edge?

  2. Ok, I have heard the term “edge computing,” but the context I heard it in was exactly as Adrian suggested: computing at the source (object recognition in a remote security camera rather than sending the data to a server).

    I was just too nervous to mention it until I saw Adrian’s comment.

  3. Ah, sorry. I jumped to step two without sufficiently explaining step one — the step whereby even as the phone is communicating with the network, it is also doing the best it can locally, with the extremely useful but limited processing capability it can provide on the edge of the network.

    If your phone were just a microphone and speaker and wireless transmitter/receiver, the scenario I described would not be edge computing. What makes it edge computing is that your phone is a computer, one that works in a way that is complementary to the far more powerful (but higher latency) computational network to which it is attached.

    When you first speak into your SmartPhone, it performs local processing to figure out how to immediately convert your phonemes into text. But it’s not very good at doing that, because of the limitations imposed by running such a process locally, entirely on your phone.

    So while it is doing that, it is also sending your input to the Cloud, which then improves upon the initial guess that your phone came up with on its own. That’s why when you are dictating into your phone, you will initially see some bad choices, but then some seconds later you will see those choices replaced by better ones: The initial choices were made by a process running locally on your phone, and the later better ones were updates from the Cloud.

    The full potential of edge computing is reached neither by an entirely local nor by an entirely Cloud-based approach, but rather by a dance between the two. Immediate decisions are made at the edge of the network (e.g., on your SmartPhone), and then those decisions are subsequently verified, updated, integrated or otherwise improved upon by the connection to the Cloud.
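    The "dance" described in that last comment can be sketched in a few lines of code. This is purely illustrative, assuming a made-up mishearing and hypothetical function names (`local_transcribe`, `cloud_transcribe` are not a real API): the device emits a fast, rough local guess right away, then a slower but more capable cloud result arrives and replaces it.

    ```python
    # Hypothetical sketch of the edge/cloud dance in speech-to-text.
    # The device shows an immediate local guess, then swaps in the
    # cloud's better answer once it arrives.
    import concurrent.futures
    import time

    def local_transcribe(audio: str) -> str:
        # Fast, low-power on-device model: crude guess at the phonemes.
        return audio.replace("rekuhgnize", "wreck a nice")

    def cloud_transcribe(audio: str) -> str:
        # Far more capable server-side model, but higher latency.
        time.sleep(0.1)  # simulate network round trip + processing
        return audio.replace("rekuhgnize", "recognize")

    def transcribe(audio: str, on_update) -> None:
        # 1. Immediate decision at the edge: show the local guess now.
        on_update(local_transcribe(audio))
        # 2. Meanwhile, send the audio to the cloud and replace the
        #    guess when the better transcription comes back.
        with concurrent.futures.ThreadPoolExecutor() as pool:
            future = pool.submit(cloud_transcribe, audio)
            on_update(future.result())

    results = []
    transcribe("rekuhgnize speech", results.append)
    print(results)  # rough local guess first, then the cloud's correction
    ```

    The point of the sketch is the ordering: the user never waits on the network for first feedback, and the cloud's latency is hidden behind an answer that is already on screen.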
