VPU

Around twenty years ago, a new phrase entered the computer-tech culture: “Graphics Processing Unit”, or GPU. The general currency of this term started when the high-end computer graphics of the ’90s (Silicon Graphics in particular) had officially become a dinosaur, and so was duly shoved aside by the warm furry mammals of low-end commodity hardware, such as nVidia, ATI, and their competitors.

We now take hardware-accelerated 3D graphics for granted in all our information devices, including our phones. It would be unheard of, in this day and age, for a smartphone not to have a GPU.

As we make the transition to wearables, something similar is about to happen with computer vision. A dedicated co-processor — a Vision Processing Unit, if you will, or VPU — entirely devoted to figuring out what you are seeing when you look out into the world, will soon be part of every consumer-level information device.

As soon as we put on our cyber-glasses, we will be able to tell where we are and who we are looking at. Inferences based on real-time object recognition of doors, bottles, furniture, cars, or whatever, will be taken for granted by a young generation that will never have known anything else.

Sixteen years ago, the new Millennium brought with it the age of the GPU: an era of affordable consumer-level high-performance 3D computer graphics that is now in every phone and laptop computer. We are now about to enter the age of the VPU. Whatever we look at, our wearable device will recognize it, and will help us figure out what to do about it.

In not so many years, a generation will come of age that will think of this not as something magical, but simply as reality.

2 thoughts on “VPU”

  1. Considering that most work in machine vision is based on neural networking, I have a feeling that the VPU would be useful for general-purpose neural networking workloads as well. An equivalent to x86 or CUDA for neural network processing would be huge for reasons way beyond machine vision. Commodification of machine intelligence means we start creating infrastructure that can be used to eventually run uploaded human minds.

  2. I recall Ivan Sutherland’s paper (“On the Design of Display Processors”) where he talked about the “Wheel of Reincarnation”. The gist was that the special features of separate display processors eventually got absorbed into the main CPU design, opening the way for newer display processors.

    This seems to have fizzled out though. While the CPU & GPU may reside on the same piece of silicon (typical in a phone’s “application processor”), for the past decade or so they’ve maintained very distinct architectures.

    After half a century, Sutherland’s “wheel” has finally ground to a stop.
