Noise revisited, part 3

One of the first things I like to do when I implement any procedural texture is slap it onto a sphere, and then rotate the sphere. That’s a pretty surefire way to find out whether there are any unwanted patterns, repetitions, or other objectionable visual artifacts.
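
In case you have never tried this, the idea is easy to sketch. Here is a rough outline in Javascript (assuming some noise(x, y, z) function; this is not my actual test code): sample the texture in the sphere’s own rotating frame, so any directional artifacts get swept past every viewing angle.

// Rough sketch (not my actual test code): sample 3D noise in the sphere's
// rotating frame, assuming some noise(x, y, z) function is available.
function sphereTexture(theta, phi, angle) {
   // point on the unit sphere
   let x = Math.cos(phi) * Math.cos(theta),
       y = Math.sin(phi),
       z = Math.cos(phi) * Math.sin(theta);
   // rotate the point about the Y axis, then sample the noise there
   let c = Math.cos(angle), s = Math.sin(angle);
   return noise(c * x + s * z, y, c * z - s * x);
}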

So I did just that with my latest implementation of 3D noise. Fortunately, it looks as though this particular implementation passes the rotating sphere test.

To see the result for yourself, you can click on the image below.

More tomorrow.

Noise revisited, part 2

The language, architecture and even general philosophy of a CPU and a GPU are radically different, so it is intriguing to see them performing the same task. Here, two wildly different kinds of computers are producing exactly the same result.

To anyone looking at the two images below, they seem to be the same. But a programmer reading the corresponding code will see two clearly distinct programs.

In one case, the program is optimized for the massively parallel pipeline of a GPU. In the other case, the program relies on the flexibility of programming in Javascript on a CPU. For example, on the CPU there is no built-in normalize function to set a vector’s length to one. But it takes just a single line of Javascript code to derive such a function from a dot product:

let normalize=v=>(s=>v.map(a=>a/s))(Math.sqrt(dot(v,v)));
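
To make that runnable on its own, here is the same thing as a tiny self-contained sketch, with the dot product defined as well:

// The dot product is itself a one-liner, and normalize is derived from it
// exactly as above. For example, normalize([3, 4, 0]) returns [0.6, 0.8, 0].
let dot = (a, b) => a.reduce((sum, x, i) => sum + x * b[i], 0);
let normalize = v => (s => v.map(a => a / s))(Math.sqrt(dot(v, v)));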

From the perspective of a computer graphics designer, having the same noise function on both the CPU and GPU means that the designer is free to go back and forth between the two environments when creating noise-based models. You can choose to implement one part of your design to run on the CPU and another part to run on the GPU, and the results will match.

More tomorrow.

Noise revisited, part 1

For years I have been wanting to implement a version of my 3D noise function that produces the same result on the CPU and GPU. But I wanted it to do some other things as well:

(1) It needs to use only standard WebGL and Javascript
(2) It can’t require loading any data tables
(3) It must run insanely fast — even on my phone

Well, I finally got around to it. I’ve posted a little side-by-side test of the GPU and CPU implementations, together with the source code for each.

The test shows an X,Y slice of 3D noise traveling in Z over time. You can see it here.
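
In case you are curious how such a test can be structured, here is a rough sketch (assuming a noise(x, y, z) function and a canvas element; it is not the actual posted code):

// Rough sketch, not the actual posted code: draw an X,Y slice of 3D noise,
// using time as the Z coordinate, assuming a noise(x, y, z) function in [-1, 1].
let canvas = document.getElementById('canvas');
let context = canvas.getContext('2d');
let image = context.createImageData(canvas.width, canvas.height);

function drawFrame(time) {
   let z = time / 1000, scale = 8 / canvas.width;
   for (let y = 0 ; y < canvas.height ; y++)
      for (let x = 0 ; x < canvas.width ; x++) {
         let value = .5 + .5 * noise(scale * x, scale * y, z);   // map to [0, 1]
         let i = 4 * (x + y * canvas.width);
         image.data[i] = image.data[i+1] = image.data[i+2] = 255 * value;
         image.data[i+3] = 255;
      }
   context.putImageData(image, 0, 0);
   requestAnimationFrame(drawFrame);
}
requestAnimationFrame(drawFrame);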

What are the odds?

The comments on yesterday’s post remind me of an experience I had some years ago. I was at a conference in Germany, and one of my colleagues took me to where Carl Friedrich Gauss had once lived.

My colleague and I had both heard the story of how the world first learned that Gauss was a mathematical genius. When he was a child in school, his teacher gave the class an exercise to keep them busy.

In those days, every student had a slate and a piece of chalk to write with. The teacher told them to add up the numbers from one to one hundred.

While all the other children toiled away at the task, young Gauss simply wrote a number on his slate and then set the slate down. The teacher asked him why he was just sitting there.

Gauss showed his teacher the slate, which had the proper sum on it. The boy had figured out the formula in his head, had solved it, and had written down the proper answer.

We then discussed whether that was actually a true story. “What are the odds,” my colleague asked, “that the story is real?”

Never in my life have I been fed a better straight line. “I would say — fifty-fifty.”

The first 500 digits of pi

When I was in high school there was a kid whose hobby was memorizing the digits of pi. He wasn’t particularly into math — in fact he was a very talented and dedicated musician. But he just enjoyed memorizing digits of pi.

None of this really mattered until one day when our math teacher asked whether anybody knew the first digits of pi. This kid piped up, “I know the first 500 digits.”

The teacher, knowing that this kid was an indifferent math student, must have thought this was just an attempt to disrupt the class. What happened next was awesome.

The teacher handed the kid a piece of chalk, and said “Write the first five hundred digits of pi on the blackboard.” Meanwhile, the teacher picked up his trusty Chemical Rubber Company book of math tables as the kid proceeded to fill the board with numbers.

When he was done, the student put down the chalk and went back to his seat. The teacher, looking back and forth between the digits on the board and the book in his hands, slowly realized that this was the real deal.

For those of us in the class, seeing the board filled with the first 500 digits of pi was wonderful. But seeing the look on the teacher’s face was even better.

Reincarnation, revisited

Reading over yesterday’s post, I had a sense of deja vu. And then I realized why: I was not talking about something in the future, but rather about something in the past.

When I developed the first general purpose procedural shader language, back in early 1984, there were no GPUs, which meant that I was operating with fewer constraints.

My key innovation was to run a fully featured, custom-designed computer program at every pixel. Because I was operating on a general purpose CPU, I was able to design my shader language any way I wanted.

One feature that I put in was to have vectors and matrices as primitive data types, and operations between them as native operations. This was, after all, a programming language for graphics.

But another feature that I included was a Lisp-like semantics. I could define lightweight functions at any time, and then use them as a kind of shorthand.

So my shader language ended up being, like Lisp or APL, a sort of meta-language. As I added new operators, it got progressively easier to use the language to define and experiment with procedural lighting and textures.
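
Just to give a flavor of that style of working (this is present-day Javascript standing in for the original language, not the actual 1984 syntax), each small definition becomes shorthand used to build the next one:

// Not the original shader language -- just a Javascript sketch of the idea
// that each lightweight definition becomes shorthand for building the next.
let dot   = (a, b) => a.reduce((s, x, i) => s + x * b[i], 0);
let scale = (v, s) => v.map(x => x * s);
let add   = (a, b) => a.map((x, i) => x + b[i]);
let mix   = (a, b, t) => add(scale(a, 1 - t), scale(b, t));

// With those in hand, a new operator such as a simple turbulence sum
// (assuming some noise(p) function) is just one more short definition:
let turbulence = p => Math.abs(noise(p))
                    + Math.abs(noise(scale(p, 2))) / 2
                    + Math.abs(noise(scale(p, 4))) / 4;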

This is not the way today’s GPU languages work. They tend to be very rigid in their structure and data typing, because they are trying to make it easier for the compiler to optimize the code you write.

But I suspect that as GPU compilers continue to evolve, that old wheel of reincarnation will continue to spin, and people will be able to program for the GPU with the same flexibility that I was enjoying over 40 years ago in the world’s very first shader language.

One foot in each world

This week I decided to reimplement some algorithms I had originally written many years ago at the beginning of my career. It was interesting to see what was the same and what was different.

The algorithms themselves were the same. But the experience of implementing them was surprisingly different.

Programming languages today are very different from the ones I used when I was first starting out. For one thing, you now have the choice of implementing on the CPU or the GPU.

So I decided to implement the same algorithms for both the CPU and the GPU, just to compare and contrast. It was fun to explore the complementary superpowers that you get within those two different programming worlds.

GPUs are all about raw power. For example, they make it super easy for you to manipulate vectors and matrices. In contrast, CPU programming is all about flexibility. For example, you can create and run a new function right in the middle of doing something else.
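
For instance (a made-up example, just to illustrate the CPU side), Javascript lets you manufacture a brand new function in the middle of a computation and use it immediately:

// A made-up illustration of CPU-side flexibility: create a new function
// on the fly, tailored to a value that was only just computed.
let makeFalloff = radius => d => Math.max(0, 1 - d / radius);

let radius = 10;                      // imagine this was just computed
let falloff = makeFalloff(radius);    // a new function, created right here
console.log(falloff(2.5));            // prints 0.75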

I suppose that one day those complementary superpowers will be gathered into a single programming environment, and then we will have it all. But until then it’s fun to keep one foot in each world.

Frozen in amber

I have fond memories of actors and actresses who were in TV shows when I was a child. And of course I have a natural tendency to ask “Whatever happened to…”

But I am starting to realize that somebody I fondly remember from my childhood TV watching is probably not around anymore. On a rational level this is an obvious point, but on an emotional level it feels very strange.

My memory of these people is frozen in amber. I remember them as being exactly as they were when I was a kid. So there is something unnerving in the thought that the people I remember from that time are, at the very least, very changed — and in many cases are no longer with us.

I guess that is part of the magic and mystery of television. In real life people grow old and eventually pass on. But on TV, you remain young forever.

Superfan

I just this week discovered the Superfan version of The Office — the version where they include all of the improvised scenes that were originally edited out.

It is even more cringe-worthy. And yet it is so much better.

When all of the scenes showing those characters at their most utterly embarrassing and unforgivable are included, the characters gain an unexpected level of humanity. It seems ironic, yet so it is.

I suspect there is some deep lesson here.