Noise revisited, part 6

As long as I am using my new implementation of 3D noise to build something as lofty as clouds, I thought another good test would be to try the opposite. And the opposite of the heavens is the earth. So today I made a rock.

The rock is built by creating a sphere, and varying the radius of the sphere via a fractal sum of noise. That is, I displace the surface of the sphere by adding together noise at many different scales.
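
Here is the idea in sketch form, in Javascript. The noise3 function below stands in for my 3D noise, and the particular octave count and gain are just placeholders rather than the values I actually used:

// Displace a unit sphere's radius by a fractal sum of 3D noise.
// noise3(x, y, z) is assumed to return a value in roughly [-1, 1].
function rockRadius(x, y, z, octaves = 5) {
   let sum = 0, freq = 1, amp = 0.5;
   for (let i = 0; i < octaves; i++) {
      sum += amp * noise3(freq * x, freq * y, freq * z);
      freq *= 2;    // each octave adds detail at half the scale
      amp *= 0.5;   // and at half the strength
   }
   return 1 + sum;  // base radius 1, perturbed at many different scales
}
// For a unit direction (x, y, z) on the sphere, the displaced surface point is
// r * x, r * y, r * z, where r = rockRadius(x, y, z).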

This is pretty much what you see when you look at mountains or coastlines. Something that would otherwise be straight has been perturbed by random forces at many different scales. And 3D noise is really good for simulating that.

For good measure, I added a subtle mottling to the surface color, to suggest that the rock is a composite of various minerals. You can see the result below.

Noise revisited, part 5

One thing that makes me happy about modern GPUs is just how darned fast they are. I find that I can create procedural textures with multiple calls to my noise function at every pixel, and the GPU keeps up just fine.

For example, click on the image below to see a live simulation of roiling clouds. When you follow the link, you can also see the complete vertex and fragment shader code that generated the simulation.

This cloudy skies simulation was created by calling 3D noise multiple times at different scales. I find that I can call noise up to 8 times per pixel and still maintain a steady frame rate of 60 frames per second on my phone.

Let’s work out some numbers. The simulation itself is 1000×1000, so that’s one million pixels.

My noise implementation calls the cosine function 4 times to compute a repeatable random number. So to evaluate noise once, that’s 4 cosine calls for each of the x, y, z axes at each of the 8 corners of a cube.
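
In sketch form, that kind of repeatable randomness looks something like this. The constants below are made up for illustration; the real ones are in the posted shader code:

// Illustrative only: one repeatable random value from a lattice point,
// built from 4 cosine evaluations (the constants are placeholders).
function latticeRandom(i, j, k) {
   let c = Math.cos(i * 127.1 + j * 311.7 + k * 74.7)
         + Math.cos(i * 269.5 + j * 183.3 + k * 246.1)
         + Math.cos(i * 113.5 + j * 271.9 + k * 124.6)
         + Math.cos(i * 419.2 + j * 371.9 + k * 211.3);
   return c - Math.floor(c);   // same inputs always give the same value in [0, 1)
}
// One such value is needed for each of the x, y, z axes at each of a cube's 8 corners.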

That’s 4 × 3 × 8 = 96 cosine computations. Multiply that by 8 calls to the noise function per pixel, then again by one million pixels, and then again by 60 frames per second.

The result? More than 46 billion calls to the cosine function every second — not counting all the other computations involved. And it all runs just fine on the low power GPU of a smartphone.
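
(The exact figure: 96 × 8 × 1,000,000 × 60 = 46,080,000,000.)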

Pretty cool, yes?

Noise revisited, part 4

After yesterday’s big winter storm, this morning I awoke to find the windows covered in ice. I took some photos.

Below left is a photo of one of the windows here. To its right is a close-up photo of the same window.

The reason for this texture is that ice has lower density than water. That is why ice floats.

The colder water toward the outside of the window freezes first, and therefore expands first. This makes the outside surface area increase, causing the surface to buckle.

The result is a random bumpy topography. Because of the underlying physics, all of the bumps, although randomly placed, are about the same size.

It is similar to the process — a gradual increase in surface area over time — which causes drying paint to develop a bumpy surface, with all of the bumps being about the same size. And this particular natural phenomenon was also my original inspiration for creating synthetic band-limited noise, all those years ago.

For the last several days I have been writing posts about noise. So it seemed like a wonderful coincidence, when I woke up this morning, to see something like this.

It was as though nature itself was sending me noise.

Or maybe it was sending me a signal.

Noise revisited, part 3

One of the first things I like to do when I implement any procedural texture is slap it onto a sphere, and then rotate the sphere. That’s a pretty surefire way to find out whether there are any unwanted patterns, repetitions, or other objectionable visual artifacts.

So I did just that with my latest implementation of 3D noise. Fortunately, it looks as though this particular implementation passes the rotating sphere test.

To see the result for yourself, click on the image below.

More tomorrow.

Noise revisited, part 2

The language, architecture and even general philosophy of a CPU and a GPU are radically different, so it is intriguing to see them performing the same task. Here, two wildly different kinds of computers are producing exactly the same result.

To anyone looking at the two images below, they seem to be the same. But a programmer reading the corresponding code will see two clearly distinct programs.

In one case, the program is optimized for the massively parallel pipeline of a GPU. In the other case, the program relies on the flexibility of programming in Javascript on a CPU. For example, on the CPU there is no built-in normalize function to set a vector’s length to one. But it takes just a single line of Javascript code to derive such a function from a dot product:

let normalize=v=>(s=>v.map(a=>a/s))(Math.sqrt(dot(v,v)));
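
Here, dot can be any Javascript dot product, for example:

let dot = (a, b) => a.reduce((sum, ai, i) => sum + ai * b[i], 0);

With those two lines, normalize([3, 0, 4]) returns [0.6, 0, 0.8].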

From the perspective of a computer graphics designer, having the same noise function on both the CPU and GPU means that the designer is free to go back and forth between the two environments when creating noise-based models. You can choose to implement one part of your design to run on the CPU and another part to run on the GPU, and the results will match.

More tomorrow.

Noise revisited, part 1

For years I have been wanting to implement a version of my 3D noise function that produces the same result on the CPU and GPU. But I wanted it to do other things as well:

(1) It needs to use only standard WebGL and Javascript
(2) It can’t require loading any data tables
(3) It must run insanely fast — even on my phone

Well, I finally got around to it. I’ve posted a little side-by-side test of the GPU and CPU implementations, together with the source code for each.

The test shows an X,Y slice of 3D noise traveling in Z over time. You can see it here.
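
In sketch form, the CPU side of that test is just a loop over pixels, with time supplying the Z coordinate. This is a simplified stand-in for the posted code, assuming a noise3(x, y, z) function that matches the GPU version:

// Draw an X,Y slice of 3D noise into a canvas, using time as the Z coordinate.
// noise3(x, y, z) is assumed to return a value in roughly [-1, 1].
function drawNoiseSlice(canvas, time) {
   let ctx = canvas.getContext('2d');
   let img = ctx.createImageData(canvas.width, canvas.height);
   for (let y = 0; y < canvas.height; y++)
      for (let x = 0; x < canvas.width; x++) {
         let v = noise3(x / 64, y / 64, time);   // the scale factor is illustrative
         let gray = Math.floor(128 + 127 * v);   // map [-1, 1] to a gray level
         let n = 4 * (y * canvas.width + x);
         img.data[n] = img.data[n + 1] = img.data[n + 2] = gray;
         img.data[n + 3] = 255;                  // fully opaque
      }
   ctx.putImageData(img, 0, 0);
}
// Animate by calling drawNoiseSlice repeatedly with increasing time,
// so the visible slice travels through Z.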

What are the odds?

The comments on yesterday’s post remind me of an experience I had some years ago. I was at a conference in Germany, and one of my colleagues took me to where Carl Friedrich Gauss had once lived.

My colleague and I had both heard the story of how the world first learned that Gauss was a mathematical genius. When he was a child in school, his teacher gave the class an exercise to keep them busy.

In those days, every student had a slate and a piece of chalk to write with. The teacher told them to add up the numbers from one to one hundred.

While all the other children toiled away at the task, young Gauss simply wrote a number on his slate and then set the slate down. The teacher asked him why he was just sitting there.

Gauss showed his teacher the slate, which had the proper sum on it. The boy had figured out the formula in his head, had solved it, and had written down the proper answer.
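
The trick, of course, is to pair the numbers: 1 with 100, 2 with 99, and so on, giving fifty pairs that each sum to 101, for a total of 50 × 101 = 5050.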

We then discussed whether that was actually a true story. “What are the odds”, my colleague asked, “that the story is real?”

Never in my life have I been fed a better straight line. “I would say — fifty fifty.”

The first 500 digits of pi

When I was in high school there was a kid whose hobby was memorizing the digits of pi. He wasn’t particularly into math — in fact he was a very talented and dedicated musician. But he just enjoyed memorizing digits of pi.

None of this really mattered until one day when our math teacher asked whether anybody knew the first digits of pi. This kid piped up “I know the first 500 digits.”

The teacher, knowing that this kid was an indifferent math student, must have thought this was just an attempt to disrupt the class. What happened next was awesome.

The teacher handed the kid a piece of chalk and said “Write the first five hundred digits of pi on the blackboard.” He then picked up his trusty Chemical Rubber Company book of math tables while the kid proceeded to fill the board with numbers.

When he was done, the student put down the chalk and went back to his seat. The teacher, looking back and forth between the digits on the board and the book in his hands, slowly realized that this was the real deal.

For those of us in the class, seeing the board filled with the first 500 digits of pi was wonderful. But seeing the look on the teacher’s face was even better.

Reincarnation, revisited

Reading over yesterday’s post, I had a sense of déjà vu. And then I realized why: I was not talking about something in the future, but rather about something in the past.

When I developed the first general purpose procedural shader language, back in early 1984, there were no GPUs, which meant that I was operating with fewer constraints.

My key innovation was to run a fully featured custom designed computer program at every pixel. Because I was operating on a general purpose CPU, I was able to design my shader language any way I wanted.

One feature that I put in was to have vectors and matrices as primitive data types, and operations between them as native operations. This was, after all, a programming language for graphics.

But another feature that I included was a Lisp-like semantics. I could define lightweight functions at any time, and then use them as a kind of shorthand.

So my shader language ended up being, like Lisp or APL, a sort of meta-language. As I added new operators, it got progressively easier to use the language to define and experiment with procedural lighting and textures.

This is not the way today’s GPU languages work. They tend to be very rigid in their structure and data typing, because they are trying to make it easier for the compiler to optimize the code you write.

But I suspect that as GPU compilers continue to evolve, that old wheel of reincarnation will continue to spin, and people will be able to program for the GPU with the same flexibility that I was enjoying over 40 years ago in the world’s very first shader language.