Featured

Announcing my Red Bubble Store!

You’ll find it here. Every now and then I’ll get a little mentally tired with whichever piece I’m working on, and I’ll just doodle for a bit. Once in a while, I’ll come up with something brilliant while doing it.

Time to turn them into t-shirts and mugs!

So I’ve got two items so far and probably a few more coming through the rest of the week—the one featured to the right of this paragraph is a Christmas tree made out of fractal constructs, which admittedly is the kind of thing you either get or you don’t… which also makes it funnier.

And to the left of this paragraph, we have my take on Spider-Man: No Way Home. (Also a play on “Magneto Was Right” / “Thanos Was Right” / “Mysterio Was Right”) I’ll give it a seven out of ten, it was a darned good movie; there were only a few things that bugged me and if we’re going to be realistic, they specifically bugged me, so I can’t complain. My wife and I had a lot of fun and I’m looking forward to Morbius later in the coming year.

Give it a look now and then!

Brownian Noise

When we attempt to model a physical phenomenon for entertainment (or simulation) purposes, it’s less a question of perfectly fitting the real-world process that creates it, and more a question of getting close enough for the communication we intend. This is certainly true for Brownian motion.

For those who aren’t familiar, Brownian motion is the jittery, wandering movement we tend to see in falling snow, or dust motes floating through the air. Each particle is jostled by countless collisions with the molecules of the surrounding medium; the particles barely interact with one another, yet the same medium influences all of them. Other great examples include cloud diffusion, or sediment flow.

Obviously this is all over the place in modern games, but if we were to simulate it exactly, we would have serious issues. We would need to keep track of the properties of a crippling number of particles, along with their influence on neighboring particles. This is possible in theory, but not readily in practice.

So instead, we take on what’s called an Eulerian model (as opposed to a Lagrangian one, which would keep track of each individual particle), and just consider the net flow!

I’m going to go over some modern C code for what’s called Fractional (or Fractal) Brownian Motion, fBm for short, here. Then, I’m going to explain it, bit by bit. The only thing I’m going to leave in the abstract is the noise function, which can be any random number generator of your choice; an LCG, /dev/random, whatever you would like.

Every modern language I know of seems to have at least one LCG random function built into its libraries, so this shouldn’t be a problem for you; just understand that noise(p) takes the point p as a parameter and uses it to generate a difficult-to-predict value.

(If you must generate one, and all you have is an LCG expecting a scalar number, I suggest simply multiplying each dimension by some factor, running it through the same LCG with its value as the seed, and popping the result into a return vector. It doesn’t have to be perfect, just convincing.)
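As a concrete sketch of that trick in C (entirely hypothetical; the folding factors and LCG constants here are arbitrary choices, not anything sacred):

```c
#include <stdint.h>

/* Hypothetical sketch: fold a 3-D point into a single seed, run one
 * LCG step on it, and scale the result into [0, 1). */
static double noise3(double x, double y, double z)
{
    /* Fold each dimension in with a different factor so that nearby
     * points land on well-separated seeds. */
    uint64_t seed = (uint64_t)(int64_t)(x * 374761393.0)
                  ^ (uint64_t)(int64_t)(y * 668265263.0)
                  ^ (uint64_t)(int64_t)(z * 2246822519.0);

    /* One LCG step; the mod 2^64 is implicit in 64-bit arithmetic. */
    seed = seed * 6364136223846793005ULL + 1442695040888963407ULL;

    /* Keep the better-behaved high 53 bits and scale into [0, 1). */
    return (double)(seed >> 11) * (1.0 / 9007199254740992.0);
}
```

Same point in, same value out; different point in, (almost certainly) different value out. That’s all we need.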

We’ll need a handful of parameters to get this to work in a customizable fashion. Aside from the point we’re trying to generate this noise at, we’ll also want the fractal increment, the lacunarity, and the number of noise octaves. That probably threw many readers for a loop, as it isn’t at all obvious what those mean; so let’s discuss them in turn.

Octaves

Fractal Brownian motion is technically not one layer of noise, but several, summed, and usually normalized (which just means spanned from zero to one). Each component noise is a wave of a specific frequency.

If all of the waves were of identical frequency, we would lose the character of the fBm, and just have what amounts to a scaled noise pattern. So, with each component wave, we need to increase the frequency by a certain amount.

“Octave” is a term most are more familiar with musically—the number after a letter note, like C6, or D7, is its relative octave on a piano. The catch is, a difference of one octave is a doubling of frequency—as middle C (C4) is about 262 Hz, the C one octave above it (C5) is about 523 Hz. The same principle applies for our octaves in noise.

(In fact, you actually can tie this to something like an LV2 and create some weird, by which I mean awesome, sound effects; but that’s much harder to demonstrate on a web page, so we’ll be sticking with graphics for now!)

The more octaves you add to your noise function, the more detailed and winding the result is. But, this brings us to our next point. We could just assume that each layer of noise is an octave higher in its sampling, but that does feel a little limiting. What if it were half an octave? Or two octaves? This has a noticeable effect on the form of the noise, so we’ll want it to be adjustable. This is the…

Lacunarity

So, if we wanted to go purely by octave, we would simply use a frequency equal to 2^x, where x is some integer. We might have a few constants and factors in there to muddy things up to our liking, but this would be the core of it. While there are plenty of arguments, for simplicity’s sake, for sticking with 2^x-styled values, when we aren’t necessarily going for simplicity, we aren’t beholden to it at all.

I’m going to mathematically refer to that base of two as L. It’s our lacunarity, and it doesn’t have to be two. Lacunarity is an interesting term for it, derived from the Latin lacuna, meaning a gap or pool. As lacunarity increases, the size of the gaps in our fractal increases in turn, so the term is defined less mathematically than phenomenologically—that is, by how it looks, not what it is.

(In fact, lacunarity actually has a very context-dependent definition; in the study of fractals, fractals are often classed by a lacunarity property which can be geometrically measured, with something like the box counting algorithm. This is not necessarily always related to our use of it here.)

The last element to look at, given our understanding of the need for multiple octaves, lacunarity, and a noise generator, is what’s going on in the exponent x. There’s nothing saying that it has to be an integer, or even a constant. This ends up being critical for fBm, too. We refer to x with the…

Fractal Increment

Or, for short, H. Generally, in our octave multiplier L^x, we set x = −H × (octave number). So, where i is an integer in the range of zero to the number of octaves, each octave’s noise value is multiplied by an amplitude of L^(−Hi).

(I believe the choice for H for the fractal increment relates to its similarity to the fractal Hausdorff dimension, but can’t be certain of this at the moment. We’ll use it out of tradition.)

The rationale behind the negative sign is to avoid the typically inconvenient positive power, which doesn’t converge to zero as the octave count grows. I won’t lie, sometimes that’s actually what we want; but usually we don’t, and it leads to very strange behaviors. Feel free to set your increment to a negative number, giving us a positive exponent, if you want to see the result!

This gives us quite a few elements which we can slide over for the sake of controlling our Brownian noise. Let’s look at our final math for it.

Final Mathematics

So, we’re iterating up to k octaves. For each octave, we will be adjusting our sample point, conveniently by multiplying it by the lacunarity. We’ll acquire noise from that sample point, and then multiply it by a factor of L^(−Hi), where i is our current octave number. That is, we’ll be doing this:

value(p) = Σ_{i=0}^{k−1} noise(L^i · p) × L^(−Hi)

Don’t worry if that looks scary, we’ll go over it in basic procedural code. It isn’t so bad.

We’ll start with something easy to understand, but not quite perfect, and work from there.

#include <math.h>

/* Vector is a simple vector type; `point *= lacunarity` below stands
   in for scaling each of its components by the lacunarity. */
double fBm( Vector point, double H, double lacunarity, double octaves )
{
    double value = 0.0;
    
    for (int i = 0; i < octaves; i++) {
        /* noise at the current point, scaled by this octave's amplitude */
        value += Noise( point ) * pow( lacunarity, -H*i );
        point *= lacunarity;    /* step the sample point up in frequency */
    }
    
    return value;
}

(Again, remember that I’ve not defined what Noise is here; that’s just a function that takes a vector value and returns a random float. There are oodles to choose from, so pick what’s most convenient to you, in your environment. In fact, it could arguably be any function period, but the squirrellier the results, the better.)

Let’s discuss it in detail. We take a basic Vector structure called point, and iterate over octaves with it. For each iteration, we add noise, seeded with our current point, to our final value. But, first we multiply it by our lacunarity to the power of negative increment times the octave number.

That amplitude multiplier of L^(−Hi) ensures that each octave is smaller than the previous octaves in magnitude. Remember that if this were not negative, each value might be successively larger. There are uses for that, but they aren’t typical and aren’t usually useful.

Then, to take care of the variation of our sample point with each octave, we multiply it by lacunarity, so each call to noise will return a different value.

(If you would like to test the behavior of this function and don’t have a Vector class or struct handy, feel free to just use a double for it and watch the behavior of the resulting values over an iteration. You’ll just be seeing it in one dimension, but it will all still be there.)

We aren’t quite done, though. What if we have a non-integer number of octaves? Currently, the behavior of the function is floored to the integer immediately beneath the specified number of octaves. Thankfully, this is relatively simple to fix.

double fBm( Vector point, double H, double lacunarity, double octaves )
{
    double value = 0.0;
    
    int i;
    for (i=0; i<octaves; i++) {
        value += Noise( point ) * pow( lacunarity, -H*i );
        point *= lacunarity;
    }
    
    double remainder = octaves - (int)octaves;
    value += remainder * Noise( point ) * pow( lacunarity, -H*i );
    return value;
}

Let’s discuss the additions here. I’ve declared i outside the loop, in the next-higher scope, because we’re going to need to refer back to it after the loop finishes.

We get the fractional number of octaves by subtracting the integer value, which is traditionally the floor value, of octaves from it. (Some languages have a frac function that takes care of this, but this is usually what it amounts to internally.)

We then add this remainder value, multiplied by noise on our current reference point (and I remind you, this is different from the last one, as it’s already been multiplied by lacunarity), multiplied by L^(−Hi), to our final value.

Technically, this is it—you’ve got unbounded fractal Brownian motion. Dependent on your noise function, this could be a set of continuous vectors, a cloud of densities, or whatever you would like; the fBm part is primarily in the octaves method. However, if you’re going for something like densities, you may want to bound this value to [0, 1], so you can properly display it on a monitor.

How you choose to bound it likely depends on what you’re doing with it, but by dividing by the sum of maximum possible values, we can ensure that it’s always between zero and one.

double fBm( Vector point, double H, double lacunarity, double octaves )
{
    double value = 0.0;
    double sum = 0.0;
    
    int i;
    for (i=0; i<octaves; i++) {
        value += Noise( point ) * pow( lacunarity, -H*i );
        sum += pow(lacunarity, -H*i);
        point *= lacunarity;
    }
    
    double remainder = octaves - (int)octaves;
    value += remainder * Noise( point ) * pow( lacunarity, -H*i );
    sum += remainder * pow(lacunarity, -H*i);
    
    value /= sum;
    
    return value;
}

Of course, you may just want it centered around the median or mean, which would involve keeping track of the maximum and minimum values, but this is easily done with another variable and a comparison operator. It really comes down to the end user!
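A sketch of that min/max approach, hypothetical names and all:

```c
/* Hypothetical sketch: remap a buffer of fBm samples onto [0, 1] by
 * tracking the minimum and maximum actually observed, rather than
 * the theoretical bounds. */
void normalize_samples(double *samples, int count)
{
    double lo = samples[0], hi = samples[0];
    for (int i = 1; i < count; i++) {
        if (samples[i] < lo) lo = samples[i];
        if (samples[i] > hi) hi = samples[i];
    }
    double span = hi - lo;
    if (span == 0.0) span = 1.0;   /* all samples equal; avoid 0/0 */
    for (int i = 0; i < count; i++)
        samples[i] = (samples[i] - lo) / span;
}
```

The tradeoff is that you need the whole buffer in hand before you can remap any of it, whereas the sum-of-amplitudes division works sample by sample.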

I would like to thank F. Kenton Musgrave for introducing me to this function on a technical level. My changes to it are largely cosmetic, and the chapter is a great read in any case. Musgrave, F. Kenton (2002). “An Introduction to Fractals,” in Texturing and Modeling: A Procedural Approach, Third Edition, pp. 429–445.

The Results (in OSL)

I’ve rewritten this function in OSL (Open Shading Language) and implemented it in Cycles, to demonstrate its results. OSL conveniently includes a noise function which accepts any form of vector or color as a parameter. The final material is thus:

#include <stdosl.h>

shader fBM(vector uv = 0, float octaves = 1, float lacunarity = 2, float H = 1, output float hue = 0)
{
    float value, remainder;
    int i;
    
    value = 0.0;
    vector p = uv;
    
    for(i = 0; i < octaves; i++) {
        //*additive* cascade
        value += noise(p) * pow(lacunarity, -H * i);
        p *= lacunarity;
    }
    
    remainder = octaves - (int)octaves;
    if (remainder > 0.0) {
        value += remainder * noise(p) * pow(lacunarity, -H * i);
    }
    
    hue = value;
}

It’s likely that, if you’re familiar with C, you understand most of this too. I’ll go over the minor differences—each parameter to the shader has to have a default value, which is listed after the parameter with an equals sign and then its respective value. The output parameter just lists an output field from our shader, which is later plugged into an HSV node to produce a color.

I haven’t bothered to normalize it in script, as hue is circular anyway and will normalize itself. (Processor cycles are precious, after all!)

If we set UV to the object coordinate, octaves to nine, lacunarity to a basic two, and H to 0.5, we get this:

fBm demo’d over a hue space

Drop our lacunarity to one, and we have this:

A simple drop of lacunarity can dramatically change the effect!

If we drop our increment, H, to 0.1, we get a much noisier image.

Dropping our increment increases the frequency of change dramatically.

To demonstrate why our increment enters with a negative sign, let’s set the increment itself to a negative value and cancel that out. If we drop H from 0.5 to −0.5, we end up with unbounded values, which, while it sounds cool, ultimately just gives us grey noise of infinite frequency.

Believe it or not, this is extremely fine color noise from a negative increment.

I promise that there are uses for this kind of behavior, but as you might guess, there are precious few.

Lastly, dropping the number of octaves simply removes fine detail from the fBm, and gives us a much smoother cloud that follows the same pattern.

Fewer octaves intuitively results in more regularity.

Hopefully, you have an idea how fractal Brownian motion actually works now. You can alter its input parameters to skew it in different ways, or apply it to any number of values. If we wanted a normalized gradient field, as an example, we could simply pipe our “hue” into the polar coordinates of the vector. It’s applicable in any number of dimensions—which is part of why I’m not picking what noise is for you—and can provide a convincing emulation of any number of real-world fields.

The Myth of Noise

Let’s step back for a moment.

There’s an ongoing problem among engineers today, and it’s an understandable problem. We like to think that our actions are always based on the foresight of science, but the truth is, operationally, we’re very independent of the empirical method. For us, it just has to look like it works; the semantics of how, while legitimately interesting, are not bringing our paychecks any closer.

This is… acceptable, though flawed. Engineers are much closer to artists than scientific researchers; we create. We love science, but we don’t always have all the answers, and a lot of the time, neither does everyone else. What’s worse, sometimes having scientific correctness is prohibitively expensive compared to having a passable result.

Not that scientific programming can’t be a thrill in itself…

So, that brings me to “noise”. There is a persistent myth among software engineers that noise is truly random—or, if not that, that it can be made truly random. Both are amateurish bull-pucky. Noise is extremely comprehensible, and I dare say it even feels like a cheat.

Anyone who attempts to generate random numbers by deterministic means is, of course, living in a state of sin.

John Von Neumann

Computers are entirely deterministic by design. Any function can be broken down into a collection of pure functions, for which the same input will always result in the same output.

We do have rand() from <stdlib.h>, C++’s <random>, and any number of other pseudorandom—and that pseudo- is the important part—number generators, though. So how do they work? There are a lot of ways that they can work, but I’m going to start with the most common, the linear congruential generator, or LCG.

LCGs require only a seed value, which, in entertainment, is typically the time. They keep an internal state; after receiving that seed value, every value they return is a new and reasonably difficult-to-predict—at least for entertainment purposes—number. How they operate internally is painfully (and arguably beautifully) simple.

You see, every single LCG can be broken down to the same formula:

X_{n+1}=(a X_n + c) mod m
The LCG Formula

I get that this may look complicated to some, but it’s blissfully not. Borrowing LaTeX notation, which should be clear enough to most: X_{n+1} is our returned value; that’s the new random number. X_n, on the other hand, is the last random number generated. That is, it is the internal state.

It might also be the seed value.

The remaining important factors are a, c, and m, namely the multiplier, increment, and modulus. For some, that mod might look a little weird, but it’s just the modulus operation, which is very similar to a remainder. (There are a few minor differences involving negative numbers, but I’m not going to bore you with the details here. Maybe in another blog post someday.)

To clarify, if A is three and B is two, then A mod B would be one— because three divided by two is one, remainder one. That’s almost all that modulus is, which is thankfully very easy for machines to do.

If we take this, and set our multiplier a to 214,013, and increment c to 2,531,011, and lastly our modulus m to 2³², then we’ve got the equivalent of Visual C++’s random number generator. There are plenty of alternatives to this, too. Most of them are even listed on Wikipedia—this knowledge isn’t as arcane as people like to think.
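Dropping those constants straight into the formula gives us, in C, something like the following. The detail of reporting bits 16 through 30 matches how Visual C++’s rand() exposes its state, to the best of my knowledge:

```c
#include <stdint.h>

static uint32_t lcg_state = 1;

static void lcg_srand(uint32_t seed) { lcg_state = seed; }

/* X_{n+1} = (a*X_n + c) mod m.  With m = 2^32 the mod comes for
 * free, because uint32_t arithmetic wraps at 2^32 by definition. */
static int lcg_rand(void)
{
    lcg_state = lcg_state * 214013u + 2531011u;
    /* Report bits 16..30 of the state, giving a value in
     * [0, 32767], as Visual C++'s rand() does. */
    return (int)((lcg_state >> 16) & 0x7fff);
}
```

Seed it with 1 and the first few values come out 41, 18467, 6334—the classic sequence many a C programmer has seen from rand() on Windows.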

Incidentally, this is a huge problem for cryptography, as time itself is not random. It’s quite predictable. So, you wouldn’t want a system limited to an LCG to encode, say, bank account information, if it’s seeded with the time. Thankfully this isn’t a problem for entertainment purposes, where nobody is sitting down with a handful of loot drops and back-calculating the generator that produced them.

Additionally, it is very easy to pick a bad set of factors for an LCG. There’s a lot of wonderful math behind them, which for space constraints I’m not going to go into here; but any time you’re looking at a cheap algorithm for generating white noise, you’re likely to be using an LCG.

Feel free to implement one in Python right now—it isn’t any harder than it looks.

For companies and devices that need “true” random numbers, it’s typical for an external, unpredictable phenomenon to be used. I’m hesitant to use the word “random” here (as a guy who got his start in physics), because our universe, complicated as it may be, is still deterministic.

To illustrate this, the company Cloudflare famously uses a video of a wall of lava lamps to seed its encryption, as they’re much more difficult to predict. For *nix-based computers (really anything with a kernel entropy pool behind /dev/random), there’s typically a cache of real “random” data pulled from hardware timings, keystroke cadence, and any number of other elements outside of system control, which can be used to generate a cryptographically secure value.

While these elements are virtually impossible to determine from the outside, they are the result of physical (albeit often chaotic) processes, with their own impulses and mechanisms controlling them. (Contrary to popular belief, even quantum mechanics is not truly “random”; it simply fragments perception of values into unknowable facets; there are still rules.)

So what do we really mean by “random”? Most commonly, we’re looking for data which is difficult to predict. A good LCG does this. For cases where LCGs aren’t enough, there are a number of derivatives, including inversive congruential generators (which feed the modular inverse of the previous result back into the formula), and permuted congruential generators (which run the result through an output permutation that makes them better-behaved statistically), among others.
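For the curious, here’s a minimal PCG32 sketch, modeled on M. E. O’Neill’s reference generator; treat the details as illustrative rather than canonical:

```c
#include <stdint.h>

/* Minimal PCG32 sketch: an ordinary 64-bit LCG step, followed by an
 * output permutation (xorshift plus a data-dependent rotation) that
 * launders the statistically weak low bits. */
typedef struct { uint64_t state, inc; } pcg32_t;

static uint32_t pcg32_next(pcg32_t *rng)
{
    uint64_t old = rng->state;
    /* The LCG step; inc is forced odd, as the reference requires. */
    rng->state = old * 6364136223846793005ULL + (rng->inc | 1ULL);
    /* The permutation: xorshift, then rotate by the top 5 bits. */
    uint32_t xorshifted = (uint32_t)(((old >> 18) ^ old) >> 27);
    uint32_t rot = (uint32_t)(old >> 59);
    return (xorshifted >> rot) | (xorshifted << ((32u - rot) & 31u));
}
```

Note that the core of it is still the same X_{n+1} = (aX_n + c) mod m; only the output step changes.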

Like so many things, it ultimately comes down to what you’re trying to do with it, and what will be passable for your purposes. You are unlikely to ever experience true noise, only mystery!

Tiamat Preview (Arguably NSFW)

Tiamat, Take One. Imagine her seventeen feet tall.

Allow me to start by saying that if this image is not safe for your workplace, your job sucks.

That out of the way, I’ve been working on an Enuma-Elish-accurate-ish depiction of Tiamat. I know Tiamat is typically rendered as a dragon, but nothing ever said that she was one. She had “children” that were dragons, but she had children which were a lot of different things, and the Enuma Elish is ancient Akkadian to begin with, so God only knows how right we are about its meaning.

But hey, far be it from me to interfere with a steady-going game of AD&D on a technicality.

And yes, she’s got eight breasts. She was described as having an “udder”. I could not get that to work with any kind of class, so I Total Recalled this thing. In science fiction, people always appreciate supernumerary breasts; but no one ever wants to admit it. This also counts against the dragon theory, as, while she was enormous by all depictions, you cannot milk a lizard or an amphibian. I think we can put that theory to rest.

I did her husband, Absu, some time ago. The theory was that they were the ancient gods of fresh water (Absu) and salt water (Tiamat), who created life between them, and kind of accidentally ended up with humans too, whom they might not hold that against, but ultimately did not care for. It was a bit of a problem after a while… the challenge was to create something scary and weird, but also sexy and cool.

It couldn’t be scary as in “deformed freak”, but rather, the kind of thing that makes us feel like we’re the deformed freaks. I have a lot of work left to do—I like how her outfit is coming along, but it needs more detail. The Lovecraft Gold/Orichalcum body jewelry has a nice feel to it as well.

I also need to refit basically all of the major shape keys, correct the armature for animation (and add anything at all, along with root bone sensitivity, to the tail), so it’s far from done. I’ll probably touch it all up next weekend; I need to get back to Realtorgeist. When it’s done, if I can see a decent pose for it, I might turn it into new knick-knack art.

Hope everybody else out there had a wonderful and productive weekend. Peace.

New Trinket Design

“Regretris”

So today I was grinding at the same algorithm I’ve been busting at for a week, one which worked on paper but continues to demonstrate problematic behavior when implemented, and I finally had to take a breath.

Let me say, when you’re building an algorithm yourself, but cannot manage to keep track of what your code is doing or understand why it’s behaving poorly, it means two things. The first is that you may need to take a hike—literally go for a run and get some exercise—and then come back and look at it, from the beginning, with fresh eyes. There’s a good chance the problem is fundamental, and knowing this will help you get around it by applying different business logic to it.

The second thing to note is that you’re probably experiencing fatigue. I definitely am in the midst of some serious programmer’s fatigue. This is a very real thing, and if neglected (or band-aid medicated with something like excessive caffeine) it can turn into burnout, which can take years to recover from.

So, I’m breaking for the weekend and working on some artwork. Above is an idea that occurred to me on that very walk. You can get it on just about whatever you could possibly like, any kind of gadget or doohickey people personalize, right now, at Red Bubble. Do me a solid and buy one!

On my plate right now, I have two comics, a game which has gotten outstanding reviews in its pre-alpha, and a short film (arguably two) to work on, and the future is looking bright for all of them. I refuse to burn out now and am switching it up for a bit.

This weekend’s challenge is to render my vision of Tiamat, who was never precisely described in the Babylonian Enuma Elish. This is likely because of just how alien she was to our concepts. I know people usually think she was some kind of dragon, as some of her “children” were, but one, her children were a lot of different things; and two, she is plainly described as having an udder. You can’t milk a lizard. So, I’m getting creative with this one.

It may make a decent design in itself. We’ll see what happens. Hopefully in a couple of weeks I’ll have something final to put up here.

ArtStation Plus

The Frontier Medicine Entertainment Art Station Page

So, as in love as I am with their videos and lectures, I’ve decided to upgrade my account on ArtStation.com. I will, with no disrespect to my current setup, admit that depending on how easy it is to edit that blog, I may ultimately move off of WordPress–which has, otherwise, been great.

This should allow me to not only continue to improve upon my skill set, it should also promote some of my work automatically; which is never a bad thing in my business. Whether it ultimately subsumes my WordPress stuff, it’s hard to say just yet. I may keep this page, for some time, as a non-premium item.

In other news, I have been working for some time on a demonstration of how to model clothing, in Blender, without using the cloth modifier. I’ve known for some time that, as magical as the cloth modifier seems to be, there isn’t much that it does outside of animation that can’t be done on its own; and I’ve noticed it tends to backfire rather dramatically. Think of it less as a lecture, and more as a lab.

I should be finishing everything up through the shirt today; with boots, gloves, hair, and basic animation coming later. (I seem to have botched the audio recording on one section, so that remains a task to fix; but soon enough it will all be up.) This may be condensed into a basic, and much shorter, tutorial later.

Absu, Babylonian God of Fresh Waters

Here we have last summer’s rendering of the occupied throne of Absu, one of the antagonists of the Enuma Elish (“When on High”), the Babylonian creation myth. I’ve always liked that one, but it’s safe to say I’m getting a little obsessed with the material at this point.

The interesting thing about that myth, aside from it being the oldest creation story we can date, is that they never assumed that they were first on the world. It was a world of monsters beforehand, and only after Marduk’s insurrection was it really a world of man. Absu (fresh water) and Tiamat (salt water) were creator gods, sure; but they honestly couldn’t care less about man.

As I read this stuff, I have all kinds of wonderful ideas. I think it’s time I brought it back as, at minimum, a graphic novel; if I can get a rendering farm and some quality models together, along with some voice actors, maybe even an animated series.

I’m going to return to Absu later, with a fresh sculpture of him. It was all done by hand, but there are still a few things bothering me—and yeah, I know, probably only me—quite a bit. There’s also reasonable composition to the scene but a total lack of video compositing, which would really fill it out.

I’ve also got some great ideas for Marduk, the hands-in-the-dirt protagonist and future God of Magic, and Ishtar, which should redefine sexy on the ground level. They’re going to take time too, though. This story is really drawing me in.

Victoria Amazonica — The Giant Water Lily!

So this was all afternoon yesterday.

For the sake of Strings/Fracture/Symmetry/Whatever-I’m-Calling-It, I needed some jungle life references; and I bumped into the one water lily big enough to support the weight of a small child, or two, without sinking. Apparently they trap enough air under the leaves to provide serious structural support. It was something I had to work with.

The in-game render is likely to begin at only 32 samples and be much smaller; given the current GPU cost crisis (between COVID-19, the inexplicable coin mining addiction, and simply a lack of planning on the part of manufacturers and shippers—and who could blame them?) I’m rendering on a GTX 1660 at best. I’ll be honest, it is a mean little card, but this took a bit more than 12 hours to complete at 1080p with 128 samples.

I’ll likely jump to maybe 128 to 512 samples for the final, after the geometry is all figured out. I also had to determine exactly where I was willing to stop with it—I used geometry nodes for the dew on the leaf, and for the stamen on the flower; I don’t do much with hair particles anymore other than, well, hair.

The leaves actually use two materials, but I applied vertex coloring and used the red channel to bleed one into the other, so it would be less abrupt. The leaf material is entirely procedural; I created a UV map ordered by radius along y, so I could stretch a Voronoi texture out along it, which closely resembles the venation we see in leaves. Pipe it into a vector displacement or bump node, along with subsurface color, pick some dark greens and turn up the gloss, and you’ve got something pretty convincing. For the region beneath it I simply generated a spirograph pattern with so many spires coming off of it, and applied it to the material displacement, so we get those neat air pockets.

The only thing I chose to skip on was the spines these plants have beneath them, to deter herbivorous fish. It was late, and I needed it to be done; and there was no guarantee anyone would ultimately notice. Maybe later on. There are a few minor issues in this model, which of course I only notice after spending all night rendering it… but they’re inconsequential in the long game and relatively easy to fix.