The Warrior Poet’s Guide to Writing Plugins for GIMP 2.10

After publishing The Warrior Poet’s Guide to Python and Blender 2.80, I was at a decision point. It clearly isn’t meant to be the only entry in the series, but I had a lot of options. I could throw money into marketing (well, more of it), I could take a break and get back to one of my in-house projects, or I could start on the next book.

I’m not a person to waste time, not even when I arguably should relax; so, I decided to get started on the next book. The theme of the series is professional, proven, and, most importantly, available tools, flexible enough that one can write one’s own plug-ins and add-ons for them. On top of that, these books are directed at people who are ambitious and driven enough to use the information, but may not always know where to begin.

The choice was obviously the GNU Image Manipulation Program, or GIMP. GIMP has had a resounding impact on the VFX industry. Its only realistic competitor is Photoshop, which is saying something: GIMP was produced entirely by volunteer engineers for projects of their own. It has objectively been a spearhead for the open-source software movement, and it has spawned various other projects from both its original code and its ideals, such as the GTK+ windowing toolkit.

Moreover, unlike Blender, you can easily extend GIMP with an assortment of different languages. It has native support for C, Perl, Scheme, and Python. There’s an awful lot going on under the hood in that software. However, since GIMP is built by volunteering professionals, many of its tutorials are out of date. (Documentation takes time too, after all!) I frequently come across often-cited walkthroughs on the internet that recommend techniques deprecated for nearly a decade.

I’m actually glad those walkthroughs are still there; people who aren’t used to actual source-code diving need a place to start. However, version 2.10.14 was released not long ago, the world has changed in GIMP, and no one is talking about the new features. The existing material isn’t enough. Where are the saga-length YouTube videos on GEGL tricks? Where are the WordPress pages of colorspace filters and distortion stunts?

I’m now four chapters into writing The Warrior Poet’s Guide to Building Plug-ins for GIMP 2.10, focusing on Python and C. There are still worlds of material, and associated artistry-magic, to cover here. Moreover, it’s pivotal that the material be accessible to the broadest part of my target audience, so I may include a few appendices on subjects like Python, C, and GEGL. I may even append something on basic Python to Python and Blender 2.80.

If you’re interested in writing your own plug-ins for modern GIMP, check back to this website over the next few weeks. I’m in full science-journalism mode and hope to have the initial draft done by the end of November. After that, there is a host of different projects I could take on next, from LADSPA in Audacity to OSL.

We’ll see what comes next.

Exporting Ultra-High-BPP Gray-Scale with FreeImage

Are you trying to encode data in which each value corresponds to a specific point? Perhaps a height map, or an MRI or CAT scan? Even a density map?

If so, you may be considering storing it in an image or video format. Of course, if you’ve tried that, you may have bumped into the sharp limit of what our monitors can represent, and for that matter, what our eyes can realistically see.

Most images have a maximum color depth of 8 bits per channel, that is, eight bits per pixel per color, or 256 possible shades each of red, green, and blue. The reason is that the eye has its limits on perceivable, noise-free differences in shade. According to the Young-Helmholtz trichromatic theory¹, our eyes have three types of cone cells responsible for detecting color, distributed across low, medium, and high wavelengths. (Overlap between the cone types is why we perceive a gradual blend between colors, and the sensitivity of intensity-detecting rod cells, which peaks somewhere in the blue range, partially explains the relative “brightness” of different shades. Hence, the rainbow!)

Each cone type can resolve roughly 100 shades. That means that our eyes, in theory, are capable of distinguishing between 100³, or 1,000,000, different colors. For the rare tetrachromat with a fourth cone type, it’s possibly closer to 100,000,000. In any case, 256 shades each for red, green, and blue (corresponding to low, medium, and high wavelengths) gives us 256³, or roughly 16,700,000, possible colors. All the same, for the trained eye, monitors do exist that can render high-bit-depth color, extending into the 12-bit-per-channel range; beware that they are not cheap.
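
The arithmetic above is easy to sanity-check. A quick sketch in Python; the shade counts are the back-of-envelope figures from the text, not exact physiology:

```python
# Approximate distinguishable shades per cone type (from the text).
shades_per_cone = 100

# Trichromat: three cone types, so ~one million distinguishable colors.
trichromat_colors = shades_per_cone ** 3      # 1,000,000

# A tetrachromat's fourth cone type multiplies that by another ~100.
tetrachromat_colors = shades_per_cone ** 4    # 100,000,000

# 8-bit-per-channel RGB: 256 shades each of red, green, and blue.
rgb24_colors = 256 ** 3                       # 16,777,216
```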

The drive for higher bit depth usually stops with scientists; however, when we’re talking about storing data (rather than appearance) in an image, eight bits gets us only 256 possible values. For more extreme precision, 16-bit (or even higher) can look awfully appealing.

But enough introduction. How does one export a high-bit-per-pixel single-channel image? This is fairly easy to do, and generally cross-platform, with a toolkit called FreeImage. However, like everything else, FreeImage is built around the assumption that most of its use involves displays, so how to do it isn’t immediately obvious.

There’s a specific (and very efficient) function in that toolkit which sets a pixel color according to a provided parameter. It’s usually easy to use. Its signature is as follows:

DLL_API BOOL DLL_CALLCONV FreeImage_SetPixelColor(FIBITMAP *dib, unsigned x, unsigned y, RGBQUAD *value);

Where FIBITMAP is a FreeImage bitmap, x and y are unsigned values specifying coordinates from the bottom-left, and RGBQUAD is… RGBQUAD is, ah… uh oh.

RGBQUAD is 32 bits: one byte for each primary color, and one byte reserved (where, in theory, you could store an alpha channel or any other special value). However, that limits us to, first, color, and second, no more than eight bits per channel. So, if we’re dealing with 16-bit gray-scale, we can’t use it; and if you try to pass in a pointer to a ushort, you’re asking for trouble at compilation.
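
To see why a 16-bit sample can’t fit, here is the RGBQUAD layout mimicked in Python with the struct module (field order follows the Windows/FreeImage definition: blue, green, red, reserved):

```python
import struct

# RGBQUAD: four single bytes -- blue, green, red, reserved.
RGBQUAD = struct.Struct("<4B")
assert RGBQUAD.size == 4  # 32 bits total, 8 bits per channel

# A 16-bit sample simply cannot be represented in a one-byte channel:
try:
    RGBQUAD.pack(300, 0, 0, 0)  # 300 > 255, so this overflows a byte
    overflowed = False
except struct.error:
    overflowed = True
```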

To modify monochromatic data, broadly speaking, we have to edit the scan lines directly; RGBQUAD has to be purged from the code for anything high-bit-depth. There is precious little information available on doing this, but it comes down to another wonderful method included in FreeImage, specifically for cases like this one.

DLL_API BYTE *DLL_CALLCONV FreeImage_GetScanLine(FIBITMAP *dib, int scanline);

A scan line is the line drawn, classically by the CRT beam in older monitors, across the width of the screen. The term carried over to image and texture terminology, where the alignment matches a monitor’s in most cases. So, a scan line is the range of data corresponding to one “sweep” of the screen.

This method returns a BYTE *, which many will recognize as a pointer to a specific byte in memory. If you need a pointer to, say, a ushort (16 bits), you can pretty much just cast it in most languages. So, with a little pointer arithmetic, we can access the memory location of every quantized color value, regardless of its bit count!
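
FreeImage itself is a C library, but the cast-and-offset arithmetic is easy to model in Python, with a bytearray standing in for the scan-line buffer (the two-byte stride per pixel is the only real trick):

```python
import struct

width = 4
bytes_per_pixel = 2  # 16-bit gray-scale
# A bytearray stands in for the BYTE* buffer FreeImage hands back.
scanline = bytearray(width * bytes_per_pixel)

def set_pixel16(scanline, x, value):
    # Same offset arithmetic as the C/D version: two bytes per pixel,
    # low byte first (little-endian).
    scanline[x * 2 : x * 2 + 2] = struct.pack("<H", value)

set_pixel16(scanline, 2, 0xABCD)
# Reading it back confirms the little-endian byte order:
value, = struct.unpack_from("<H", scanline, 2 * 2)
```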

The code that ultimately ended up working for me, in D, was this.

for(int y = 0; y < height; y++) {
	ubyte *scanline = FreeImage_GetScanLine(bitmap, y);
	for(int x = 0; x < width; x++) {
		ushort v = cast(ushort)(data[cast(ulong)(y * width + x)] * 0xFFFF);
		ubyte[2] bytes = nativeToLittleEndian(v);
		scanline[x * ushort.sizeof + 0] = bytes[0];
		scanline[x * ushort.sizeof + 1] = bytes[1];
	}
}

For each scan line in the image, we iterate over the number of pixels in its width. (You must know this ahead of time.) We calculate our value and store it in v. (data was passed to the function as a parameter, a double[]: the data to be rendered into the image, as a flat list of values of length height × width.)

v is a ushort, so it’s already two bytes, interpreted as unsigned. Of course, a ushort can’t be dropped into a single byte slot; it needs twice the space. But the equivalent byte array, available through std.bitmanip.nativeToLittleEndian, will do just fine. My machine works in little-endian format, so this is all that’s needed.

I suppose you could also do it manually with a little pointer math, if you enjoy that kind of thing or don’t have an appropriate package in your environment.
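
A minimal sketch of that manual route, in Python (the same shifts and masks work in D or C):

```python
v = 0x1234
low  = v & 0xFF          # low byte: 0x34
high = (v >> 8) & 0xFF   # high byte: 0x12
# Low byte first gives the same result as nativeToLittleEndian.
little_endian = bytes([low, high])
```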

We can then marshal those bytes into their equivalent positions on the scanline array. We’re done! The image can be exported to a file.

Unfortunately, you may bump into an issue where the output is upside down. This isn’t as noticeable with noise arrays, but it stands out on anything with actual dimensions. To fix it, you need to fill from the bottom up instead of the top down, which is a one-line change.

ushort v = cast(ushort)(data[cast(ulong)(((height - 1) - y) * width + x)] * 0xFFFF);

By decrementing from the maximum value of y, you can reverse the order of iteration and flip your image.
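
The index flip is easy to convince yourself of with a toy example: a 2×3 “image” stored row-major, read back with the flipped index:

```python
width, height = 3, 2
data = [0, 1, 2,     # row 0
        10, 11, 12]  # row 1

# Same index expression as the one-line change above.
flipped = [data[((height - 1) - y) * width + x]
           for y in range(height) for x in range(width)]
# Row order is reversed; order within each row is preserved.
# flipped == [10, 11, 12, 0, 1, 2]
```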

PNG and TIFF are two good formats for 16-bit-depth gray-scale images. Higher bit depths are hypothetically possible too, and the procedure should be roughly the same, but there are too many possibilities to tangle with right now. I recently even heard something about 96-bit-depth monochrome, but that’s 2⁹⁶, or ~8 × 10²⁸, values, and I’m not sure we can even get that kind of precision on most measurements. (Even Mössbauer spectroscopy only detects changes of a few parts in 10¹¹, and last I checked, that’s the finest measurement any scientist has yet made.) I suppose it’s a comfort to know that, if we ever are that certain about anything, modeling it as an image is a possibility.

Also note that channels higher than 16 bits are very difficult to read with most image software. I start bumping into trouble around 32 bits, and others can expect the same. So, roll your own viewer, or come up with another way of parsing the data, like a hex editor.

Final Code

void toFreeImagePNG(string fileName, const double width, const double height, double[] data) {
    FIBITMAP *bitmap = FreeImage_AllocateT(FIT_UINT16, cast(int)width, cast(int)height);
    for(int y = 0; y < height; y++) {
        ubyte *scanline = FreeImage_GetScanLine(bitmap, y);
        for(int x = 0; x < width; x++) {
            ushort v = cast(ushort)(data[cast(ulong)(((height - 1) - y) * width + x)] * 0xFFFF);
            ubyte[2] bytes = nativeToLittleEndian(v);
            scanline[x * ushort.sizeof + 0] = bytes[0];
            scanline[x * ushort.sizeof + 1] = bytes[1];
        }
    }
    FreeImage_Save(FIF_PNG, bitmap, fileName.toStringz);
    FreeImage_Unload(bitmap); // release the bitmap once it has been written out
}
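
FreeImage does the file-format work above; for the curious, though, a 16-bit gray-scale PNG is simple enough to build by hand with nothing but the standard library. This is an illustrative sketch of the format, not a replacement for FreeImage; note that PNG stores 16-bit samples big-endian, unlike the little-endian scan lines above:

```python
import struct
import zlib

def _chunk(tag, payload):
    # Each PNG chunk: big-endian length, 4-byte tag, payload, CRC of tag+payload.
    return (struct.pack(">I", len(payload)) + tag + payload
            + struct.pack(">I", zlib.crc32(tag + payload)))

def gray16_png(width, height, data):
    """Build a 16-bit gray-scale PNG from a flat list of floats in [0, 1]."""
    # IHDR payload: width, height, bit depth 16, color type 0 (gray-scale),
    # then compression, filter, and interlace methods (all 0).
    ihdr = struct.pack(">IIBBBBB", width, height, 16, 0, 0, 0, 0)
    raw = bytearray()
    for y in range(height):
        raw.append(0)  # per-row filter byte: 0 = no filter
        for x in range(width):
            v = int(data[y * width + x] * 0xFFFF)
            raw += struct.pack(">H", v)  # PNG samples are big-endian
    return (b"\x89PNG\r\n\x1a\n"
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(bytes(raw)))
            + _chunk(b"IEND", b""))

png = gray16_png(2, 2, [0.0, 0.25, 0.5, 1.0])
```

Writing the returned bytes to a file with a .png extension gives a valid image any 16-bit-aware viewer can open.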

Fun fact: the image used for this page is 16-bit, though the data doesn’t really necessitate it. (It’s one of my generated topographies; I’ve noticed plateauing in my data, and am upping the output to 16-bit with this to remove it. The resolution still needs to be scaled up above 256×256, though.)

You can check this, on most Linux distros, with:

$ file multarraya_fi.png
 multArrayA_FI.png: PNG image data, 256 x 256, 16-bit grayscale, non-interlaced

Happy sciencing!

¹ Kalat, James W. (2001). Biological Psychology, Seventh Edition. Belmont, CA: Wadsworth/Thomson Learning.

Webcam Sampling

Did you know that this is doable from within Blender? I added a new chapter to The Warrior-Poet’s Guide on it tonight. It shows how to access OpenCV, the de facto standard computer-vision library, from Python; it also shows how to use ffmpeg to stream webcam data directly into Blender.

Either one opens a world of possibilities for image manipulation, and for 3D image processing in particular. OpenCV and ffmpeg both support a variety of concurrent devices, in different ways. In fact, I made a note at the end about someday dedicating a Warrior-Poet’s Guide to each of them.

My web camera, projected onto the surface of a beveled cube, in quasi-Predator-Vision.

Purchase the book on Amazon to gain access to this wonderful trick. I can’t wait to see what people start doing with it!

Geomorphology with Blender

Geomorphology is the science of the shape of mountains, rivers, islands, and other natural landforms. Paleogeomorphology is the study of faults, erosion, and even tree rings, which provide data on earthquakes that may have happened thousands of years ago.

The most fascinating aspect of geomorphology is the time-bracketing of events. Earthquakes can leave faulting which, relative to known processes like ¹⁴C uptake and shoreline erosion, can be placed within windows of several centuries.

For a computational designer, the traditional approach is to generate mountains and terrains purely from Perlin noise and Voronoi diagrams. However, while the process is remarkably efficient, the results never seem to look quite right. Without the involvement of human hands, there is rarely any erosion or character to the terrain. That may matter little in the context of a game, but what if we want to see a mountain with a history behind it? What if we want the land to tell a story, as real terrain often does?

Generated digital terrain formed in Blender and DLang
Sample terrain fragment after applying wind erosion, basic particulation, tree groves and fundamental water erosion.

An in-house project, currently underway here in the studio, uses a LAN-scoped web interface built in D, and a local server, to evolve landmasses over time from a baseline. Techniques like Sobel softening, very basic Voronoi diagrams, and Perlin distribution of density allow erosive processes to iterate year by year. Since this is effectively a bake, time hasn’t been much of an issue; even so, for 512×512 images it has been moving surprisingly fast. Depending on detail, we can currently run it in roughly two seconds; combined with rendering (including volumetrics), Eevee can have it ready in about fifteen seconds.
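
The studio’s pipeline isn’t public, but the flavor of a year-by-year erosive iteration is easy to sketch. This toy talus-slope step (all names and constants hypothetical, not the project’s actual code) moves material downhill wherever the slope exceeds a stable angle:

```python
def erode_step(height_map, talus=0.1, rate=0.5):
    """One toy iteration of slope-based ('talus') erosion on a 2D height map."""
    h = [row[:] for row in height_map]  # work on a copy; read from the original
    rows, cols = len(height_map), len(height_map[0])
    for y in range(rows):
        for x in range(cols):
            for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < rows and 0 <= nx < cols:
                    diff = height_map[y][x] - height_map[ny][nx]
                    if diff > talus:  # steeper than the stable slope
                        moved = rate * (diff - talus) / 2
                        h[y][x] -= moved    # material leaves the high cell...
                        h[ny][nx] += moved  # ...and settles on the low one
    return h

heights = [[1.0, 0.0], [0.0, 0.0]]
eroded = erode_step(heights)
```

Because every transfer subtracts and adds the same amount, total material is conserved across iterations, which is what lets a bake like this run for simulated centuries without the terrain inflating or vanishing.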

Currently, output is saved as a sequence of 8-bit gray-scale PNG height maps and density maps; unfortunately, that limits us to just 2⁸, or 256, values! An effort is being made to expand it to 16-bit gray-scale PNG, bringing us to 2¹⁶, or 65,536, grades, or even 24-bit TIFF at 2²⁴, or 16,777,216, possible grades. However, most displays are only capable of 8-bit color, so not many options remain. Most of these files are purely for scientific output; as far as I know, no monitor can accurately represent anything higher than 16-bit color, and most manage only 8-bit.

The FreeImage toolkit allows, in theory, for 32-bit gray scale or even double-precision floating-point gray scale (64-bit), but that might do the project more harm than good. After all, portability is important, and nothing meant for the human eye ever seems to go above 16 bits, so there is little support for it in most design software.

In theory, Blender (or any number of other 3D design programs) would truncate it to 16 bits anyway, and once a number exceeds the processor’s native word size (typically 64 bits these days), there’s a noticeable slowdown. Time may be more valuable than precision here.

The remaining open question is: what was our planet like at its baseline, before any tectonic or fluvial geomorphing? If we started from a clean slate, with basic density voxels, would that give us a realistic result? It’s very hard to say at this point in history. We have a rough idea of how the planet formed, but between the stresses of heat and age, could any reliable evidence of that still exist?

This period is generally known as the Hadean, taking its root from “Hades”. The world was indeed very hell-like; it had an abundance of highly radioactive, short-lived elements, which have since decayed, and given that it was a mass of recently pounded-together space rocks, it was ridiculously hot. We’re talking about peak environmental erosion here, so assuming a perfect sphere with reasonable thermal radiance may still apply, whether the world ever looked like that or not.

One of the only minerals remaining from that period is known as Hadean zircon. Zircon (ZrSiO₄) has a melting point of 2550 °C (4620 °F), which is ridiculously high. Its Mohs hardness is around 7.6, a little short of topaz. It is also largely insoluble. Given the gemstone’s relative indestructibility, it seems reasonable that it would be one of the last survivors from the Hadean period; even so, only roughly 1% of zircons found globally are confirmed to be Hadean.

Hadean Zircon Fragment
Valley, John W., et al. “Hadean Age for a Post-magma-ocean Zircon Confirmed by Atom-probe Tomography.” Nature Geoscience 7.3 (2014): 219-223

The plan is to move from 2D-mapping to 3D-voxel data, and work from there.