In this tutorial, I discuss how I’m going about synchronizing my outfits across every character, without spending six weeks (or longer) doing it. The compositor is super-handy.
Victoria Amazonica — The Giant Water Lily!
So this was all afternoon yesterday.
For the sake of Strings/Fracture/Symmetry/Whatever-I’m-Calling-It, I needed some jungle life references; and I bumped into the one water lily big enough to support the weight of a small child, or two, without sinking. Apparently they trap enough air under the leaves to provide serious structural support. It was something I had to work with.
The in-game render is likely to begin at only 32 samples and be much smaller; given the current GPU cost crisis (between COVID-19, the inexplicable coin mining addiction, and simply a lack of planning on the part of manufacturers and shippers—and who could blame them?) I’m rendering on a GTX 1660 at best. I’ll be honest, it is a mean little card, but this took a bit more than 12 hours to complete at 1080p with 128 samples.
I’ll likely jump to maybe 128 to 512 samples for the final, after the geometry is all figured out. I also had to determine exactly where I was willing to stop with it—I used geometry nodes for the dew on the leaf, and for the stamen on the flower; I don’t do much with hair particles anymore other than, well, hair.
The leaves actually use two materials, but I applied vertex coloring and used the red channel to bleed one into the other, so the transition would be less abrupt. The leaf material is entirely procedural; I created a UV map ordered by radius along Y, so I could stretch a Voronoi texture out along it, which closely resembles the venation in real leaves. Pipe it into a vector displacement or bump node, along with subsurface color, pick some dark greens and turn up the gloss, and you've got something pretty convincing. For the region beneath it I simply generated a spirograph pattern with a number of spires coming off of it, and applied it to the material displacement, so we get those neat air pockets.
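For anyone curious what that red-channel blend looks like in practice, here is a minimal bpy sketch of the idea. The material name, the vertex color layer "Blend", and the two Principled BSDFs standing in for the leaf shaders are all placeholders, not my actual node graph.

```python
import bpy

# Hypothetical names; substitute your own material and vertex-color layer.
mat = bpy.data.materials.new("LeafMix")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

vcol = nodes.new("ShaderNodeVertexColor")      # reads the painted vertex colors
vcol.layer_name = "Blend"
sep  = nodes.new("ShaderNodeSeparateRGB")      # we only care about the red channel
mix  = nodes.new("ShaderNodeMixShader")
top  = nodes.new("ShaderNodeBsdfPrincipled")   # stand-in for the first leaf shader
rim  = nodes.new("ShaderNodeBsdfPrincipled")   # stand-in for the second
out  = nodes.new("ShaderNodeOutputMaterial")

links.new(vcol.outputs["Color"], sep.inputs["Image"])
links.new(sep.outputs["R"], mix.inputs["Fac"])       # red drives the blend factor
links.new(top.outputs["BSDF"], mix.inputs[1])
links.new(rim.outputs["BSDF"], mix.inputs[2])
links.new(mix.outputs["Shader"], out.inputs["Surface"])
```

The painted red channel then decides, per vertex, how much of each shader shows through, which is what keeps the seam from being abrupt.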
The only thing I chose to skip was the spines these plants have beneath them, to deter herbivorous fish. It was late, and I needed it to be done; and there was no guarantee anyone would ultimately notice. Maybe later on. There are a few minor issues in this model, which of course I only noticed after spending all night rendering it… but they're inconsequential in the long game and relatively easy to fix.
In-Game Animations, Part Two
Here’s another little doodle that—well, no, it’s not really a doodle. I spent all week getting this together. However, it should save me a world of time down the line.
I was thinking about characters with visibly changing armor and outfits; as opposed to locked-in-place player sprites where you can use an item, or put on a hat, in function, but don’t see it in practice. I find that to be an atmosphere killer, but I understand it, too—re-rendering clothing is hard work, and even if it’s easy, you still need to pour time into the actual rendering process, and hope you’ve got everything right.
Then, I remembered a trick from video editing. Render each piece of clothing to a layer—with layers which might be in front of it set to something called hold out—and composite them over each other afterwards. I’m currently using Unity for this game, which should be able to do that as well as Blender can.
So, with this system, I can add a new character to the game, provided it has the proper vertex groups, simply by fitting it over this armature and rendering to disk. I can also add new clothing at any time. Best of all, I can hue-shift the clothing to look like something new, even when I haven't finished the art for it yet, which is good for the end users.
Obviously I’ve still got a few kinks to worry about. A few items of clothing need to be refitted to the armature—like belts. Additionally, there seem to be a few cases where Hansel—that’s the old bald guy—is protruding through his clothing, which is likely a modifier-order thing. Lastly, some of the animations need touch ups; like jumping, which should have much more leg extension. But, I can start working with this.
When I got up this morning, it occurred to me that this project is best released in chapters. The first one is going to be waking up on the ship, getting your bearings, figuring out what happened, and waking everyone else, too. It will work as a transparent tutorial, too, rather than having people do an extra, fun-less level. The next chapter will be planet side.
I think about five of them will give me a satisfactory epic. It’s working beautifully for The Long Dark.
Composited Walk Cycle
Here we’ve got my major weekend project, started and finished on Saturday. Yeah… all of Saturday, but I’m pretty proud of it.
I wanted to make my design more efficient by aligning each item in the scene to the same armature. It almost worked perfectly, and I will likely still be able to fix the one notable exception. I had five characters and ten sets of clothing.
Each item was rendered to its own layer, with the human item notably set to Hold Out for the clothing. Then, each layer was piped to its own file. Afterward, I could drop each body animation in the Video Sequence Editor, with the animated outfit on top of it (Blend set to Alpha Over), and get this.
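If anyone wants to script that layer setup rather than click it together, a rough bpy sketch follows; the collection names ("Body", "Tunic", "Belt") are placeholders for however your own scene is organized.

```python
import bpy

scene = bpy.context.scene
body, clothing = "Body", ["Tunic", "Belt"]     # placeholder collection names

for name in clothing:
    layer = scene.view_layers.new(name)
    root = layer.layer_collection.children
    root[body].holdout = True                  # the body cuts a hole in this layer's alpha
    for other in clothing:
        root[other].exclude = (other != name)  # render only this piece of clothing here
```

Each view layer then gets piped to its own File Output node in the compositor, and the resulting image sequences stack back together in the VSE with Alpha Over, exactly as described above.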
Incidentally, this is also an efficient pipeline toward in-game animations, as I’ll just be using select frames for different contexts… which was the idea Friday night.
Compositing is an incredibly useful skill. It might be wise for me to write a WPG on it.
Argos, First Demo
My first demo reel of Argos, the ship's synthetic intelligence in Strings. She/he was effectively an infant when they left, but has had more than a thousand years to react, accumulate, and evolve, driven in its spare time primarily by Earth's literature and myth. When they wake up, this is Argos' physical presentation of itself; or at least, a few possibilities.
Videos
I just popped a new page onto this blog, with a list of all of my videos over the past year. There's been a lot going on in my life, and I'm happy to see that I still haven't gotten too distracted.
The reality is, I have a central web page here. I also have a YouTube channel! And a Patreon, which I haven’t touched in forever but that’s imminently going to change! And I’m an Amazon seller! And I have an Art Station account! And this! And that! And that other thing! The truth is, it’s time to start tying everything together! And damn, man, the internet has gotten complicated.
So, I’m automating the process. Every time I put together a new video, it’s going in my page posts. (I need to take advantage of this thing.) Every so many, I’ll update that core page with my favorites.
So by all means, check it out!
A Case Against Using Blender Physics
…Not that it's always a bad thing. Oftentimes it's great. But, for the tasks we have in mind, it's usually going to be more trouble than it sounds. This has nothing to do with how well designed it is; it has to do with the fact that the artist is now partially in the audience, and under the same spell, and is likely to make improper assumptions about what it can do.
I open this website with a note about the threshold of disbelief—that the real, and imaginary, are limited only by our perception of the media. This is basically rule zero when doing any kind of multimedia work. It’s the difference between hearing a cacophony of vibrating strings, and a symphony. The same goes for a 3D model; you have to keep your eye on your end goal.
That is, an illusion, or maybe an elucidation. You are communicating with your audience. To do that, you need full articular control, which in this case means full control of the behavior of your model. This is one of the things you inherently give up when you add a physics effect.
I'm sure it sounds, on the surface, like I'm being a jerk here, or failing to appreciate the wonders that Mantaflow and FLIP are capable of. I'm not; they're awesome. The question is, what is it that you saw simulated with them? Ponds, swimming pools, bathtubs. They're great for these. But what if you're trying to have water coming out of a faucet? And what if you just need a little water in a glass full of (yikes) non-axis-aligned ice cubes?
If you've been screwing around with Mantaflow at all, you know that these are nearly impossible situations. Yes, it is possible in theory to do them; scaling down your domain or turning up its sensitivity, as an example, can hypothetically make a faucet source work. However, physics isn't computationally simple. It takes time, and CPU cores, to do; and the first few times you try, you'll consistently screw something up with it.
So, people get so worked up about the awesome possibilities of Mantaflow that they overshoot them and overlook its costs; while nine times out of ten, I've found that much more manageable metasurfaces and shape keys will do the job just as well. They're easy enough that they can almost always be used in real-time rendering.
Let’s talk about cloth. Cloth is fun, especially if you have any exposure or background as a couturier, as it behaves exactly like real cloth and borrows a lot of familiar terminology. If you want to make a T-shirt, you literally just have to build faces that match the components of the shirt and sew them together with spring weights.
However, now we have a new concern. Collision. Unless you’re just modeling the shirt—you might be—you’re going to have to get it onto the body of a model. Now you need to worry about collision detection, which means you need to understand collision boundaries. Box, convex hull, or mesh? Should you use a duplicated and simplified collision boundary or not? You’re going to have to go all out to get cloth simulation to consistently work. And, you’re going to hate it. So much.

Once again, though, people bump into the physics tab and get so excited about the options, from rigid body to soft body to fluid (liquid, smoke, and fire) to cloth. They never even notice that there’s a cloth brush, in sculpt tools, meant to help with this exact task in an operationally manageable way.
And, speaking of fire and smoke, they’re fun, and they’re not that hard to get working. Unfortunately, they’re also extremely CPU hungry. Why am I worried about CPUs? Can’t I just “get better hardware”? In theory; but CPU equates to time, which we never, ever have enough of. Don’t fool yourself, you’re already running out of it. That has little to do with hardware, and more to do with how you’re dedicating your use of it.
If you can choose between an ultra-realistic path determination for a single “molecule” of water, or an extremely impressive material, the material will always be more beneficial.
For fire and smoke, it's much better to get it working within acceptable parameters, bake the whole thing to a folder, and import the contents of that folder (specifically the data directory) as a volume. Once you're done editing it, you should never allow it to change itself, as a general workflow principle. The artist must emulate God with these tasks; and volumes, while relatively new to Blender, are so much faster, and often more customizable, than a physics calculation.
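As a rough illustration, assuming the domain was cached in OpenVDB format, pulling the bake back in as a Volume object can be as simple as this; the path and file name are placeholders for wherever your own cache landed.

```python
import bpy

# Placeholder path: point this at one file inside your bake's "data" directory.
bpy.ops.object.volume_import(filepath="/tmp/smoke_cache/data/fluid_data_0001.vdb")
bpy.context.object.data.is_sequence = True   # play the numbered .vdb files as frames
```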
The thing no one is inclined to realize about Blender’s physics modifiers is that they’re basically advanced noise generators. You have as much control over them as you do over a particle effect. Much like particles, they have their place; but you can’t jump avant garde into them without expecting their effect to spill over into the rest of your animation.
At the end of the day, Blender Physics is not computational physics. With rare exceptions, your computer can’t even handle computational physics; that’s the kind of job that’s typically allocated to a cluster. Physics emulation follows the same rule as everything else—does the viewer find it believable? What do they see?
So, when attempting to emulate a physical effect, I suggest this (flexible) routine:
0. Verify the need. Are you sure you need a physics simulation to do this? Are there any sculpt tools, modifiers, constraints, or material effects that could do the job for you? Is it something that you could literally just do in compositing? If so, these will be quicker and easier paths.
1. Isolate it. You're still working with pixels and samples, in the end. You don't need your effect to spill over into other materials. I will often create an entirely new file to model it in, and simply Append the finished product into my master animation when I'm done. You most certainly do not want more than one unbaked physics simulation going at the same time.
2. Research it. There's Quick Smoke and Quick Fire, sure, but do you really understand what they're doing? Do you know how to customize them to meet your vision? I strongly suggest you put some time aside (as valuable as time is) to look up what all of these controls specifically do. If you're going to apply a physics effect on a regular basis, it's important to recognize that it is a tool in and of itself, which requires mastery. There really aren't any useful "wizard"-like interfaces for these.
3. Allocate for it. Unless you're working on something very low budget or are immediately running out of time (which is, actually, a great reason not to bother with physics at all), this is going to take a while to get right. Give yourself a day if you're completely new to it, and if you're really feeling the creative impulse to pursue it, don't be surprised if it overflows into a few days.
4. Bake it once you're done. This applies to everything: rigid and soft bodies, fluid simulations, cloth if it's got to be attached to an armature. Once you've got it behaving how you want, commit it to RAM and drive space; a minimal scripted version of this is sketched below. Having multiple physics simulations going at once can increase rendering time geometrically, so it's always better to have it sorted to keyframes. Animation keyframes don't really slow things down at all.
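If you'd rather commit everything from a script than hunt down each cache panel, something like this is a reasonable sketch; note that Mantaflow fluid domains bake through their own operators, so this only covers the point-cache systems (rigid and soft bodies, cloth, particles).

```python
import bpy

# Clear any stale point caches, then bake every point-cache system in the scene.
bpy.ops.ptcache.free_bake_all()
bpy.ops.ptcache.bake_all(bake=True)
```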
Physics emulation is kind of like handing off your design to a less experienced designer, like an intern, and giving them total creative freedom with it; then checking back in a few hours.
We don’t expect Photoshop or Quark to have physics effects. Hell, we don’t even expect After Effects to have them. So, why on earth do we fail to see them for what they are in Blender? I’ve said it once and I’ll say it again—you must know your noise.
Animating Sculpted Meshes in Blender
Or, How to Apply One of Those Weird Modifiers Which You Never Thought You Would Use
Introduction (Which may be Safely Skipped)
Blender is an extraordinarily complex and beautiful piece of software; of course it is, and I'm not sure why anyone would expect otherwise. Aside from all the passion of Ton Roosendaal, numerous contributors have been adding to the project over the years in an effort to create a full-featured multimedia production suite out of what was once a simple 3D model editor.
In fact, sometimes it gets overwhelming. As frequently as I use it, I'm still scratching the surface of a lot of its finer algorithms and tool kits. This article exists to address a common concern among medium-tier modelers.
You see, you can extrusion-model relatively easily with Blender. It was actually the first type of modeling I ever did with it, not long after it was released under a public license. Of course, this was what, 2002 or 2003, and being a humble American college kid I had only an idea of what I wanted to make, but no idea how to accomplish it. However, the software has expanded to include metaballs, various types of curves, volumes, lattices, probes and even sounds. Weight painting, vertex coloring and grouping, and armature deformation, along with shape keying, have gotten very accessible.
Moreover, it’s introduced a remarkably sophisticated sculpting kit. It’s powerful enough with a mouse, but with a graphics pad (even a cheap one) it’s almost intuitive once you understand the basics.
The actual problem only shows up after you’ve mastered applying armatures to extruded models, and after you’ve sculpted a mesh in fine detail. You have your magnum opus, your masterpiece, your triumphant paragon of a million carefully placed faces, matching precisely what you had in your mind. You understand anatomy, and the rule of thirds; the golden ratio and the laws of balance of form. You move to animate your sculpture, attaching it to a handy metarig and parenting it with automatic weights. And…

Nothing happens. You get a message, in an almost intentionally uncomfortable shade of orange, warning you that bone heat weighting failed to find a solution for "one or more" bones. It's not especially helpful, and you have no idea what went wrong; but your mesh has all ten-thousand-ish vertex groups added and every single one of them is empty.
Don’t despair just yet.
Problems with Automatic Weights
We owe automatic weights a great deal. If they didn’t exist, we would probably never start animating. I personally find that while manual weight painting has come a long way, it’s still rather clunky and something I prefer to avoid; so simply painting in the weights isn’t usually going to do the job.
The most common reasons (and I do mean common) for this bug to occur are: two separate component meshes in one object intersecting in a weird way (or really almost any way), which throws the calculation off; duplicated vertices; and, in my experience, malformed normals. Separating intersecting meshes, the less common cause, is relatively simple once you've found the offending geometry; the other two are more automatable.
You can sometimes get away with switching to edit mode, selecting everything, and running the Triangulate operation. This splits every single face into component triangles. You can then select all of those triangles and run Tris to Quads, to convert them into quadrangles. You can also precede this with Merge by Distance (formerly Remove Doubles) to remove any stray duplicate vertices.
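Scripted, with your object active, that cleanup pass is just a handful of operator calls:

```python
import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.0001)  # Merge by Distance
bpy.ops.mesh.quads_convert_to_tris()           # Triangulate
bpy.ops.mesh.tris_convert_to_quads()           # Tris to Quads
bpy.ops.object.mode_set(mode='OBJECT')
```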
Quadrangles (or "quads") are by far the easiest face type to get along with on an armature; they're flexible, deformable, and sensible; and I don't personally like mucking with anything else. However, once you have enough faces on your mesh, this can get inaccurate or incomplete. I'm not concerned about the relatively scattered nature of the quads; I'm more concerned about the fact that sometimes it can't merge them all, and you will still have a few hundred triangles to figure out.
Forget this notion I keep hearing about, that all polygons are broken down into triangles on the graphics card. That's not how graphics cards work mathematically, it's only how they work incidentally; and it isn't even necessarily true anymore. A triangle is easily just a truncated quad, and quads are much easier to guide into convincing organic shapes without weird seams showing up.
The other option is the Remesh modifier, which I still often use; but it comes at a cost. Remesh will scan over your mesh as a volume, and build a new (and perfectly armature-friendly) mesh over it, occupying roughly the same volume. (It's effectively a voxel scan.) It's very good at producing an animatable mesh; but the first problem is that it destroys the topology you've been working so hard on. The second problem is that you can easily exceed a million faces zooming in with it, which is not an easy operation to undo.
There's a way to apply either of these that will not change your topology at all, and will allow your model, no matter how godlessly non-manifold your vertices may be, to properly deform: using the Data Transfer modifier and a duplicate object.
Duplicate-Remesh-Transfer
The thing to remember about bone groups is that they’re vertex groups with weight gradients; and those will work on anything regardless of topology. So, what we can do is simply duplicate (not instance, by the way—literally duplicate!) our mesh, apply a remesh (or quadrangulation) to our new mesh, and fit it to our armature; then transfer the vertex data over to our unaltered original on the basis of the nearest relative vertex.
Start by duplicating your object with Shift+D, in the 3D Viewport. This is going to be our crash-dummy object, which will have its vertex data destroyed; we won't need it forever. All of our lossy modifications are going to happen on this.
Add a Remesh modifier, and only increase the iterations (or decrease the step size) until the new mesh covers all of the area you expect your armature to deform. Don't worry about what's happening to its appearance; we won't be using it for anything else. When you're satisfied, apply the Remesh modifier.
Alternatively, you can switch to edit mode and Merge by Distance, Triangulate, and Tris to Quads your vertices, if that works better for you. As always, I encourage experimentation and getting to know your toolkit.
Parent the duplicate to your armature, with automatic weights, and our topology-damaged mesh will animate smoothly. Feel free to test this before you continue to the next part.
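If you prefer to do the whole crash-dummy setup from a script, it looks roughly like this; the armature name "Rig" and the voxel size are placeholders you will need to adjust for your own scene.

```python
import bpy

orig = bpy.context.active_object          # your sculpt, selected and active
rig = bpy.data.objects["Rig"]             # placeholder armature name

# A real duplicate, not a linked one.
bpy.ops.object.duplicate(linked=False)
dummy = bpy.context.active_object
dummy.name = orig.name + "_CrashDummy"

# Remesh just enough to cover everything the armature is expected to deform.
rm = dummy.modifiers.new("Remesh", 'REMESH')
rm.mode = 'VOXEL'
rm.voxel_size = 0.05                      # tune until the coverage looks right
bpy.ops.object.modifier_apply(modifier=rm.name)

# Parent the crash dummy to the armature with automatic weights.
bpy.ops.object.select_all(action='DESELECT')
dummy.select_set(True)
rig.select_set(True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
```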
Switch to your original mesh, and add a new modifier. In the top-left corner of the Add Modifier drop-down, you will find Data Transfer; this is what we want. What Data Transfer does is map data from one object's vertices, edges, and faces (if specified) onto its host with an approximation algorithm. As far as I can tell, it can be used for almost anything.

We could ask for a lot of different things from the Data Transfer modifier, including vertex colors, smoothing, even freestyle data; but for now, we'll just want to check Vertex Data and, beneath it, Vertex Groups.

Remember how your vertex groups were automatically populated in your duplicate mesh, which should still (very roughly) approximate your original? What we will be doing here is copying data from it. Click Source and select the name of your duplicated object. This will be our data source.
For this next part, I believe it is very important to ensure that the origin points of your original and deformed objects are in exactly the same place. It will save you some time. You could in theory also do this by using local coordinates, but that isn't the default, and most of us are, after all, on some form of deadline.
We will next want to click on Generate Data Layers. What this does is create the essential layers for our original object from our deformed object. If I'm going to be straightforward here, I have no idea exactly how essential it is, but it does allow you to preview your weight paints and ensure that everything is working well.
For Mapping, I generally recommend Nearest Vertex, and Mix Mode should be Replace, unless you've got something weird going on (and every artist who stays busy does, now and then). This determines how the property at the host object's vertex should be interpolated from the properties of the old object's vertices. Feel free to play with it when you've got some patience to look at the other options.
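The same modifier setup, done from Python, would look something like this; "Original" and "CrashDummy" are placeholder names for your untouched sculpt and its remeshed duplicate.

```python
import bpy

orig  = bpy.data.objects["Original"]       # placeholder names
dummy = bpy.data.objects["CrashDummy"]

mod = orig.modifiers.new("WeightTransfer", 'DATA_TRANSFER')
mod.object = dummy                         # Source
mod.use_vert_data = True                   # Vertex Data
mod.data_types_verts = {'VGROUP_WEIGHTS'}  # Vertex Groups
mod.vert_mapping = 'NEAREST'               # Mapping: Nearest Vertex
mod.mix_mode = 'REPLACE'                   # Mix Mode: Replace

# The equivalent of the Generate Data Layers button:
bpy.context.view_layer.objects.active = orig
bpy.ops.object.datalayout_transfer(modifier=mod.name)
```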
If you have a lot of faces in your mesh, and I’m assuming that you do, this will take a moment to complete. Don’t worry, Blender isn’t freezing up, though the process may take up an entire core of your CPU to finish. For each and every vertex, it is scanning a BVH of the other mesh and interpolating between closest matches—in essence you’ve just got a lot of data for it to work through. Go make a coffee or check your mailbox, and come back; you’ll hopefully only need to do it once anyway.
Attaching to the Armature
Almost there.
You'll want to apply the modifier, too. This will also take a moment. Once the modifier is applied, you can check the Vertex Groups panel to ensure that all of your bone groups have been transferred, and have been populated in weight paint mode.
Now, we just need to attach the armature; for which you'll notice all of the bone groups are already properly named. To do this, select the original mesh, and then the armature. Press Ctrl+P, and (this part is important) do NOT select Automatic Weights. Select Armature Deform, the heading itself, and that's it. If you select Automatic Weights or any other envelope-calculating option, there's a very good chance that it's going to ruin your vertex groups, and you will have to start over. Since we've already configured our vertex groups, we do not need to do that, and should pointedly avoid it.
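Scripted, the safe version of that parenting step (with placeholder object names) is simply:

```python
import bpy

mesh = bpy.data.objects["Original"]            # placeholder names
rig = bpy.data.objects["Rig"]

bpy.ops.object.select_all(action='DESELECT')
mesh.select_set(True)
rig.select_set(True)
bpy.context.view_layer.objects.active = rig    # the armature must be active

# 'ARMATURE' is plain Armature Deform; 'ARMATURE_AUTO' would recalculate and
# overwrite the vertex groups we just transferred, which is what we are avoiding.
bpy.ops.object.parent_set(type='ARMATURE')
```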
Once you've done this, you can check your armature in pose mode and ensure that your geometry, no matter how godlessly non-manifold it may be, deforms accordingly. All we needed to do was work around the weight-painting constraints and transfer the resultant data over to our original mesh, which Data Transfer is designed to do.
You can most certainly delete your remeshed copy afterward; you will not need it again.
Conclusion
When presented with a daunting task, it's important to step back and consider what the issue may actually be. In this case, it's all about calculating vertex weights; and this can easily be done on a remeshed duplicate and cloned over with Data Transfer.
This makes virtually any geometric construction animatable in Blender, without compromising topology or detail.
Keeping a Texture Aligned on a Sculpt-able Mesh
UVs are awesome. They align simple 2D images—a lot of them, actually—on a material, even when the associated mesh moves, even when that mesh is animated. You can even have more than one map, if you need it, and we should all be extremely grateful that they exist.
That said, sometimes they don’t. As an example, when sculpting, a task that is particularly important for humanoid models, there is a feature called “DynTopo” in Blender (or “Dynamesh”, the near-but-not-quite equivalent, in ZBrush) that does not preserve UVs and, looking at the math, can’t necessarily be trusted to preserve the right UVs if it does. With our models, we do this a lot.
The general routine is to finish sculpting (or whichever other miscellaneous UV-destructive task you need) before unwrapping your mesh into UVs. This isn't bad, all things considered; but from an artist's workflow it can be extremely limiting. What if we want to add another feature later, or something we have planned doesn't work out like it does in our heads? What do we do, just use an unrealistically small polygon size?
There are idiot ways around this, and then there are smart ways around it. The most popular among them is perhaps the multiresolution modifier, which is great, but it’s not the only one, and my favorite is much more subtle than that. I’ve rambled enough; let me show you how to select specific points on your model (in Blender, in this case) and map a texture by displacement from them, regardless of what happens to your polytope.
Adding your Control Points
Technically, you could use any object to do this. Sometimes a volume or a curve is more sensible; but, like any systems engineer, I like to do the minimum amount of damage when I add a new feature, control points included.
In Blender, there is such an object as an "empty". It does not render, at all. You can choose multiple ways of representing it, and the only service it provides is to remember a reference point in your scene. If we were talking about GIMP or even Inkscape (which is a bit more complicated with it), we would call these guides; they exist purely for the renderer and artist's sake.
Presuming that you're starting with a basic 3D humanoid, place your 3D cursor over the solar plexus. If you're rendering something else, go with a rough center point of its anatomy, from which everything else can be measured. This is a question purely of what will work for the artist; but it is the point you will be measuring from, so keep it close to the root of your armature.

From here, we can add an empty. In Blender, this occurs with Shift+A and is its own category. Subcategories are all empties, but include such things as plain axes, arrows, a single straight arrow (which I will be using here), a number of simple geometric primitives, and if none of those work you can even use an image.
Plain axes are the knee-jerk choice, but for visible clarity, let’s choose “single arrow” instead. Plain axes are a little difficult to locate inside our mesh.


Start by parenting your new empty to your mesh. To do this, select the empty, then the mesh (with Shift held down), and press Ctrl+P. Whatever your mesh does now, your empty will do too.
Now, let’s discuss what the empty can tell your renderer. It’s clearly helpful as a visual guide, especially if you’re in some orthographic view or something even weirder; but what about our material?
Material Linking
Blender has a very fancy material editor, even compared to a number of pricier products. It is actually possible to link almost any property, including properties of other objects, into a material, if you're careful.
We’re going to be very basic about this, for the sake of the demonstration. (As a friendly note to other tutorial writers, I will say this. As much as I love to watch a good sculpt, I’m always a bit disappointed by your generic YouTube tutorial that spends half an hour addressing tangential needs, no matter how cool it can be to watch. I’m often on a timeline, which is the only reason I don’t just take a course on the subject. I know you mean well and appreciate the work, but please be considerate.)
Create a new material for your mesh, and make it a simple diffuse material. (You could make it anything, but we won’t need much for the core concept.) We’ll begin with two nodes, a simple shader that produces a closure for diffuse lighting, and our output. We will vary our color by distance from the empty.

To your color input, attach a Color→Hue Saturation Value node. Set its Color input to full red, with zero green and zero blue. Saturation and value can remain at one. We will be rotating the hue by distance from our solar plexus.
Classically, if we just wanted the relative displacement in local coordinates, one way to do it is to simply use the vector difference between Geometry→Position and Object Info→Location. This gives us the offset between our literal geometric coordinate and our object origin. If you aren't concerned about direction and are more concerned about distance, you can just take the length of this vector. You could do a number of other things, too, but we'll be keeping it simple for now. Let's start by doing this.
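For reference, here is that node setup built from Python; the object name "Body" is a placeholder, and the Diffuse BSDF is just the simple shader mentioned above.

```python
import bpy

obj = bpy.data.objects["Body"]                 # placeholder object name
mat = bpy.data.materials.new("DistanceHue")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

out   = nodes.new("ShaderNodeOutputMaterial")
bsdf  = nodes.new("ShaderNodeBsdfDiffuse")
hsv   = nodes.new("ShaderNodeHueSaturation")
geom  = nodes.new("ShaderNodeNewGeometry")     # Geometry -> Position
info  = nodes.new("ShaderNodeObjectInfo")      # Object Info -> Location
delta = nodes.new("ShaderNodeVectorMath")
delta.operation = 'SUBTRACT'
dist  = nodes.new("ShaderNodeVectorMath")
dist.operation = 'LENGTH'

hsv.inputs["Color"].default_value = (1.0, 0.0, 0.0, 1.0)   # full red

links.new(geom.outputs["Position"], delta.inputs[0])
links.new(info.outputs["Location"], delta.inputs[1])
links.new(delta.outputs["Vector"], dist.inputs[0])
links.new(dist.outputs["Value"], hsv.inputs["Hue"])        # rotate hue by distance
links.new(hsv.outputs["Color"], bsdf.inputs["Color"])
links.new(bsdf.outputs["BSDF"], out.inputs["Surface"])

obj.data.materials.append(mat)
```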

You'll see with this shader that your actor now has a rainbow hue from the hue rotation, centered around his origin.

In my case, the origin is near the feet, so the hue rotation occurs mostly by z-coordinate. This isn’t what we want, but it’s a start. We’re aiming for our external empty, aren’t we?
The only problem with the current state of Object Info is that it only provides info on our shaded object, no other object. However, we have a handy feature to the rescue, one related to the fact that Blender is, before anything else, a Python IDE by design. Drivers.
Replace your Object Info node with a Converter→Combine XYZ node. This, incidentally, is also a sort of Value node but for vectors. We’re going to tie it to our empty location.

Drivers
Drivers are a severely (severely) under-recognized feature of Blender. They allow you to take nearly any property of your scene and bind it to some other property. You could change glossiness by location, as an example; or material by light exposure. They're seriously not talked about enough. We'll be using one here.

After replacing your Object Info node with a Combine XYZ node, hover over the X coordinate and press Ctrl+D. (Or, if you've mucked with your default keymap and Ctrl+D no longer works, right-click it and select Add Driver.) This opens up an editor that provides no shortage of cool possibilities.
There are a few things I would like to highlight about it, but I do encourage you to play with it, break it, fix it, and learn how it works inside and out on your own time. But for now…
There are several types of drivers, selectable immediately under settings. They carry out understandable and frequently used expressions; but are all effectively subsets of the default, which is a scripted expression. We’ll be using that.
The expression itself takes any number of declared variables and performs an operation on them in Python script. (It is not yet possible to import an entire text-editor-space script like this, but this is literal Python, so I'm sure it's doable. Our needs are more basic.)
The default expression is var, which is the default variable that we can link to other elements in our scene, plus zero… for some reason. That's an identity operation; I don't know why they did it, but I'm guessing it's for illustrative purposes. This is, fortunately, all we need, but remember that sqrt(var), pow(var, 3.5), and var/12.0 + pi are all perfectly valid if they serve your needs… just make sure it's deterministic and interpretable Python.

Beneath this expression, we have a list of our "input variables", which are the facets of our scene that we want to use. Initially, var will be colored bright red in its Object socket, as leaving that field empty resolves to total nonsense for Blender. It will tell you as much, right there in the window. That isn't strictly true; there's nothing wrong with the default expression. However, it doesn't know where to pull the data from, which is just as problematic. To change this, we will click on that Object box under var's definition and select our empty. Type can remain "X Location" and Space can be "World Space".
You should now have a driver-bound coordinate in your Combine XYZ node. Try moving your empty around on the X-axis, and watch what happens to the node when you place it. We can do the same for the Y-axis and the Z-axis, making sure to select “Y Location” and “Z Location” when we do it, and we’ll have our model shaded relative to that point.
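If you ever need to set up a pile of these, the same drivers can be added from Python; in this sketch the object "Body", the node "Combine XYZ", and the empty "SolarPlexus" are all placeholder names for whatever your scene actually uses.

```python
import bpy

mat   = bpy.data.objects["Body"].active_material     # placeholder names throughout
node  = mat.node_tree.nodes["Combine XYZ"]
empty = bpy.data.objects["SolarPlexus"]

for i, channel in enumerate(("LOC_X", "LOC_Y", "LOC_Z")):
    fcurve = node.inputs[i].driver_add("default_value")
    drv = fcurve.driver
    drv.type = 'SCRIPTED'
    drv.expression = "var"                   # the same identity expression as above
    var = drv.variables.new()
    var.name = "var"
    var.type = 'TRANSFORMS'
    var.targets[0].id = empty
    var.targets[0].transform_type = channel  # X, Y, then Z Location
    var.targets[0].transform_space = 'WORLD_SPACE'
```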
Take a look at your model now, and for a little fun, drag the empty around in 3-space.
Results!

Just like that, with the appropriate use of drivers, you can connect any object to a shader. Go ahead and add dyntopo sculpting to it if you would like; as there’s no UV map added, it won’t affect our shader.
That said, this is not the only way to do this or even always the best way. Once an artist is absolutely certain that sculpting is done and topology is fixed, it’s still a good idea for them to replace these with UV maps. Mapping, in general, can do a lot to make any model look more realistic.
However, in cases where you might be modeling multiple shaders over each other in the same material, this is often the best way to do it. I personally am working on a professional model of the ancient Babylonian monster-god-of-fresh-waters, Absu, and needed to add both a visible poison pulse, and a golden mud melting off of him; and I could come up with nothing better for multiple features than drivers and empties.
It’s for a syndication, but as I have limited processing power (here’s a little secret… nobody has infinite processing power or time, not even at major studios) and am working on my next book at the same time, I needed something quick and reliable.
Some day I might write a text on drivers by themselves; but I’ve still got a great deal to learn about them myself. They’re the easiest way to add a major feature to Blender without diving into the actual C++.
I hope this method becomes more popular as I don’t meet a lot of other modelers who make use of it, and I’ve yet to bump into a serious problem with having it in my model source. Perhaps it will help all you guys out too.
As a full disclosure, the particular humanoid model used in this tutorial was done with MBLab, an add-on produced for creating humanoid armatures and associated meshes in Blender. It’s open and it’s awesome. I don’t personally usually use it, as I feel it’s too all-encompassing to be any good for me when I’m doing weird stuff with my models; but it’s still great for pounding out an idea for basic testing, or getting something ready-to-go for a tutorial. Needless to say, all vertex groups, constructive modifiers, and so forth were removed before the tutorial itself.
(It is definitely not even close to my Absu mesh. I’ve been working for two weeks on that guy with nothing but virtual clay and a graphics pad. He is humongous, terrible, glorious, and monstrous, and should probably be shared here when I’m done!)

Thank You for your Impassioned Response!
High time, I think, that I dropped an update on here.
The reception for my first two books this year has been overwhelming—I’m feeling a lot of gratitude. It’s also helpful for morale, to see so many people grabbing them off the digital shelf. Writing a proper book is not a small task; and for those of us with a real goal, not just writing-for-the-sake-of-writing, it’s not something that should be taken lightly, either.
My next two books are, almost certainly, going to be on LV2—the open and modular sound processing API that has asserted itself as a likely future for VSTs—and OSL, or Open Shading Language, the customizable shader system built into Blender. They're distinctly separate subjects from my last two books, on Blender plug-ins (unrelated, or at least not immediately related, to shader programming) and GIMP plug-ins.
LV2 is a step away from the last two, which arguably revolved around creating plug-ins for 3D and 2D art. This is audio. Audio is a sophisticated subject; while it’s easy enough to play back a sine wave, once you’re working with envelopes and the frequency domain you need to start considering things like Fourier transforms and complex analysis. That’s definitely not the kind of thing you can rush through explaining.
As flagrantly-industry-insider as all of that sounds, I promise these things are actually kind of fun. The point of the book is to convey them in a tangible way, one that's easy to understand. It's new terrain, but you can do so many cool things with it!
It's arguable that these topics will be beneficial to graphics programming, too; JPEG actually compresses images with the discrete cosine transform, a close cousin of the Fourier transform. God knows there are all sorts of cool plug-ins I could make using it. However, it requires a steady pace to explain; when I first learned about Fourier transforms, the subject was rushed at me. It took about a month before I had my Eureka moment and understood them; and looking back it's so painfully simple. I'm not going to make the same mistake explaining it to you all; you're going to get them introduced in the most colorful way I can manage.
Additionally, unlike with Blender and GIMP, I need to go over latency compensation. There’s almost no such thing as a latency-free effect; the closest I can get to one is maybe a simple amplifier. Processing digital signal data takes time, and whether you’re dealing with real-time audio (you totally can) or a post op, that latency needs to be taken into account. Heck, if we’re building something like a delay, latency is the intended effect.
This means that LV2s have to worry about statefulness—internally remembering what it was that they were doing—and make requests to supporting host programs to call them again, so they can finish. They need to be able to reset that state, too.
Unlike Blender plug-ins, which can get away with using Python, and GIMP plug-ins, which produce a static result, LV2s must expect to be running in parallel with a great many others of their ilk. Efficiency is more important than ever before. Additionally, while I can clearly see that their older version (LADSPA) took direct influence from the structure of GIMP plug-ins, the new version improves on that by providing TTL (pronounced "turtle") metadata, so the plug-ins don't have to be run in order to understand what they do.
There is far less documentation available for LV2 than for Blender or GIMP; but I'm experienced using them and I have a plan. To check myself, I'm even building a hurdy-gurdy in parallel, just to help me visualize the audio mathematics. By the end of the book, I should have covered a wide swath of common guitar effects, and toward the end of it I may even do something original, like imprint a JPEG as a spectrogram and play the image as a sound. (You know, the usual weirdness.)
The unfortunate part of my reality is, after finishing the GIMP book I suffered from about two weeks of cognitive fatigue! I suppose it's understandable; the new one hasn't been much different. I've already edited about half of my wording on this book so far, finding a structural or ethical problem with its presentation, and have gone well over 60,000 words in total, not counting code. This is usually how it goes, though; and two edited books written in four months is pretty extreme already. Thank God I used to be a science journalist or I would never be able to pull this off.
I know that I still need to get paperback copies of The Warrior Poet’s Guide available—that is on its way, I’m simply deferring that edit until after the fourth book. Amazon is currently offering a deal that will allow me to sell the paperback copies to people who already own the eBook for a couple of bucks each, which sounds quite reasonable. (Everyone enjoys the feel of a good paperback, for all the benefits of an eBook.) I admit I wasn’t entirely stoked about the idea at first—paper isn’t cheap and these books are built on evolving subjects—but apparently they can be printed by order. I’m still sorting it out, but they are coming, likely before 2021.
I’m debating also covering VST with it, since the two are technically inter-operable (with a little wrangling) and most DAWs can support both through the use of one. I’m not a huge fan of the details of VST’s implementation, but I do love its concept. We’ll see if that’s a good idea, or if it calls for a book of its own.
So, the site has been quiet because I’ve been working on the next book, on audio. This is likely to happen again when I get to OSL (a sort of modernized incarnation of RenderMan), but let no one say that The Warrior-Poet’s Guide is slowing down or dead! It’s all I can think about right now.
Depending on how negotiations go, I may also end up selling some T-shirt designs on Amazon. There’s also been some renewed interest in my old Patreon account, which isn’t technically attached to anything anymore, and I’m thinking about rebuilding it around the media-hacking plug-in culture. With 2019-nCoV spreading in the US, I’ll be stuck inside for a while anyway. (There have been three confirmed cases in my town as of yesterday, with who-knows-how-many unidentified cases still in the air.) So, expect me to keep busy in the studio!
Happy Media-Hacking,
—Mick