Keeping a Texture Aligned on a Sculpt-able Mesh

UVs are awesome. They keep simple 2D images (a lot of them, actually) aligned on a material, even when the associated mesh moves, even when it's animated. You can even have more than one map if you need it, and we should all be extremely grateful that they exist.

That said, sometimes they fall apart. For example, when sculpting, a task that is particularly important for humanoid models, Blender has a feature called "Dyntopo" (ZBrush's "Dynamesh" is the near-but-not-quite equivalent) that does not preserve UVs; and, looking at the math, it couldn't necessarily be trusted to produce the right UVs even if it did. With our models, we do this a lot.

The general routine is to finish sculpting (or whichever other UV-destructive task you need) before unwrapping the mesh. This isn't bad, all things considered; but for an artist's workflow it can be extremely limiting. What if we want to add another feature later, or something we have planned doesn't work out like it did in our heads? What do we do, just start with an unrealistically small polygon size?

There are idiot ways around this, and then there are smart ways. The most popular is perhaps the multiresolution modifier, which is great, but it's not the only one, and my favorite is much more subtle than that. I've rambled enough; let me show you how to pin reference points to your model (in Blender, in this case) and map a texture by displacement from them, regardless of what happens to your topology.

Adding your Control Points

Technically, you could use any object to do this. Sometimes a volume or a curve is more sensible; but, like any good systems engineer, I like to do the minimum amount of damage when I add a new feature, control points included.

In Blender, there is such an object as an "empty". It does not render, at all. You can choose among several ways of displaying it, and the only service it provides is to remember a reference point in your scene. If we were talking about GIMP or even Inkscape (where things are a bit more complicated), we would call these guides; they exist purely for the artist's sake.

Presuming that you're starting with a basic 3D humanoid, place your 3D cursor over the solar plexus. If you're working on something else, go with a rough center point of its anatomy, from which everything else can be measured. This is purely a question of what will work for the artist; but it is the point you will be measuring from, so keep it close to the root of your armature.

Suggested alignment on a humanoid model

From here, we can add an empty. In Blender, this happens with Shift+A; empties get their own category in the Add menu. The subtypes are all empties, differing only in how they're drawn: plain axes, arrows, a single straight arrow, a number of simple geometric primitives, and, if none of those work, even an image.

Plain axes are the knee-jerk choice, but for visual clarity, let's choose "Single Arrow" instead; plain axes are a little difficult to locate inside our mesh.
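If you prefer to script your scene setup, the same step is a few lines in Blender's Python console. A minimal sketch, assuming Blender 2.8x; the name "ShadeOrigin" is a placeholder of mine, not anything Blender requires:

    import bpy

    # Add a single-arrow empty at the 3D cursor.
    bpy.ops.object.empty_add(type='SINGLE_ARROW',
                             location=bpy.context.scene.cursor.location)
    empty = bpy.context.active_object
    empty.name = "ShadeOrigin"  # placeholder name, referenced later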

Start by parenting your new empty to your mesh. To do this, select the empty, then Shift-click the mesh (making it the active object), and press Ctrl+P, choosing "Object". Whatever your mesh now does, your empty will do too.
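The same parenting can be scripted; a sketch, assuming the placeholder name above plus a mesh object called "Body":

    import bpy

    empty = bpy.data.objects["ShadeOrigin"]  # placeholder names
    mesh = bpy.data.objects["Body"]

    # Make the empty a child of the mesh, keeping its current
    # world-space position (what Ctrl+P does under the hood).
    empty.parent = mesh
    empty.matrix_parent_inverse = mesh.matrix_world.inverted()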

Now, let’s discuss what the empty can tell your renderer. It’s clearly helpful as a visual guide, especially if you’re in some orthographic view or something even weirder; but what about our material?

Material Linking

Blender has a very fancy material editor, even compared to a number of pricier products. It is actually possible to link nearly any scene property, including properties of other objects, into a material, if you're careful.

We're going to be very basic about this, for the sake of the demonstration. (As a friendly note to other tutorial writers, I will say this. As much as I love to watch a good sculpt, I'm always a bit disappointed by your generic YouTube tutorial that spends half an hour addressing tangential needs, no matter how cool it can be to watch. I'm often on a deadline, which is the only reason I don't just take a course on the subject. I know you mean well and appreciate the work, but please be considerate.)

Create a new material for your mesh, and make it a simple diffuse material. (You could make it anything, but we won't need much for the core concept.) We'll begin with two nodes: a Diffuse BSDF shader, which produces a closure for diffuse lighting, and our Material Output. We will vary our color by distance from the empty.

A basic diffuse shader
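If you'd rather build the node tree in Python, here is a minimal sketch of the same two-node material; "DistanceShade" and "Body" are placeholder names of mine:

    import bpy

    mat = bpy.data.materials.new("DistanceShade")
    mat.use_nodes = True
    nt = mat.node_tree
    nt.nodes.clear()

    # The two nodes above: a diffuse closure and the material output.
    diffuse = nt.nodes.new('ShaderNodeBsdfDiffuse')
    output = nt.nodes.new('ShaderNodeOutputMaterial')
    nt.links.new(diffuse.outputs['BSDF'], output.inputs['Surface'])

    # Assign the material to our mesh.
    bpy.data.objects["Body"].data.materials.append(mat)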

To the shader's Color input, attach a Color→Hue Saturation Value node. Set its Color parameter to full red (zero green, zero blue). Saturation and Value can remain at one. We will be rotating the hue by distance from our solar plexus.
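Continuing the sketch above, with the same placeholder material name and Blender's default node names:

    import bpy

    nt = bpy.data.materials["DistanceShade"].node_tree
    diffuse = nt.nodes["Diffuse BSDF"]  # default node name

    hsv = nt.nodes.new('ShaderNodeHueSaturation')
    hsv.inputs['Color'].default_value = (1.0, 0.0, 0.0, 1.0)  # full red
    # Saturation and Value already default to 1.0.
    nt.links.new(hsv.outputs['Color'], diffuse.inputs['Color'])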

Classically, if we just wanted the relative displacement in local coordinates, one way to get it is to take the vector difference between Geometry→Position and Object Info→Location. This gives us the offset between our literal geometric coordinate and our object origin. If you aren't concerned with direction, only distance, you can just take the length of this vector. You could do a number of other things, too, but we'll be keeping it simple for now. Let's start by doing this.

A basic way to rainbow-shade based on distance from object origin
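In script form, the same wiring might look like this (continuing the earlier sketch, and assuming Blender 2.81+, where the Vector Math node gained a Length operation):

    import bpy

    nt = bpy.data.materials["DistanceShade"].node_tree
    hsv = nt.nodes["Hue Saturation Value"]  # default node name

    geometry = nt.nodes.new('ShaderNodeNewGeometry')  # Input > Geometry
    obj_info = nt.nodes.new('ShaderNodeObjectInfo')   # Input > Object Info
    subtract = nt.nodes.new('ShaderNodeVectorMath')
    subtract.operation = 'SUBTRACT'
    length = nt.nodes.new('ShaderNodeVectorMath')
    length.operation = 'LENGTH'

    # Position minus object origin, then its magnitude, drives the hue.
    nt.links.new(geometry.outputs['Position'], subtract.inputs[0])
    nt.links.new(obj_info.outputs['Location'], subtract.inputs[1])
    nt.links.new(subtract.outputs['Vector'], length.inputs[0])
    nt.links.new(length.outputs['Value'], hsv.inputs['Hue'])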

You'll see with this shader that your actor now has a rainbow hue from the hue rotation, centered on its origin.

In my case, the origin is near the feet, so the hue rotation varies mostly with the z-coordinate. This isn't quite what we want, but it's a start. We're aiming for our external empty, aren't we?

The only problem with Object Info is that it provides info on the shaded object and no other. However, a handy feature comes to the rescue, one related to the fact that Blender is, before anything else, a Python IDE by design: drivers.

Replace your Object Info node with a Converter→Combine XYZ node. This, incidentally, is a sort of Value node, but for vectors. We're going to tie it to our empty's location.

Our Final Example Shader
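Scripted, the swap is small (continuing the same hypothetical node tree; note that the Subtract node kept Blender's default name, "Vector Math"):

    import bpy

    nt = bpy.data.materials["DistanceShade"].node_tree
    nt.nodes.remove(nt.nodes["Object Info"])

    # Combine XYZ stands in for Object Info > Location; its three
    # scalar sockets are what we'll attach drivers to next.
    combine = nt.nodes.new('ShaderNodeCombineXYZ')
    subtract = nt.nodes["Vector Math"]  # the Subtract node from before
    nt.links.new(combine.outputs['Vector'], subtract.inputs[1])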

Drivers

Drivers are a severely (severely) under-recognized feature of Blender. They allow you to take nearly any property of your scene and bind it to some other property. You could change glossiness by location, for example, or a material by light exposure. They're seriously not talked about enough. We'll be using one here.

Add Driver option

After replacing your Object Info node with a Combine XYZ node, hover over the X-coordinate and press Ctrl+D. (Or, if you've mucked with your default keymap and Ctrl+D no longer works, right-click it and select Add Driver.) This opens up an editor that provides no shortage of cool possibilities.

There are a few things I would like to highlight about it, but I do encourage you to play with it, break it, fix it, and learn how it works inside and out on your own time. But for now…

There are several types of drivers, selectable immediately under the settings. They carry out understandable, frequently used computations (averages, sums, minimums, maximums); but all are effectively subsets of the default type, the scripted expression. We'll be using that.

The expression itself takes any number of declared variables and evaluates a Python expression over them. (You can't drop an entire text-editor script into the field itself, but this is literal Python, and there is a hook for registering your own functions; see the aside below. Our needs are more basic.)

The default expression is var (the default variable, which we can link to other elements in our scene) plus zero… for some reason. That's an identity operation; I don't know why they did it, but I'm guessing it's for illustrative purposes. This is, fortunately, all we need, but remember that sqrt(var), pow(var, 3.5), and var/12.0 + pi are all perfectly valid if they serve your needs… just make sure it's deterministic, interpretable Python.
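As that aside: if you ever need more than a one-liner, Blender lets you register your own functions for driver expressions through bpy.app.driver_namespace. A sketch; "falloff" is a hypothetical helper of mine, not a built-in:

    import bpy

    def falloff(d):
        # Squash a raw distance into the 0-1 range a hue rotation expects.
        return min(max(d / 4.0, 0.0), 1.0)

    # Registered names become callable from any driver expression,
    # e.g. "falloff(var)". Scripted drivers require Auto Run to be enabled.
    bpy.app.driver_namespace["falloff"] = falloff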

Beneath this expression, we have a list of our "input variables", which are the facets of our scene that we want to use. Initially, var will be colored bright red in its Object socket, because leaving that field blank resolves to total nonsense for Blender; it will tell you as much right there in the panel.

That isn't strictly true; there's nothing wrong with the default expression. However, it doesn't know where to pull the data from, which is just as problematic. To change this, click on that Object box under var's definition and select our empty. The channel can remain "X Location" and the space "World Space".

You should now have a driver-bound coordinate in your Combine XYZ node. Try moving your empty around on the X-axis, and watch what happens to the node when you place it. We can do the same for the Y-axis and the Z-axis, making sure to select “Y Location” and “Z Location” when we do it, and we’ll have our model shaded relative to that point.
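The whole procedure can also be automated. A sketch of all three drivers at once, assuming the placeholder names from the earlier snippets:

    import bpy

    empty = bpy.data.objects["ShadeOrigin"]
    nt = bpy.data.materials["DistanceShade"].node_tree
    combine = nt.nodes["Combine XYZ"]

    for i, channel in enumerate(('LOC_X', 'LOC_Y', 'LOC_Z')):
        drv = combine.inputs[i].driver_add("default_value").driver
        drv.type = 'SCRIPTED'
        drv.expression = "var"
        var = drv.variables.new()   # named "var" by default
        var.type = 'TRANSFORMS'     # a transform-channel variable
        var.targets[0].id = empty
        var.targets[0].transform_type = channel
        var.targets[0].transform_space = 'WORLD_SPACE'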

Take a look at your model now, and for a little fun, drag the empty around in 3-space.

Results!

Just like that, with the appropriate use of drivers, you can connect any object to a shader. Go ahead and add Dyntopo sculpting to it if you would like; since there's no UV map involved, it won't affect our shader.

That said, this is not the only way to do this or even always the best way. Once an artist is absolutely certain that sculpting is done and topology is fixed, it’s still a good idea for them to replace these with UV maps. Mapping, in general, can do a lot to make any model look more realistic.

However, in cases where you might be layering multiple shaders over each other in the same material, this is often the best way to do it. I am personally working on a professional model of the ancient Babylonian monster-god of fresh waters, Absu, and needed to add both a visible poison pulse and golden mud melting off of him; I could come up with nothing better for placing multiple features than drivers and empties.

It’s for a syndication, but as I have limited processing power (here’s a little secret… nobody has infinite processing power or time, not even at major studios) and am working on my next book at the same time, I needed something quick and reliable.

Some day I might write a text on drivers by themselves; but I’ve still got a great deal to learn about them myself. They’re the easiest way to add a major feature to Blender without diving into the actual C++.

I hope this method becomes more popular as I don’t meet a lot of other modelers who make use of it, and I’ve yet to bump into a serious problem with having it in my model source. Perhaps it will help all you guys out too.

As a full disclosure, the particular humanoid model used in this tutorial was done with MBLab, an add-on produced for creating humanoid armatures and associated meshes in Blender. It’s open and it’s awesome. I don’t personally usually use it, as I feel it’s too all-encompassing to be any good for me when I’m doing weird stuff with my models; but it’s still great for pounding out an idea for basic testing, or getting something ready-to-go for a tutorial. Needless to say, all vertex groups, constructive modifiers, and so forth were removed before the tutorial itself.

(It is definitely not even close to my Absu mesh. I’ve been working for two weeks on that guy with nothing but virtual clay and a graphics pad. He is humongous, terrible, glorious, and monstrous, and should probably be shared here when I’m done!)

Sculpting Example, post-shader. (Sorry. I just finished playing Doom Eternal, which was weirdly therapeutic; and it’s still kind of with me.)

