Triplanar Projection
Purpose
Triplanar texturing drops UVs entirely — it samples the same texture three times from the X, Y, and Z world axes, then blends them by which way the surface is facing, so any mesh of any shape can be textured without unwrapping.
Key insight
UVs are a flat map for a 3D surface — every artist who's unwrapped a cliff or a procedurally-generated mesh knows it's a battle of stretching and seams. Triplanar sidesteps the whole problem. Project the texture as if it were sprayed from three orthogonal directions: X-facing walls get a YZ projection, Y-facing floors get an XZ projection, Z-facing faces get an XY projection. Each pixel blends the three samples weighted by how much its normal points along each axis.
An upward-facing floor is almost entirely the XZ projection. A vertical wall is almost entirely one of the two side projections (YZ or XY, depending on which way it faces). A 45° slope is a blend of two. The blend-crispness dial controls how abrupt that transition is — low gives smooth crossfades but muddy corners; high gives clean separation but visible pinching where two projections meet.
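The weight math is easy to sanity-check on the CPU before touching a shader. A minimal Python sketch (the function name is hypothetical; it mirrors the shader's pow-then-normalize step) for the floor and slope cases above:

```python
import math

def triplanar_weights(normal, crispness):
    # Raise |n| componentwise to the crispness exponent, then divide by the
    # sum so the three weights total 1 — same as the shader's pow + divide.
    w = [abs(c) ** crispness for c in normal]
    total = sum(w)
    return [c / total for c in w]

# Upward-facing floor: almost entirely the XZ (Y-axis) projection.
print(triplanar_weights((0.0, 1.0, 0.0), 4.0))  # [0.0, 1.0, 0.0]

# 45° slope between the Y and Z axes: an even blend of two projections.
s = math.sqrt(0.5)
print(triplanar_weights((0.0, s, s), 4.0))  # ≈ [0, 0.5, 0.5]
```

The weights always sum to 1, so the blended albedo never brightens or darkens as the normal swings between axes.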
This is the terrain technique: rocky planets, cliffsides, voxel output, marching-cubes surfaces. Any mesh where unwrapping is insane.
Break it
Blend-crispness to 16 on the sphere. Three clear regions emerge like the Mercedes logo, with pinches at the axes where two projections meet head-on. Teaches: the limit case of "always prefer one projection" is a visible three-way split. The sweet spot for most art is around 4–8, where the dominant projection wins but corners still crossfade.
Blend-crispness to 1. On the cube each face still reads clearly, but on the sphere the three samples crossfade so softly that corners go muddy. Teaches: triplanar always trades corner crispness against seam visibility. There's no free lunch — this is the one real knob, and it's a continuum.
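The trade-off is easy to see numerically. A small Python check of the same weight formula (the sample normal is a made-up point near a sphere corner, chosen to lean slightly toward +Y):

```python
def triplanar_weights(normal, crispness):
    # Same pow-then-normalize as the shader's blend-weight step.
    w = [abs(c) ** crispness for c in normal]
    total = sum(w)
    return [c / total for c in w]

# Roughly unit-length normal near a corner, leaning slightly toward +Y.
n = (0.55, 0.63, 0.55)

lo = triplanar_weights(n, 1.0)   # soft crossfade: all three contribute
hi = triplanar_weights(n, 16.0)  # near winner-take-all: Y dominates

print(lo)  # Y weight ≈ 0.36 — muddy three-way mix
print(hi)  # Y weight ≈ 0.81 — crisp, but neighbors pinch to almost nothing
```

At crispness 1 the three weights are nearly equal (mud); at 16 the slightly-dominant axis takes over (pinch). Everything in between is the same formula, which is why it's one continuous knob.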
Direct Claude
// VERTEX — pass object-space position + normal (so texture sticks to
// each mesh as it rotates). In production terrain this is world-space
// instead, but the triplanar concept is identical either way.
varying vec3 vObjPos;
varying vec3 vObjNormal;

void main() {
    vObjPos = position;
    vObjNormal = normal;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
// FRAGMENT — three samples, blended by which axis the normal points along
uniform sampler2D uTex;
uniform float uScale;     // texture tiling scale
uniform float uCrispness; // exponent on the blend weights
uniform vec3 uLightDir;
varying vec3 vObjPos;
varying vec3 vObjNormal;

void main() {
    vec3 n = normalize(vObjNormal);
    // three planar UV sets, one per axis
    vec2 uvX = vObjPos.yz * uScale; // project along X onto the YZ plane
    vec2 uvY = vObjPos.xz * uScale; // project along Y onto the XZ plane
    vec2 uvZ = vObjPos.xy * uScale; // project along Z onto the XY plane
    vec3 sX = texture2D(uTex, uvX).rgb;
    vec3 sY = texture2D(uTex, uvY).rgb;
    vec3 sZ = texture2D(uTex, uvZ).rgb;
    // blend weights: how strongly the normal points along each axis,
    // raised to a power to sharpen the winner, then normalized to sum to 1
    vec3 w = pow(abs(n), vec3(uCrispness));
    w /= (w.x + w.y + w.z);
    vec3 albedo = sX * w.x + sY * w.y + sZ * w.z;
    // simple Lambert so the 3D surface reads; normalize the light uniform
    // defensively, and keep ambient + diffuse <= 1 so highlights don't clip
    float diffuse = max(0.0, dot(n, normalize(uLightDir)));
    gl_FragColor = vec4(albedo * (0.2 + 0.8 * diffuse), 1.0);
}