Raymarching with SDFs
Purpose
A scene doesn't have to be a pile of triangles. It can be a function that answers one question: how far is the nearest surface from here? Rendering it is then, for each pixel, stepping forward by that distance, over and over, until you hit something.
Key insight
An SDF (signed distance function) takes a 3D point and returns a number: the distance to the nearest surface. Negative if you're inside it. A sphere's SDF is length(point - center) - radius. A cube's is a few lines of math. Combine shapes with min (union), max (intersection), and max(a, -b) (subtraction: carve shape b out of shape a).
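Those combinators are small enough to check by hand. A minimal sketch in plain Python (the shader math ports one-to-one; the names sd_sphere, union, etc. are illustrative, not from the shader):

```python
import math

def sd_sphere(p, center, r):
    # Distance from point p to a sphere's surface: |p - center| - r.
    # Negative when p is inside the sphere.
    return math.dist(p, center) - r

def union(a, b):        return min(a, b)   # nearer of the two surfaces
def intersection(a, b): return max(a, b)   # must be inside both
def subtraction(a, b):  return max(a, -b)  # a with b carved out

# A point 3 units from the origin, unit sphere at the origin:
p = (3.0, 0.0, 0.0)
d = sd_sphere(p, (0.0, 0.0, 0.0), 1.0)
print(d)  # 2.0: safe to march 2 units toward the scene
```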
The rendering algorithm — called sphere tracing — is elegant. For each pixel, cast a ray from the camera. Evaluate the SDF at the ray's origin. It says "nearest surface is 2 units away." You know it's safe to jump 2 units forward. Evaluate again: "0.5 units." Jump. Evaluate: "0.02 units." Close enough — call it a hit, shade the surface, stop. If after many steps you haven't hit anything, call it a miss and draw the sky. That's the entire algorithm.
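That loop fits in a few lines. A Python sketch tracing a single ray at a unit sphere three units down the z axis (the scene, tolerance, and step cap here are illustrative, not the shader's values):

```python
import math

def scene(p):
    # One unit sphere centered at (0, 0, -3).
    return math.dist(p, (0.0, 0.0, -3.0)) - 1.0

def sphere_trace(ro, rd, max_steps=64, eps=0.001, t_max=20.0):
    t = 0.0
    for _ in range(max_steps):
        p = (ro[0] + rd[0] * t, ro[1] + rd[1] * t, ro[2] + rd[2] * t)
        d = scene(p)
        if d < eps:       # close enough: call it a hit
            return t
        t += d            # safe to jump forward by the SDF value
        if t > t_max:     # marched past the scene: miss
            break
    return None

t = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0))
print(t)  # 2.0: the ray hits the sphere's near surface
```

A ray pointed away from the sphere (say, straight up) overshoots t_max in a handful of steps and returns None: that's the miss branch that draws the sky.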
No meshes, no UVs, no triangles — just a function and a loop per pixel. The three-layer pipeline collapses: the "coordinate" is the moving 3D ray position, the "signal" is the distance the SDF returns, the "color" is computed after a hit from a normal derived from the SDF itself.
Break it
Pull max-steps down to 16. Near shapes still render correctly — their rays hit fast, well under the step budget. But edges and glancing rays develop phantom halos where the march gave up too early and marked the pixel as a miss. Teaches: raymarching is iterative, not closed-form. Quality costs steps; complex or glancing views cost more steps. That's why raymarched scenes have a performance budget tied to scene distance and shape intersection. Crank steps back up and the phantoms vanish.
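You can measure that budget directly. A hypothetical Python sketch counting iterations for a head-on ray versus one skimming just past the same sphere (the scene and numbers are illustrative):

```python
import math

def scene(p):
    # One unit sphere centered at (0, 0, -3).
    return math.dist(p, (0.0, 0.0, -3.0)) - 1.0

def steps_to_resolve(ro, rd, max_steps=96, eps=0.001, t_max=20.0):
    t = 0.0
    for i in range(max_steps):
        p = (ro[0] + rd[0] * t, ro[1] + rd[1] * t, ro[2] + rd[2] * t)
        d = scene(p)
        if d < eps:
            return i + 1      # resolved: hit
        t += d
        if t > t_max:
            return i + 1      # resolved: clean miss
    return max_steps          # budget exhausted: phantom miss

head_on = steps_to_resolve((0.0, 0.0, 0.0), (0.0, 0.0, -1.0))
# This ray misses the sphere by roughly a tenth of a unit, so near the
# tangent point every step is tiny and the march creeps along.
n = math.hypot(1.2, 3.0)
grazing = steps_to_resolve((0.0, 0.0, 0.0), (1.2 / n, 0.0, -3.0 / n))
print(head_on, grazing)  # head-on resolves in a couple of steps; grazing takes far more
```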
Drop blend-softness to 0. The sphere and cube look glued together with a hard seam. Nudge up to 0.3 — a smooth neck of molten metal grows between them. Push past 0.7 and they fuse into a single lumpy blob. The geometry itself reshapes from a slider — not the lighting, not the color, the actual surface. That's not possible with L24 (you'd reauthor the mesh) and impossible with L26 (flat quad). This is the "why use raymarching" answer.
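The smin behind that slider is small enough to check numerically. A Python transcription of the shader's polynomial smooth-min (sample values are illustrative):

```python
def smin(a, b, k):
    # Polynomial smooth-min (Inigo Quilez). k = blend softness.
    h = max(0.0, min(1.0, 0.5 + 0.5 * (b - a) / max(k, 1e-4)))
    return b * (1.0 - h) + a * h - k * h * (1.0 - h)

# k = 0 degenerates to the hard min: a sharp crease where shapes meet.
print(smin(0.3, 0.5, 0.0))  # 0.3
# With k = 0.3 the result dips below both inputs near the crossover;
# that dip is exactly what grows the "molten neck" between the shapes.
print(smin(0.3, 0.5, 0.3))  # less than 0.3
```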
Direct Claude
// FRAGMENT STAGE — the whole pipeline collapses into this one loop.
// The "coordinate" is the ray position p as it marches; the "signal" is
// the distance sdf(p) returns; the "color" is derived from the SDF normal
// after the march hits something.
precision mediump float;
uniform vec2 u_resolution;  // viewport size in pixels (needed for the uv below)
uniform float u_blend;      // blend-softness passed to smin
uniform float u_maxSteps;   // march step budget
// Sphere SDF: distance to a sphere of radius r at origin.
float sdSphere(vec3 p, float r) { return length(p) - r; }
// Cube (box) SDF: Iñigo Quílez's formula for an axis-aligned box.
float sdBox(vec3 p, vec3 b) {
    vec3 q = abs(p) - b;
    return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}
// Smooth-min (Iñigo Quílez). k = blend-softness. k=0 → hard min.
// Returns a smoothly-interpolated distance so SDF surfaces merge like clay.
float smin(float a, float b, float k) {
    float h = clamp(0.5 + 0.5 * (b - a) / max(k, 0.0001), 0.0, 1.0);
    return mix(b, a, h) - k * h * (1.0 - h);
}
// Scene: a sphere and a cube, offset slightly so they overlap.
float scene(vec3 p) {
    float s = sdSphere(p - vec3(-0.55, 0.0, 0.0), 0.8);
    float c = sdBox(p - vec3(0.55, 0.0, 0.0), vec3(0.6));
    return smin(s, c, u_blend);
}
// Normal from SDF gradient — finite differences around the hit point.
vec3 getNormal(vec3 p) {
    vec2 e = vec2(0.001, 0.0);
    return normalize(vec3(
        scene(p + e.xyy) - scene(p - e.xyy),
        scene(p + e.yxy) - scene(p - e.yxy),
        scene(p + e.yyx) - scene(p - e.yyx)
    ));
}
void main() {
    vec2 uv = (gl_FragCoord.xy - 0.5 * u_resolution) / min(u_resolution.x, u_resolution.y);
    // Ray setup: camera at (0, 0.4, 3.2), looking down -z; 1.4 acts as the focal length.
    vec3 ro = vec3(0.0, 0.4, 3.2);
    vec3 rd = normalize(vec3(uv, -1.4));
    // Sphere-tracing loop: jump forward by the SDF each step. GLSL ES needs a
    // constant loop bound, so cap at 96 and break early at u_maxSteps.
    float t = 0.0;
    bool hit = false;
    int steps = int(u_maxSteps);
    for (int i = 0; i < 96; i++) {
        if (i >= steps) break;
        vec3 p = ro + rd * t;
        float d = scene(p);
        if (d < 0.001) { hit = true; break; }
        t += d;
        if (t > 12.0) break; // marched past the scene: miss
    }
    vec3 col = vec3(0.07, 0.06, 0.10); // miss = sky
    if (hit) {
        vec3 p = ro + rd * t;
        vec3 n = getNormal(p);
        vec3 lightDir = normalize(vec3(0.5, 0.9, 0.3));
        float diff = max(0.0, dot(n, lightDir));           // Lambert diffuse
        float rim = pow(1.0 - max(0.0, dot(n, -rd)), 2.0); // rim light at grazing angles
        col = vec3(0.75, 0.55, 0.42) * (0.2 + 0.85 * diff) + vec3(0.4, 0.55, 0.9) * rim * 0.35;
    }
    gl_FragColor = vec4(col, 1.0);
}