Author

Steve Trettel

Published

January 2026

1 Day 4: Simulation

Today we learn how shaders can read from images — and eventually, from themselves. This lets us simulate things that evolve over time, like the Game of Life or these waves trapped in the Mandelbrot set.

Click to add ripples.

1.1 Textures

So far, our shaders have generated everything procedurally — every color computed from coordinates and math. But shaders can also read from images. In Shadertoy, we load images into channels that our shader can sample.

To load a texture:

  1. Click iChannel0 at the bottom of the editor
  2. Select the “Textures” tab
  3. Choose an image (try “Abstract 1” or any photograph)

Now iChannel0 holds your image, and you can read from it in your shader.

Displaying a Texture

The simplest thing: read the texture and display it.

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;
    fragColor = texture(iChannel0, uv);
}

The texture function takes a channel and UV coordinates — coordinates normalized to the range [0, 1]. We compute uv by dividing pixel position by resolution: bottom-left is (0, 0), top-right is (1, 1).

Why normalized coordinates? They’re resolution-independent. A point at UV (0.5, 0.5) is always the center of the image, whether the image is 256×256 or 4096×2048.
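The normalization is just a division, but it is worth seeing why it makes sampling resolution-independent. A quick CPU sketch in Python (the function name is mine, not Shadertoy's):

```python
def to_uv(frag_coord, resolution):
    """uv = fragCoord / iResolution.xy, as in the shader."""
    return (frag_coord[0] / resolution[0], frag_coord[1] / resolution[1])

# The same UV names the same relative position at any resolution:
print(to_uv((128.0, 128.0), (256.0, 256.0)))      # -> (0.5, 0.5)
print(to_uv((2048.0, 1024.0), (4096.0, 2048.0)))  # -> (0.5, 0.5)
```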

Note: texture vs texelFetch in GLSL

GLSL gives us two ways to read from images:

  • texture(channel, uv) — Takes normalized UV coordinates [0, 1]. Interpolates between pixels for smooth results.
  • texelFetch(channel, ivec2(x, y), 0) — Takes exact integer pixel coordinates. Returns that precise pixel, no interpolation. The third argument is the mipmap level (use 0).

Use texture when you want smooth sampling at arbitrary positions. Use texelFetch when you need exact pixel values — like reading simulation state from a buffer.
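To see what texture's interpolation actually does, here is a rough CPU sketch of bilinear sampling next to an exact fetch, in Python. This is a simplification — real GPU sampling also handles pixel centers, wrap modes, and mipmaps — and the function names are mine:

```python
def texel_fetch(img, x, y):
    """Exact pixel read, like texelFetch(channel, ivec2(x, y), 0)."""
    return img[y][x]

def sample_bilinear(img, u, v):
    """Roughly what texture() does: blend the 4 pixels around (u, v)."""
    h, w = len(img), len(img[0])
    x, y = u * (w - 1), v * (h - 1)          # simplified UV-to-pixel mapping
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0.0, 1.0],
       [0.0, 1.0]]
print(texel_fetch(img, 0, 0))           # exactly 0.0
print(sample_bilinear(img, 0.5, 0.5))   # halfway between the columns: 0.5
```

Sampling between the two columns returns a blend — exactly the smoothing you see when a distorted texture lands between pixels.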

Pixel Coordinates with texelFetch

We can also read using pixel coordinates directly:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    fragColor = texelFetch(iChannel0, ivec2(fragCoord), 0);
}

This reads the pixel at exactly fragCoord — no interpolation, no coordinate conversion. The result looks the same, but the texture won’t scale smoothly if the shader resolution differs from the image resolution.

1.2 Image Distortion

Here’s where it gets fun. The coordinates we sample don’t have to match where we’re writing. If we modify the UV coordinates before sampling, we distort the image.

Flipping

Flip horizontally by inverting the x coordinate:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;
    uv.x = 1.0 - uv.x;  // Mirror horizontally
    fragColor = texture(iChannel0, uv);
}

Each pixel asks: “What color should I be?” And we answer: “Whatever color is on the opposite side of the image.”

Wavy Distortion

Add a sine wave to the coordinates:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;
    
    // Offset x based on y position and time
    uv.x += 0.03 * sin(uv.y * 20.0 + iTime * 2.0);
    
    fragColor = texture(iChannel0, uv);
}

The distortion ripples through the image. Notice how texture smoothly interpolates — we’re sampling at positions between pixels, and it blends them together.

Swirl

Rotate coordinates around the center, with the rotation angle depending on distance:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;
    vec2 center = vec2(0.5);
    
    vec2 offset = uv - center;
    float dist = length(offset);
    float angle = atan(offset.y, offset.x);
    
    // Rotate more near the center
    float swirl = 2.0 * exp(-dist * 3.0);
    angle += swirl;
    
    uv = center + dist * vec2(cos(angle), sin(angle));
    fragColor = texture(iChannel0, uv);
}

The center twists while the edges stay mostly fixed.

Magnifying Glass

Sample from a smaller region around the mouse to create a zoom effect:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;
    vec2 mouse = iMouse.xy / iResolution.xy;
    
    float dist = length(uv - mouse);
    float radius = 0.15;
    
    if (dist < radius) {
        // Inside lens: sample closer to mouse position (zoom in)
        float zoom = 2.0;
        uv = mouse + (uv - mouse) / zoom;
    }
    
    fragColor = texture(iChannel0, uv);
}

Click and drag to move the lens. Inside the circle, we sample from a compressed region — making things appear larger.

Tip: The Key Insight

Distortion shaders work by lying about coordinates. Each pixel asks the texture “what color is here?” — but “here” can be anywhere we want. Move “here” in interesting ways, and the image warps.

1.3 Buffers

Let’s make a painting program. When you click and drag, you leave a mark. Simple enough:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    float d = length(fragCoord - iMouse.xy);
    
    if (iMouse.z > 0.0 && d < 10.0) {
        fragColor = vec4(1.0);  // White brush
    } else {
        fragColor = vec4(0.0, 0.0, 0.0, 1.0);  // Black background
    }
}

Try it. You get a white dot that follows the mouse — but it doesn’t leave a trail. Each frame, we redraw from scratch: white near the mouse, black everywhere else. The shader has no memory.

What we want: pixels that stay white after we paint them. Each frame, we need to ask “was this pixel painted before?” But we can’t — last frame’s output is gone.

Unless we save it.

A buffer is a shader that writes to a stored image instead of the screen. That image persists between frames, and any shader — including the buffer itself — can read from it via iChannel0. Same sampling functions you just learned, but now pointing at your own previous output.

Think about what this means: the buffer reads last frame’s image, decides what to change, and writes the new image. Next frame, it reads that image. It’s a feedback loop — and feedback loops can remember.

Setting Up a Buffer

In Shadertoy, we get four buffers (A through D) plus the Image shader. To create a buffer that sees itself:

  1. Click the + tab next to “Image” and select “Buffer A”
  2. In Buffer A, click iChannel0 and select “Buffer A” from the Misc tab
  3. In Image, click iChannel0 and select “Buffer A”

Now Buffer A reads its own previous frame, and Image displays whatever Buffer A produces.

The Painting Program, Fixed

Buffer A:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Read what was here last frame
    vec4 prev = texelFetch(iChannel0, ivec2(fragCoord), 0);
    
    float d = length(fragCoord - iMouse.xy);
    
    if (iMouse.z > 0.0 && d < 10.0) {
        fragColor = vec4(1.0);  // Paint white
    } else {
        fragColor = prev;       // Keep what was there
    }
}

Image:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    fragColor = texelFetch(iChannel0, ivec2(fragCoord), 0);
}

Click and drag to draw. The marks stay.

Each frame, every pixel reads what it wrote last frame. Most pixels just write that value back — nothing changes. But pixels near the mouse write white instead. And next frame, they read that white and keep it.

We use texelFetch because we want exact pixel values, no interpolation. When preserving state across frames, precision matters.

Now let’s make the paint fade:

vec4 prev = texelFetch(iChannel0, ivec2(fragCoord), 0);
prev *= 0.99;  // Fade toward black

Each frame, every pixel dims slightly. Draw fast enough and you get bright trails; stop and they fade away.
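This fade is exponential decay: after n frames a painted pixel has brightness 0.99^n. A quick check of how long a mark survives (plain Python, just arithmetic):

```python
# Multiply by 0.99 each frame until the pixel drops below one 8-bit
# step (1/255), i.e. until it is effectively black on screen.
brightness = 1.0
frames = 0
while brightness > 1.0 / 255.0:
    brightness *= 0.99
    frames += 1
print(frames)   # -> 552 frames, roughly 9 seconds at 60 fps
```

Changing the 0.99 changes the trail length dramatically: 0.9 fades in under a second, 0.999 lingers for a minute and a half.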

Tip: The Rewind Button

When working with buffers, the rewind button (⏮) resets iFrame to 0 and clears the buffer. If you modify your shader and the display looks wrong, hit rewind to start fresh.

1.4 Game of Life

Game of Life is a cellular automaton — a grid of cells that update simultaneously based on their neighbors. This is a natural fit for shaders: every pixel runs in parallel, reads from neighbors, writes a new value.

Each cell is either alive or dead, and its fate depends on its eight neighbors:

  1. Underpopulation: A live cell with fewer than 2 neighbors dies
  2. Survival: A live cell with 2 or 3 neighbors survives
  3. Overpopulation: A live cell with more than 3 neighbors dies
  4. Reproduction: A dead cell with exactly 3 neighbors becomes alive
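The four rules collapse to one condition: a cell is alive next generation if it has exactly 3 live neighbors, or if it is alive and has exactly 2. A reference version in plain Python (helper names are mine), checked against the classic "blinker" oscillator:

```python
def next_state(alive, neighbors):
    """Conway's rules: birth on exactly 3 neighbors, survival on 2 or 3."""
    return neighbors == 3 or (alive and neighbors == 2)

def step(grid):
    """Apply the rules to every cell at once (wrapping at the edges)."""
    h, w = len(grid), len(grid[0])
    count = lambda x, y: sum(grid[(y + dy) % h][(x + dx) % w]
                             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                             if (dx, dy) != (0, 0))
    return [[next_state(grid[y][x], count(x, y)) for x in range(w)]
            for y in range(h)]

# A blinker: three cells in a row flip between horizontal and vertical.
blinker = [[0,0,0,0,0],
           [0,0,0,0,0],
           [0,1,1,1,0],
           [0,0,0,0,0],
           [0,0,0,0,0]]
assert step(step(blinker)) == blinker   # period-2 oscillator
```

The shader below implements exactly this logic, with the grid stored in a buffer and every cell updated in parallel.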

Reading Neighbors

In the painting program, each pixel only looked at itself. Now we need the eight surrounding pixels:

ivec2 p = ivec2(fragCoord);

float self = texelFetch(iChannel0, p, 0).r;
float neighbors = 
    texelFetch(iChannel0, p + ivec2(-1, -1), 0).r +
    texelFetch(iChannel0, p + ivec2( 0, -1), 0).r +
    texelFetch(iChannel0, p + ivec2( 1, -1), 0).r +
    texelFetch(iChannel0, p + ivec2(-1,  0), 0).r +
    texelFetch(iChannel0, p + ivec2( 1,  0), 0).r +
    texelFetch(iChannel0, p + ivec2(-1,  1), 0).r +
    texelFetch(iChannel0, p + ivec2( 0,  1), 0).r +
    texelFetch(iChannel0, p + ivec2( 1,  1), 0).r;

Each cell stores 1.0 (alive) or 0.0 (dead), so the sum counts live neighbors.

Initialization

We need to seed the grid. On the first frame, we write random values instead of reading from the buffer:

float hash(vec2 p) {
    p = fract(p * vec2(234.34, 435.345));
    p += dot(p, p + 34.23);
    return fract(p.x * p.y);
}

// in mainImage:
if (iFrame == 0) {
    float random = hash(fragCoord);
    fragColor = vec4(step(0.5, random));
    return;
}
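The hash can be sanity-checked on the CPU. A direct Python port (GLSL's fract becomes `x % 1.0`; the GPU's float32 arithmetic will give slightly different digits than Python's float64, so only the output range is meaningful here):

```python
def fract(x):
    return x % 1.0

def hash2(px, py):
    """Python port of the GLSL hash: scramble a coordinate pair into [0, 1)."""
    px, py = fract(px * 234.34), fract(py * 435.345)
    d = px * (px + 34.23) + py * (py + 34.23)   # dot(p, p + 34.23)
    px, py = px + d, py + d
    return fract(px * py)

vals = [hash2(float(x), float(y)) for x in range(20) for y in range(20)]
assert all(0.0 <= v < 1.0 for v in vals)
```

Thresholding with step(0.5, random) then seeds each cell alive or dead, roughly half and half.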

The Complete Shader

Buffer A:

float hash(vec2 p) {
    p = fract(p * vec2(234.34, 435.345));
    p += dot(p, p + 34.23);
    return fract(p.x * p.y);
}

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    if (iFrame == 0) {
        float random = hash(fragCoord);
        fragColor = vec4(step(0.5, random));
        return;
    }
    
    ivec2 p = ivec2(fragCoord);
    
    float self = texelFetch(iChannel0, p, 0).r;
    float neighbors = 
        texelFetch(iChannel0, p + ivec2(-1, -1), 0).r +
        texelFetch(iChannel0, p + ivec2( 0, -1), 0).r +
        texelFetch(iChannel0, p + ivec2( 1, -1), 0).r +
        texelFetch(iChannel0, p + ivec2(-1,  0), 0).r +
        texelFetch(iChannel0, p + ivec2( 1,  0), 0).r +
        texelFetch(iChannel0, p + ivec2(-1,  1), 0).r +
        texelFetch(iChannel0, p + ivec2( 0,  1), 0).r +
        texelFetch(iChannel0, p + ivec2( 1,  1), 0).r;
    
    float alive = 0.0;
    if (self == 1.0) {  // exact comparison is safe: texelFetch returns stored values unchanged
        if (neighbors == 2.0 || neighbors == 3.0) {
            alive = 1.0;
        }
    } else {
        if (neighbors == 3.0) {
            alive = 1.0;
        }
    }
    
    fragColor = vec4(vec3(alive), 1.0);
}

Image:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    fragColor = texelFetch(iChannel0, ivec2(fragCoord), 0);
}

Buffer A does the simulation: read the previous generation, apply the rules, write the next generation. Image just displays whatever Buffer A computed.

1.5 The Wave Equation

The wave equation describes propagating disturbances:

\[\frac{\partial^2 u}{\partial t^2} = c^2 \Delta u\]

This is second-order in time, so we need two pieces of initial data: the displacement \(u\) and its velocity \(v = \frac{\partial u}{\partial t}\). We rewrite as a coupled first-order system:

\[\frac{\partial u}{\partial t} = v\] \[\frac{\partial v}{\partial t} = c^2 \Delta u\]

Displacement changes according to velocity; velocity changes according to the Laplacian of displacement.

Why Buffers?

Each timestep, we need to:

  1. Read the current \(u\) and \(v\) at every grid point
  2. Compute the Laplacian of \(u\) from neighboring values
  3. Update both \(u\) and \(v\)
  4. Write the new state

This is the same pattern as Game of Life — read current state, compute update from neighbors, write new state. The buffer holds our grid; each frame advances one timestep.

We need to store two numbers per pixel: \(u\) and \(v\). A vec4 has four channels (red, green, blue, alpha), so we put displacement in red and velocity in green.

The Discrete Laplacian

We approximate the Laplacian \(\Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\) with finite differences. With grid spacing \(h\):

\[\frac{\partial^2 u}{\partial x^2} \approx \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2}\] \[\frac{\partial^2 u}{\partial y^2} \approx \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{h^2}\]

Our grid spacing is one pixel, so \(h = 1\) and the denominators drop out.

Adding these:

\[\Delta u \approx u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j}\]
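This five-point stencil is easy to verify in NumPy: for u = x² + y² the true Laplacian is 4 everywhere, and with h = 1 the stencil recovers it exactly at interior points (np.roll wraps at the edges, so only the interior is meaningful):

```python
import numpy as np

def laplacian(u):
    """Five-point stencil: sum of the 4 neighbors minus 4 times the center (h = 1)."""
    return (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
            np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)

y, x = np.mgrid[0:16, 0:16].astype(float)
u = x**2 + y**2                # analytic Laplacian is 4 everywhere
L = laplacian(u)
print(L[5:8, 5:8])             # interior values are exactly 4.0
```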

Time Stepping

We need to advance the system in time. Forward Euler is the obvious choice:

\[u^{n+1} = u^n + dt \cdot v^n\] \[v^{n+1} = v^n + dt \cdot c^2 \cdot \Delta u^n\]

But forward Euler is unstable for oscillatory systems — the wave will blow up. The problem is that both updates use old values, so energy drifts.

Symplectic Euler fixes this by updating velocity first, then using the new velocity to update position:

\[v^{n+1} = v^n + dt \cdot c^2 \cdot \Delta u^n\] \[u^{n+1} = u^n + dt \cdot v^{n+1}\]

This preserves the symplectic structure of the underlying Hamiltonian system, so the energy stays bounded — oscillating slightly around its true value — instead of drifting steadily upward.
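The difference is easiest to see on a single harmonic oscillator, u'' = -u, which is what each Fourier mode of the wave equation looks like. A minimal sketch comparing the two update orders:

```python
def energy(u, v):
    """Energy of the oscillator u'' = -u: kinetic plus potential."""
    return 0.5 * (v * v + u * u)

dt, steps = 0.1, 1000

# Forward Euler: both updates use the OLD values. Energy grows every step.
u, v = 1.0, 0.0
for _ in range(steps):
    u, v = u + dt * v, v - dt * u
e_forward = energy(u, v)

# Symplectic Euler: update velocity first, then use the NEW velocity.
u, v = 1.0, 0.0
for _ in range(steps):
    v = v - dt * u
    u = u + dt * v
e_symplectic = energy(u, v)

print(e_forward)     # thousands of times the initial 0.5: blown up
print(e_symplectic)  # still close to 0.5: bounded
```

Same arithmetic cost, one line reordered — and one of the two is usable for simulation.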

In shader terms: each frame, each pixel reads its current \(u\) and \(v\) from the red and green channels. It samples its four neighbors to compute the Laplacian of \(u\). Then it updates velocity, then uses that new velocity to update displacement, and writes both back.

The Complete Shader

Buffer A:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Initialize: Gaussian bump, zero velocity
    if (iFrame == 0) {
        vec2 center = iResolution.xy * 0.5;
        float d = length(fragCoord - center);
        float u = 3.0 * exp(-d * d / 100.0);
        fragColor = vec4(u, 0.0, 0.0, 1.0);
        return;
    }
    
    ivec2 p = ivec2(fragCoord);
    
    // Read current state
    float u = texelFetch(iChannel0, p, 0).r;
    float v = texelFetch(iChannel0, p, 0).g;
    
    // Laplacian of displacement
    float u_n = texelFetch(iChannel0, p + ivec2( 0,  1), 0).r;
    float u_s = texelFetch(iChannel0, p + ivec2( 0, -1), 0).r;
    float u_e = texelFetch(iChannel0, p + ivec2( 1,  0), 0).r;
    float u_w = texelFetch(iChannel0, p + ivec2(-1,  0), 0).r;
    float laplacian = u_n + u_s + u_e + u_w - 4.0 * u;
    
    // Symplectic Euler: update velocity first, then position
    float dt = 0.3;
    float c = 1.0;
    float newV = v + dt * c * c * laplacian;
    float newU = u + dt * newV;
    
    fragColor = vec4(newU, newV, 0.0, 1.0);
}

Image:

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    float u = texelFetch(iChannel0, ivec2(fragCoord), 0).r;
    u *= 3.0;  // amplify for display
    
    vec3 color;
    if (u > 0.0) {
        color = mix(vec3(0.0), vec3(1.0, 0.5, 0.0), u);
    } else {
        color = mix(vec3(0.0), vec3(0.0, 0.3, 1.0), -u);
    }
    
    fragColor = vec4(color, 1.0);
}
Warning: Stability

Try changing dt to 1.0 and hitting rewind. The simulation explodes.

Even symplectic Euler has a stability limit. For this five-point stencil in two dimensions, the CFL condition requires:

\[dt < \frac{h}{c\sqrt{2}}\]

where \(h\) is the grid spacing (1 pixel for us) and \(c\) is the wave speed. With \(c = 1\), we need \(dt < 1/\sqrt{2} \approx 0.707\). Our choice \(dt = 0.3\) is safe.

If your simulation explodes, try smaller \(dt\).

Adding Ripples

We can add ripples interactively by giving a velocity kick where the mouse clicks. In Buffer A, after computing newV and before writing fragColor:

if (iMouse.z > 0.0) {
    float d = length(fragCoord - iMouse.xy);
    float sigma = 10.0;
    newV += 0.01 * exp(-d * d / (2.0 * sigma * sigma));
}

1.6 Boundary Geometry

Run the wave shader and watch what happens at the edges. The waves reflect — but we didn’t write any reflection code.

Recall from PDE theory: the wave equation on a bounded domain needs boundary conditions. Dirichlet conditions fix the value on the boundary (typically zero). Neumann conditions fix the derivative (typically zero flux). Waves reflect off both, but differently — Dirichlet boundaries invert the wave, Neumann boundaries don’t.

What boundary conditions does our shader have? When we sample outside the buffer, texelFetch returns zero. So pixels at the edge see some neighbors fixed at zero — we’re implicitly enforcing Dirichlet conditions. That’s why the waves reflect.

This suggests an idea: we can impose Dirichlet conditions on any boundary we like by forcing the displacement to zero there. Want waves inside a circle? Set everything outside the circle to zero each frame.

Let’s try it. In Buffer A, we add a function to check whether a pixel is inside our domain, and force outside pixels to zero:

bool inDomain(vec2 fragCoord, vec2 resolution) {
    vec2 center = resolution * 0.5;
    float radius = min(resolution.x, resolution.y) * 0.4;
    return length(fragCoord - center) < radius;
}

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // ... initialization ...
    
    if (!inDomain(fragCoord, iResolution.xy)) {
        fragColor = vec4(0.0);
        return;
    }
    
    // ... rest of simulation ...
}

The waves now reflect off the circular boundary. But we also want the Image shader to color the outside differently. We need inDomain there too, so we copy it:

Image:

bool inDomain(vec2 fragCoord, vec2 resolution) {
    vec2 center = resolution * 0.5;
    float radius = min(resolution.x, resolution.y) * 0.4;
    return length(fragCoord - center) < radius;
}

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    if (!inDomain(fragCoord, iResolution.xy)) {
        fragColor = vec4(0.1, 0.1, 0.1, 1.0);
        return;
    }
    // ... rest of coloring code
}

This works, but now we have the same function in two places. If we change the domain, we have to edit both — and if they fall out of sync, the simulated domain and the displayed domain no longer match.

Shadertoy’s Common tab fixes this. Code in Common is prepended to every shader in your project. Put inDomain there, delete it from Buffer A and Image, and both can use the shared version.

Mandelbrot Boundary

Any domain works. Here’s the Mandelbrot set — a point is inside if its orbit doesn’t escape:

Common:

bool inDomain(vec2 fragCoord, vec2 resolution) {
    vec2 center = resolution * 0.5;
    float scale = min(resolution.x, resolution.y) * 0.3;
    vec2 c = (fragCoord - center) / scale;
    c.x -= 0.5;
    
    vec2 z = vec2(0.0);
    for (int i = 0; i < 100; i++) {
        z = vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y) + c;
        if (dot(z, z) > 4.0) return false;
    }
    return true;
}