WebGL Background Bump Mapping

Originally created in May 2012, revitalized in February 2023

What is it?

Background textures had always been fairly static things. I set out to increase their apparent depth. This project uses WebGL and tiling background images to produce a bump mapped effect.

[Interactive demo: the bump-mapped background component, with a texture preloader and a Diffuse Color control]

A repeating texture image and WebGL are used to create a real-time bump mapping effect. The component above is actually imported directly from my storybook!

Overview

The website’s background, (0,0)x(viewportX,viewportY), is mapped to the square (-1,-1)x(1,1). A virtual light, positioned right about where your face is, points towards the screen. I use the Phong reflection model to do some simple shading. You provide any tiling background image, and its pixels define the texture of the surface.

The tiling textures used on this page were found at Subtle Patterns.

Graphics Model

The image is represented in the XY plane, so the unperturbed normal is simply (0,0,1) (pointing at you). This lets us place the light somewhere at (0,0,+z) and use the mouse to control its X and Y positions. The Phong reflection model is used for shading. Since the viewport is mapped to (-1,-1)x(1,1), it’s easy to reason about the light’s position in this space: (-1,-1) is the bottom left, (1,1) the top right. An offset can be applied to the mouse-derived position, so the light sits off-screen while the mouse still affects the shading, which keeps the effect subtle. I push the light about 2 units away vertically/horizontally because I think that looks prettier.
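Here’s a minimal sketch of that light placement, assuming the offset is a simple multiplier on the normalized mouse position; the names (mouseOffset, lightZ) are illustrative, not the actual implementation:

```typescript
// Hypothetical helper: map the mouse to the light's position in the
// (-1,-1)..(1,1) space described above. mouseOffset and lightZ are
// assumptions, not values from the real code.
function lightPositionFromMouse(
  mouseX: number, // mouse position in CSS pixels
  mouseY: number,
  width: number,  // canvas size in CSS pixels
  height: number
): [number, number, number] {
  // Normalize to (-1,-1)..(1,1), flipping Y so +y points up.
  const x = (mouseX / width) * 2 - 1;
  const y = -((mouseY / height) * 2 - 1);
  // Push the light ~2 units out so it can sit off-screen while the
  // mouse still steers the shading.
  const mouseOffset = 2.0;
  const lightZ = 1.0; // roughly "where your face is"
  return [x * mouseOffset, y * mouseOffset, lightZ];
}
```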

Bump mapping

Forward differencing (via 3 texture lookups) in X (a.k.a. s) and Y (a.k.a. t) is performed on the provided pattern to obtain a gradient. The step sizes are controlled by delta_s and delta_t, and may need adjusting as the texture is scaled for better results. For the s axis, for example, we look at a pixel and the pixel 0.05 to its right, subtract them, and use the difference, modulated by a user-specified scaling factor (bump_scale), to perturb the normal.
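As a sketch, the perturbation might look like this in the fragment shader (embedded as a string, the way shader source usually is in WebGL code; the names u_texture, v_texCoord, delta_s, delta_t, and bump_scale are assumed to mirror the ones described above):

```typescript
// Forward differencing as a GLSL fragment-shader excerpt.
const bumpChunk = /* glsl */ `
  float here  = texture2D(u_texture, v_texCoord).b;
  float right = texture2D(u_texture, v_texCoord + vec2(delta_s, 0.0)).b;
  float above = texture2D(u_texture, v_texCoord + vec2(0.0, delta_t)).b;

  // The differences approximate the texture's gradient; they tilt the
  // flat (0, 0, 1) normal in proportion to bump_scale.
  vec3 normal = normalize(vec3(
    (here - right) * bump_scale,
    (here - above) * bump_scale,
    1.0
  ));
`;
```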

Currently, only the blue channel of this difference is consulted. At first it seems surprising that the result can be as good as it is when we ignore two-thirds of the texture’s color. I think this can be explained intuitively: the pixels of most patterns carry some amount of every channel; rarely does a red consist purely of the red channel. We might be in trouble if someone created a pattern with nothing in the blue channel! The color in the result comes primarily from the diffuse color: the diffuse shading is 80% diffuse color (user-supplied RGB) plus 20% of the RGB of the actual texture image. This keeps the surface from losing color where light shines parallel to lines in the texture (there, the difference is nearly 0.0, meaning no normal perturbation). This might not be necessary with a better differencing algorithm.
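The diffuse blend, again as a hedged GLSL sketch (u_diffuseColor and lightDir are assumed names, and `normal` comes from the bump step; only the Lambertian diffuse term of the Phong model is shown):

```typescript
// The 80/20 diffuse blend as a GLSL excerpt.
const diffuseChunk = /* glsl */ `
  vec3 texColor = texture2D(u_texture, v_texCoord).rgb;
  // 80% user-supplied color, 20% texture color, so the surface keeps
  // some color even where the gradient (and thus the perturbation) is flat.
  vec3 diffuse = mix(texColor, u_diffuseColor, 0.8);
  float lambert = max(dot(normal, lightDir), 0.0);
  gl_FragColor = vec4(diffuse * lambert, 1.0);
`;
```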

Viewport Adjustments

It is very rare for a browser viewport to have a 1:1 aspect ratio. The canvas is stretched to the browser’s viewport dimensions, but internally it is a 1:1 square, so any texture would visibly stretch with it. So, on draw/resize, the texture coordinates are modulated by the browser’s viewport aspect ratio, and square textures display as they were intended.
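A minimal sketch of that correction, with assumed names; only the s:t ratio matters for squareness, while the absolute values set how many times the tile repeats:

```typescript
// Hypothetical helper: scale texture coordinates by the viewport's
// aspect ratio so square tiles stay square.
function aspectCorrectedTexCoordScale(
  viewportWidth: number,
  viewportHeight: number
): [number, number] {
  const aspect = viewportWidth / viewportHeight;
  // The internal 1:1 square is stretched horizontally by `aspect`, so
  // repeat the texture `aspect` more times along s to compensate.
  return [aspect, 1.0];
}
```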

WebGL and power-of-two textures

As I was experimenting with the textures, I noticed that a lot of the tiling textures that exist in the wild are not power-of-two (hereafter, POT) in dimension. Furthermore, almost none of the textures are perfectly square.

WebGL 1 only supports the REPEAT wrap mode (which tiling needs) on textures with POT dimensions, so the original pattern image is redrawn using a temporary 2D canvas if it isn’t POT. This is fine for getting the texture to simply display, but rescaling problems remain (mapping an arbitrary rectangle onto a perfect square squashes the texture). Ultimately, this is due to a deficiency in the simplified model I constructed, but it is easily circumvented: the texture coordinates are further modulated by the texture’s aspect ratio, and textures of any aspect ratio render properly.
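A sketch of that fallback, under the assumption that the redraw simply stretches the image to the next power of two in each dimension:

```typescript
// Hypothetical POT fallback: redraw non-power-of-two images onto a
// temporary 2D canvas so WebGL 1 can REPEAT-wrap them.
function nextPowerOfTwo(n: number): number {
  return 2 ** Math.ceil(Math.log2(n));
}

function toPowerOfTwo(image: HTMLImageElement): TexImageSource {
  const isPOT = (n: number) => (n & (n - 1)) === 0;
  if (isPOT(image.width) && isPOT(image.height)) return image;

  const canvas = document.createElement("canvas");
  canvas.width = nextPowerOfTwo(image.width);
  canvas.height = nextPowerOfTwo(image.height);
  // This squashes/stretches the image; the texture coordinates are
  // later scaled by the original aspect ratio to undo it.
  canvas.getContext("2d")!.drawImage(image, 0, 0, canvas.width, canvas.height);
  return canvas;
}
```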

Texture Coordinate Scaling

In the original version of this code, the texture scale would have to be adjusted depending on the input image. In this more recent revision, I’ve updated that so that 1.0 represents the original texture at 1:1 size, regardless of either texture size or canvas size. Scales above 1.0 will be blurry (as any image stretch will be).
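One way to compute the repeat counts so that a scale of 1.0 means “native pixel size” might look like this (assumed names, not the actual implementation):

```typescript
// Hypothetical scale computation: how many times the tile repeats
// across the canvas so that scale = 1.0 means native pixel size.
function textureRepeats(
  canvasWidth: number,
  canvasHeight: number,
  textureWidth: number,
  textureHeight: number,
  scale: number // 1.0 = 1:1; above 1.0 the tile stretches and blurs
): [number, number] {
  return [
    canvasWidth / (textureWidth * scale),
    canvasHeight / (textureHeight * scale),
  ];
}
```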

Trials and tribulations

  • Security: if you try to use an image hosted on another server with WebGL, it won’t copy the image data into a texture, and worse, it gives you no indication that this happened! I struggled with the texture code for some time before realizing everything was correct, but the origin of the image was not. Luckily, I had a memory from ~10 years ago of struggling with something similar.
  • image.onload doesn’t fire if the image is already cached 😐. It took me a while to dig into that and find a nice solution, as most of my code reacted to the image loading to kick off the next phase of animation. A sketch of workarounds for both issues follows this list.
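Both fixes are well known: opt the image into CORS, and check `complete` after setting `src`. A hypothetical Promise-based loader (the CORS fix also requires the image server to send the appropriate headers):

```typescript
// Hypothetical loader working around both issues above.
function loadTextureImage(url: string): Promise<HTMLImageElement> {
  return new Promise((resolve, reject) => {
    const image = new Image();
    // Without CORS opt-in, a cross-origin image taints the texture
    // upload (the server must also send Access-Control-Allow-Origin).
    image.crossOrigin = "anonymous";
    image.onload = () => resolve(image);
    image.onerror = reject;
    image.src = url;
    // If the image was served from cache, onload may never fire;
    // checking `complete` after setting src covers that case.
    // (Resolving twice is harmless: a Promise settles only once.)
    if (image.complete && image.naturalWidth > 0) resolve(image);
  });
}
```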

Conclusion and reflections

It was a lot of fun to make this initially, nearly ten years ago. Less so for the revitalization, but it has been satisfying fixing bugs in the original. When it was first made, as far as I know, it was the only application of bump mapping to website backgrounds in the world. It might still be? This isn’t a particularly amazing achievement in the grand scheme of computer graphics, however.

The original code from 2012 was unusable due to bitrot and general sloppiness, but I was able to copy and paste large amounts of it and refactor some of it. It’s been interesting reflecting on the old code as I write the new. I wondered about the amount of effort spent continually rebuilding software as things change. The web doesn’t really feel like a substrate where you can build something and then leave it for 50 years. Sometimes web APIs change out from under my code and it no longer works. An example is the web fullscreen API that I used for my <canvas> elements: it worked at one point and then embarrassingly became an inert button years later.

It’s been nice thinking about the drawing model from the perspective of React. WebGL is very stateful and imperative, so there’s a lot of useRef keeping track of various WebGL objects. The original version would’ve been redrawing constantly. With React and useEffect, all the dependencies that factor into making a frame of this animation can be specified up front. In this implementation, when your mouse isn’t moving, there’s no drawing happening because none of the dependencies are changing. It’s nice to know exactly what will factor into a redraw.
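A sketch of that dependency-driven drawing, with assumed state and hook names:

```typescript
// A dependency-driven draw in the spirit described above; the hook,
// state, and uniform wiring are assumptions, not the actual implementation.
import { useEffect, useRef } from "react";

function useBumpMapDraw(
  gl: WebGLRenderingContext | null,
  lightX: number,
  lightY: number,
  bumpScale: number
) {
  const frame = useRef(0);
  useEffect(() => {
    if (!gl) return;
    // Schedule exactly one frame per dependency change; a still mouse
    // means no new frames at all.
    frame.current = requestAnimationFrame(() => {
      // ...set uniforms from lightX / lightY / bumpScale and draw...
    });
    return () => cancelAnimationFrame(frame.current);
  }, [gl, lightX, lightY, bumpScale]);
}
```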

React has overall been an interesting way to use WebGL. Very painful at times, especially when you have to work around hydration errors too, but when it works, it feels pretty nice! I looked into three.js, hoping for a nicer API than the low-level WebGL internals, saw the bundle size, and thought, “Welp, I guess I’m suffering.” TypeScript has been somewhat helpful in avoiding some WebGL bugs.

Was it even worth it? I wondered this a lot throughout. I still don’t really know. I think this could make for a powerful hero banner animation, placed under some really large text, with the light programmatically orbiting. Actually using it as a background requires a pretty subtle bump scale.