Textures

You may need to read the articles about uniforms and varyings before this one.


A texture is an array of data that can be randomly accessed. Textures are mostly used to store image data.

A sampler is a uniform that is used to sample a texture. The texture function returns the texel (texture element) of the given texture at the given texture space coordinates, which span the texture in the range [0, 1] from left to right and from the first row of pixels to the last (typically top to bottom). For example:

#version 300 es

precision highp float;

in vec2 v_texcoord;

uniform sampler2D u_texture;

out vec4 outColor;

void main() {
	outColor = texture(u_texture, v_texcoord);
}

A WebGLTexture can be created with createTexture. Like buffers, textures need to be bound to a binding point with bindTexture in order to be manipulated. The TEXTURE_2D binding point is used for two-dimensional textures, which can be supplied data with texImage2D. The following example creates a one-by-one pink placeholder texture, then loads an image into it.

const texture: WebGLTexture = gl.createTexture();

gl.bindTexture(gl.TEXTURE_2D, texture);

gl.texImage2D(
	gl.TEXTURE_2D, // Target.
	0, // Mip level.
	gl.RGBA, // Internal format.
	1, // Width.
	1, // Height.
	0, // Border (must be zero).
	gl.RGBA, // Format.
	gl.UNSIGNED_BYTE, // Data type.
	new Uint8Array([0xff, 0x00, 0xff, 0xff]) // One pink texel.
);

const image: HTMLImageElement = new Image();
image.addEventListener("load", () => {
	// Once the image loads, replace the placeholder with the image data.
	gl.bindTexture(gl.TEXTURE_2D, texture);
	gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
});
// Request the image with CORS so that its data can be used in WebGL.
image.crossOrigin = "";
image.src = "https://www.lakuna.pw/images/webgl-example-texture.png";

To pass a texture to a sampler uniform, the texture must be bound to a texture unit. The active texture unit can be specified with activeTexture, after which point binding a texture to a binding point assigns it to that texture unit. The index of the texture unit, rather than the texture itself, is passed to the shader program. For example:

gl.activeTexture(gl.TEXTURE0); // Make texture unit 0 active.
gl.bindTexture(gl.TEXTURE_2D, texture); // Assign the texture to texture unit 0.
gl.uniform1i(imageUniformLocation, 0); // Pass the texture unit's index to the sampler.

Check out the artist of the example texture on her website.

Texture Parameters

Textures have various parameters that can be modified with variations of texParameter[fi]. For example:

gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);

A texture's behavior when using texture coordinates outside of texture space can be modified by using TEXTURE_WRAP_S and TEXTURE_WRAP_T for the s and t axes, respectively.

  • REPEAT makes the texture repeat. This is the default functionality.
  • CLAMP_TO_EDGE clamps texture coordinates to the edge of the texture, repeating the edge texels.
  • MIRRORED_REPEAT repeats the texture, but mirrors it for each repetition.

A mipmap is a collection of mips, which are smaller versions of a texture that are used to rasterize the texture at different sizes. The way that a texture is sampled from its mipmap can be specified with the minification and magnification filter parameters. The minification filter is used when drawing anything smaller than the largest mip, and the magnification filter is used when drawing anything larger than the largest mip.

  • NEAREST chooses one pixel from the largest mip.
  • LINEAR chooses four pixels from the largest mip and blends them.
  • NEAREST_MIPMAP_NEAREST chooses the best mip, then picks one pixel from that mip.
  • LINEAR_MIPMAP_NEAREST chooses the best mip, then blends four pixels from that mip.
  • NEAREST_MIPMAP_LINEAR chooses the best two mips, then chooses one pixel from each and blends them.
  • LINEAR_MIPMAP_LINEAR chooses the best two mips, then chooses four pixels from each and blends them.

Since the magnification filter is only used when drawing larger than the largest mip, it can only be NEAREST or LINEAR.
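
For example, a mipmap-based minification filter and a linear magnification filter could be set like this (assuming that the texture is still bound to the TEXTURE_2D binding point):

gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);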

In order for a texture to be rasterized, it must be texture complete, meaning that either its filters only read the largest mip or it has a complete mipmap. The easiest way to generate a complete mipmap is with generateMipmap. For example:

gl.generateMipmap(gl.TEXTURE_2D);
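
Alternatively, a texture can be made texture complete without a mipmap by using a minification filter that only reads the largest mip:

// NEAREST and LINEAR never read smaller mips, so no mipmap is required.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);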

Texture Atlases

A texture atlas is a texture that contains multiple images. Using a texture atlas can simplify a shader and reduce the number of calls to rasterization functions. To use a texture atlas, assign texture coordinates that cover the desired image to whichever face that image should be displayed on.
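
For example, with a two-by-two atlas, a quad that should display the image in the top-left quadrant could use texture coordinates that cover only that quadrant (a sketch; the buffer and attribute setup are omitted):

// One texture coordinate pair per vertex for two triangles that
// cover the top-left quadrant of a two-by-two atlas.
const texcoords = new Float32Array([
	0.0, 0.0,
	0.5, 0.0,
	0.0, 0.5,
	0.0, 0.5,
	0.5, 0.0,
	0.5, 0.5
]);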

Data Textures

Rather than loading texture data from an image, texture data can also be supplied directly. The way that the data is interpreted depends on the internal format of the texture. For example, setting the internal format to R8 causes the texture to expect one unsigned byte per texel, which is normalized to the range [0, 1] and stored in the red channel. Each internal format has a corresponding format and data type that must also be specified; with an internal format of R8, the format must be set to RED and the data type must be set to UNSIGNED_BYTE. For example:

const level = 0;
const internalFormat = gl.R8;
const width = 3;
const height = 2;
const border = 0;
const format = gl.RED;
const type = gl.UNSIGNED_BYTE;
const data = new Uint8Array([0x80, 0x40, 0x80, 0x00, 0xc0, 0x00]);
gl.texImage2D(
	gl.TEXTURE_2D,
	level,
	internalFormat,
	width,
	height,
	border,
	format,
	type,
	data
);

Textures will be malformed if the byte length of each row of their data is not a multiple of their unpack alignment, which specifies the expected alignment of the rows in the supplied texture data. The unpack alignment defaults to four in order to maintain backwards compatibility, but it can be set to one, two, four, or eight.

For example, the texture above is three texels wide and uses one byte per texel, meaning that each row is three bytes long. Since three is not a multiple of two, four, or eight, the unpack alignment must be set to one with pixelStorei like this:

gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);

Projection Mapping

Projection mapping is the process of projecting an image onto a surface. It can be accomplished by creating another camera that acts as a projector, then passing the projector's view projection matrix to the shader program so that each surface position can be determined from the point of view of both the viewer and the projector. For example:

#version 300 es

in vec4 a_position;
in vec2 a_texcoord;

uniform mat4 u_world;
uniform mat4 u_viewerViewProjection;
uniform mat4 u_textureMatrix;

out vec2 v_texcoord;
out vec4 v_projectedTexcoord;

void main() {
	gl_Position = u_viewerViewProjection * u_world * a_position;
	v_texcoord = a_texcoord;
	v_projectedTexcoord = u_textureMatrix * u_world * a_position;
}

The texture matrix is a slightly modified version of the projector's view projection matrix that also converts from clip space to texture space. It can be constructed by multiplying a matrix that translates by 1/2 and then scales by 1/2 in each dimension by the projector's view projection matrix.
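
For example, here is a minimal sketch of building the texture matrix with the gl-matrix library (any matrix library works; the projector's field of view, aspect ratio, position, and target are placeholder values, and textureMatrixLocation is assumed to be the location of u_textureMatrix):

import { mat4 } from "gl-matrix";

// The projector is just another camera: a projection matrix multiplied by a view matrix.
const projectorProjection = mat4.perspective(mat4.create(), Math.PI / 6, 1, 0.1, 10);
const projectorView = mat4.lookAt(mat4.create(), [2, 3, 5], [0, 0, 0], [0, 1, 0]);
const projectorViewProjection = mat4.multiply(
	mat4.create(),
	projectorProjection,
	projectorView
);

// Convert from clip space ([-1, 1]) to texture space ([0, 1]), then apply the
// projector's view projection matrix.
const textureMatrix = mat4.create();
mat4.translate(textureMatrix, textureMatrix, [0.5, 0.5, 0.5]);
mat4.scale(textureMatrix, textureMatrix, [0.5, 0.5, 0.5]);
mat4.multiply(textureMatrix, textureMatrix, projectorViewProjection);

gl.uniformMatrix4fv(textureMatrixLocation, false, textureMatrix);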

The fragment shader needs to determine which texture to use at each position based on whether the position falls within the frustum of the projector. For example:

#version 300 es

precision highp float;

in vec2 v_texcoord;
in vec4 v_projectedTexcoord;

uniform sampler2D u_texture;
uniform sampler2D u_projectedTexture;

out vec4 outColor;

void main() {
	vec2 projectedTexcoord =
		(v_projectedTexcoord.xyz / v_projectedTexcoord.w).xy;

	bool inRange = projectedTexcoord.x >= 0.0
		&& projectedTexcoord.x <= 1.0
		&& projectedTexcoord.y >= 0.0
		&& projectedTexcoord.y <= 1.0;

	vec4 projectedTextureColor =
		texture(u_projectedTexture, projectedTexcoord);

	vec4 textureColor = texture(u_texture, v_texcoord);

	outColor = inRange ? projectedTextureColor : textureColor;
}

Notice that v_projectedTexcoord.xyz is manually divided by v_projectedTexcoord.w to determine the projected texture coordinates because the perspective divide is only applied automatically to gl_Position.

Also notice that rather than drawing the texture on top of existing fragments, texture projection works by checking if a fragment is within the projector's frustum and sampling from the correct texture based on that.

The frustum of the projector is shown in the example above by using the inverse of the projector's view projection matrix as the world matrix of a wireframe cube.
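
A sketch of that technique, again assuming gl-matrix and a hypothetical frustumWorldMatrixLocation for the wireframe cube's world matrix uniform:

// The cube's vertices span clip space ([-1, 1] on each axis), so the inverse of the
// projector's view projection matrix maps them onto the projector's frustum in world space.
const frustumWorldMatrix = mat4.create();
mat4.invert(frustumWorldMatrix, projectorViewProjection);
gl.uniformMatrix4fv(frustumWorldMatrixLocation, false, frustumWorldMatrix);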


The next article is about framebuffers.