This lesson explains what shaders are and how they are executed.
Shaders are small programs that tell your graphics card how to draw objects on the screen. They run on the Graphics Processing Unit, or GPU.
GPUs are fast: your graphics card runs at least one shader on each pixel of the screen every four to 33 milliseconds, that is, roughly 30 to 250 times per second. A display with a resolution of 720p has nearly one million pixels (1280×720 = 921,600). At 1080p, you have over two million pixels, and with a 4K display, over eight million.
How much work that represents depends on the user’s display settings and the power of their machine. To cope, GPUs run those programs in parallel, with each core processing pixels in chunks of 32. Older cards have hundreds of cores, while newer ones come with thousands.
Shaders control the location, shape, and color of the pixels that make up an object on the screen. We use them to shade 3D objects with light information and textures, including casting shadows. We also leverage shaders for post-processing image effects like deforming or blurring the screen, adding glow, or making images look hand-drawn.
On a technical level, shaders are often simple programs that involve some math: they add, subtract, and multiply values like pixel colors and coordinates. While they can seem daunting at first glance and take some learning, they are both powerful and rewarding once you get the hang of the basics.
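For instance, here is a minimal sketch of a fragment shader written in Godot's shading language, which we cover in the next lesson. It darkens a sprite with a single multiplication (the 0.5 factor is an arbitrary value for illustration):

```glsl
shader_type canvas_item;

void fragment() {
    // Sample the sprite's texture at this fragment's texture coordinates.
    vec4 color = texture(TEXTURE, UV);
    // Multiply the color channels by 0.5 to darken the image.
    COLOR = vec4(color.rgb * 0.5, color.a);
}
```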
In this guide, you are going to learn how shaders work at a technical level. In the next lesson, we will talk about how shaders work in Godot specifically.
Any object in the scene goes through similar steps, regardless of the platform used to draw it. In short:

1. The vertex shader positions every vertex of the 3D model.
2. The pipeline assembles the vertices into shapes and projects them onto the screen.
3. Shapes the camera cannot see are culled.
4. The remaining shapes are rasterized, that is, converted into fragments.
5. The fragment shader calculates the color of each fragment.
6. Fragments are tested and blended into the final image.
ℹ Fragments are often presented as pixels, but it is crucial to know the distinction between the two: a fragment is the space a 3D object occupies inside a pixel, and it does not always match pixels one-to-one.
See the Intro/ShaderPipelineIntro.tscn scene for an exaggerated visual demonstration of drawing a cube.
Let’s break down the pipeline in greater detail, following the steps above.
First, the vertex shader runs on every vertex that makes up the 3D model. This shader calculates where each vertex sits in space and the direction it faces.
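To give you an idea of the shape of these programs, here is a minimal vertex shader sketch in Godot's shading language that inflates a mesh by pushing every vertex outward (the 0.1 offset is an arbitrary amount for illustration):

```glsl
shader_type spatial;

void vertex() {
    // Push each vertex along its normal, inflating the mesh slightly.
    VERTEX += NORMAL * 0.1;
}
```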
Once the vertex shader finishes, the resulting points flow through the graphics pipeline, which determines the shape the object takes when projected onto the screen.
At this point, the computer tests which way shapes face and what the camera can see. Any shape oriented so that it cannot be seen is discarded to save rendering work. We call this process culling. By default, renderers cull faces that point away from the camera. Otherwise, the graphics card would render pixels that are not even visible on the screen.
With the superfluous shapes discarded, the graphics card flattens the remaining ones into a 2D image. This step is rasterization: the conversion of vector data into fragments.
A fragment shader runs on every fragment the object occupies and calculates its color. In a rendering pipeline that goes straight from shader to screen, this is the final color. In a more complex pipeline, this information is instead written to textures that the graphics engine later blends.
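For example, here is a minimal sketch of a 3D fragment shader in Godot's shading language that assigns a flat orange color to every fragment of a mesh:

```glsl
shader_type spatial;

void fragment() {
    // Give every fragment the mesh covers a solid orange base color.
    // The engine combines this albedo with lighting later in the pipeline.
    ALBEDO = vec3(1.0, 0.5, 0.0);
}
```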
Fragments are then tested: the graphics card removes the ones hidden from view by other objects and blends together the fragments that should overlap.
ℹ More advanced pipelines include other shader types, like tessellation and geometry shaders, which run between the vertex and fragment stages. Godot 3.2 supports neither of them.
ℹ There is also a type of shader, the compute shader, that works outside the rendering pipeline. These programs run all kinds of general-purpose calculations on the graphics card. OpenCL and CUDA are two related technologies.
Shaders have grown more and more prominent in video games since 2001. Before them, drawing meant sending graphical data directly into video memory and timing it with the screen’s refresh. You could pull off some cool techniques by timing when you drew certain lines. Nowadays, those kinds of techniques live in shaders, which are the standard way to draw in 3D.
What about 2D graphics? The pipeline is the same!
2D sprites are rectangles with images on them, laid down on a virtual floor, and a camera with no sense of perspective looks down at them. Because sprites are still geometry, you can manipulate them at the vertex level to skew or deform their meshes.
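For instance, here is a minimal sketch of a 2D vertex shader in Godot's shading language that skews a sprite by offsetting each vertex horizontally based on its vertical position (the 0.3 factor is an arbitrary amount for illustration):

```glsl
shader_type canvas_item;

void vertex() {
    // Shift each vertex sideways in proportion to its vertical position,
    // skewing the sprite's rectangle.
    VERTEX.x += VERTEX.y * 0.3;
}
```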