Not all textures are meant to be seen: you can use shaders and viewports to gather data for use elsewhere. Frequently used data textures include color gradients and noise textures, and data textures drive procedural content generation too.
In this tutorial, we explore how to build a mask we can use to drive the emission of 2D particles.
Looking through the ParticlesMaterial of a Particles2D node, or the properties of a CPUParticles2D node, we find the Emission Shape. The default shape is Point: a single spot where the particle emitter lives. The shape we're interested in is Points, plural: a set of positions, any one of which may spawn a particle.

These positions are encoded in a special texture for Particles2D, or stored in an array for CPUParticles2D. Either way, we need some way to determine which pixels to light up.
The goal here is to build a mask during a dissolve effect, so it seems like the flames are coming out of wherever the sprite is dissolving. The easiest masks to make are black and white images. You may remember the dissolve shader if you followed previous tutorials:
```glsl
shader_type canvas_item;

uniform sampler2D dissolve_texture;
uniform vec4 burn_color : hint_color = vec4(1);
uniform float burn_size : hint_range(0, 2);
uniform float dissolve_amount : hint_range(0, 1);
uniform float emission_amount;

void fragment() {
	vec4 out_color = texture(TEXTURE, UV);
	float sample = texture(dissolve_texture, UV).r;
	float emission_value = 1.0 - smoothstep(dissolve_amount, dissolve_amount + burn_size, sample);
	vec3 emission = burn_color.rgb * emission_value * emission_amount;
	COLOR = vec4(
		max(out_color.rgb, emission),
		smoothstep(dissolve_amount - burn_size, dissolve_amount, sample) * out_color.a
	);
}
```
We don't need the emission or color information here. Instead of outputting the sprite's colors, we want white wherever `burn_color` would glow, and black everywhere else. Taking this into account, the code ends up being:
```glsl
shader_type canvas_item;

uniform sampler2D dissolve_texture;
uniform float burn_size : hint_range(0, 2);
uniform float dissolve_amount : hint_range(0, 1);

void fragment() {
	vec4 out_color = texture(TEXTURE, UV);
	float sample = texture(dissolve_texture, UV).r;
	float emission_value = 1.0 - smoothstep(dissolve_amount, dissolve_amount + burn_size, sample);
	vec3 emission = vec3(1.0) * emission_value;
	COLOR = vec4(
		max(vec3(0.0), emission),
		smoothstep(dissolve_amount - burn_size, dissolve_amount, sample) * out_color.a
	);
}
```
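To see the mask move, you can animate `dissolve_amount` from a script. Here's a minimal sketch, assuming the sprite uses the shader above as its material and has a Tween child node; the node names and the two-second duration are illustrative:

```gdscript
extends Sprite

# Assumes a Tween child node named "Tween"; adjust to your scene.
onready var _tween: Tween = $Tween

func dissolve(duration := 2.0) -> void:
	# Animate the shader's dissolve_amount uniform from 0 to 1.
	_tween.interpolate_property(
		material, "shader_param/dissolve_amount",
		0.0, 1.0, duration,
		Tween.TRANS_LINEAR, Tween.EASE_IN_OUT
	)
	_tween.start()
```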
Where do we put this altered shader? Treat it as a pre-process shader, one that runs first. It should go in a viewport near the top of the scene tree, before the nodes that will use it. We can add a ViewportContainer holding a Viewport, which in turn contains the sprite we're dissolving. The viewport renders the sprite with this shader, turning it into a mask.
Note: Remember to set the self-modulate color of any viewport container you are using only for sizing purposes to have an alpha of 0.
Use a TextureRect with a black pixel, or a ColorRect, to cover the background of the viewport. This ensures the resulting image is white on black instead of white on grey.
But we have a problem. To analyze this image, we have to iterate over every single pixel to get the data we want. At a full HD resolution of 1920x1080, that's a loop with over 2 million iterations every physics frame. That's an unacceptable amount of processing to spend on one loop while the rest of the game waits. You could boost the performance with clever coding, threads, or coroutines, but there's a secret sauce:
Just do less work.
It's a mask for a bunch of glowing, fiery particles, so it doesn't need to be that accurate.
If we cut the resolution down 8-fold, we get a 240x135 image, which is about 32,000 iterations. That's roughly 1.5% of the pixels in a full HD frame. Not even a blip on the profiler's radar.
To scale the image down, add a second ViewportContainer named ScaledView. Set the alpha of its Self Modulate color to 0 and enable Stretch. Set Stretch Shrink to 8. This makes the resulting viewport's width and height both 8 times smaller. Add a Viewport, and under it add a TextureRect.
Set the TextureRect's Right and Bottom anchors to 1 so it takes up the whole virtual space, and enable Expand. Its Texture should be a ViewportTexture that points to the MaskView viewport.
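Put together, the scene tree ends up looking something like this; the node names here are only suggestions:

```
MaskDemo (Node2D)
├─ MaskView (ViewportContainer)
│  └─ Viewport
│     ├─ ColorRect       # black background
│     └─ Sprite          # uses the black-and-white dissolve shader
└─ ScaledView (ViewportContainer, Stretch on, Stretch Shrink 8)
   └─ Viewport
      └─ TextureRect     # ViewportTexture pointing at MaskView's Viewport
```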
The final consideration is to make sure the Viewport's Usage is set to 2D, from its default of 3D. Color information is encoded differently between 3D and 2D rendering, and we want clean, final color data to pull particle information from.
This scaled view is the viewport we'll analyze: the engine scales the image down for us, without taking up precious CPU cycles to do it ourselves.
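With the scaled viewport in place, the analysis becomes a small loop over the downscaled image. The following is a sketch of how you might collect emission points for a CPUParticles2D node; the node path, the threshold, and reading every physics frame are all assumptions to adapt to your scene:

```gdscript
extends CPUParticles2D

# Hypothetical path to the scaled-down viewport; adjust to your scene.
onready var _scaled_viewport: Viewport = get_node("../ScaledView/Viewport")

func _ready() -> void:
	emission_shape = CPUParticles2D.EMISSION_SHAPE_POINTS

func _physics_process(_delta: float) -> void:
	var image: Image = _scaled_viewport.get_texture().get_data()
	# Viewport images come out flipped unless the Viewport's V Flip is enabled.
	image.flip_y()
	image.lock()
	var points := PoolVector2Array()
	for y in image.get_height():
		for x in image.get_width():
			# White pixels mark the burning edge of the sprite.
			if image.get_pixel(x, y).r > 0.5:
				# Multiply back up by the Stretch Shrink factor.
				points.append(Vector2(x, y) * 8.0)
	image.unlock()
	if points.size() > 0:
		emission_points = points
		emitting = true
```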