You have a mask; now you need particles. There’s just one more step to go: analyzing the mask to figure out which pixels are light enough to count.
Add a Particles2D node to the scene and add a new ParticlesMaterial to it. Configure the particles to look and animate however you want. To make them fire in time with the dissolve, attach a new script to the Particles2D node.
Tip: If you want your particles to glow with a WorldEnvironment, set the modulate color to an HDR color rather than relying on the color ramp.
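For instance, a minimal sketch, assuming a WorldEnvironment with glow enabled is present in the scene; the exact values are up to you and your glow settings:

# Channel values above 1.0 are over-bright (HDR) and will bloom when glow is enabled.
modulate = Color(4.0, 2.0, 1.0)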
The script on the Particles2D node needs an input for the mask. It will run every physics frame, and the magic will happen inside of _process_texture().
extends Particles2D

export var emission_mask: Texture


func _physics_process(_delta: float) -> void:
    _process_texture()


func _process_texture() -> void:
    pass
Assign the new Emission Mask property a ViewportTexture pointing to the ScaledView viewport. All the following code goes into _process_texture(), replacing the pass statement.
The first step is to grab the raw data out of the mask. We can do this with get_data(), which returns an Image object that contains everything we need. The Image object has a Dictionary called data, which contains the width, height, and pixel data.
We need to store a series of Vector2s, so we also set up an array for them.
var data := emission_mask.get_data().data
var width: int = data.width
var height: int = data.height
var raw: PoolByteArray = data.data
var positions := PoolVector2Array()
We iterate over the pixels in a two-dimensional loop based on the width and height:
for x in range(width):
    for y in range(height):
To figure out where in the texture data the information we’re after lives, we have to consider how the image is laid out. Viewport textures are in RGBA8 format: 8-bit (1 byte, [0..255] inclusive) integers for R, G, B, and A, in that order, going across horizontally, then down vertically.
The mathematics to turn a 2D coordinate into a 1D index is y * width + x. Remember that the data comes in chunks of 4 bytes for R, G, B, and A, so we multiply this value by 4 to reach the first byte of any given pixel, which is the byte for R.
Exception: In GLES 2.0, by default, ViewportTextures have 3 bytes per pixel (RGB) instead of 4, so multiply by 3 instead.
The image is black and white, so it doesn’t matter which of the three color channels we read; but keep in mind that you can build more complex masks where the channels mean something.
var idx := (y * width + x) * 4
var byte: int = raw[idx]
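If you need to support both rendering backends in one script, a small sketch like this could pick the stride at runtime and replace the hard-coded 4 above. Checking for Image.FORMAT_RGB8 is an assumption about how your GLES 2 viewport reports its format, so verify it in your project.

# Illustrative only: pick the byte stride from the mask's reported format
# (do this once, before the loops), then use it in place of the hard-coded 4.
var stride := 3 if emission_mask.get_data().get_format() == Image.FORMAT_RGB8 else 4
var idx := (y * width + x) * stride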
To figure out whether the pixel at this position is one we want to emit from, we check if the byte exceeds 128, roughly the 50% intensity threshold. If so, we add this X and Y as a position.
Remember that we scaled our image down by a factor of 8. The Particles2D node might also not sit at the origin, even though the analyzed image carries no ‘position’ data of its own. We need to reverse those transformations to get an accurate final pixel position.
if byte > 128:
    positions.append(Vector2(x, y) * 8 - position)
Once we’re out of the loop, we need to check whether we have any data. If no pixels are lit up, we skip the rest and turn the particle system off entirely.
if positions.size() == 0:
    emitting = false
else:
    pass
We will replace the pass statement in the next section, for when we do have data.
If we do have positions, we now need to encode them in a way that the GPU-based Particles2D system can use. The Points emission shape uses a texture with the positions encoded in it, which Godot sends to the GPU. Its format is RGF: two 32-bit (4-byte) floating point numbers per pixel. In our case, those will be the X and Y of each position we intend to send up.
But ImageTexture, the texture type that lets us create a texture from scratch, takes its data as a PoolByteArray. We need a way to convert floating point positions into bytes, and that’s where the StreamPeerBuffer object comes into play.
A data stream is a series of bytes used to carry complex data; it’s perfect for this, so we create one.
var buffer := StreamPeerBuffer.new()
We iterate over each of our positions and call the put_float() function to write the X and Y floats that make up each Vector2 into the buffer.
for pos in positions:
    buffer.put_float(pos.x)
    buffer.put_float(pos.y)
The GPU expects a texture whose size matches the amount of data we send. We use a width of 2048 pixels and a height equal to the number of positions divided by 2048, plus one so we never end up with a height of 0. For example, 5,000 positions gives a height of 3 rows.
var new_width := 2048
var new_height := (positions.size() / 2048) + 1
When creating an image out of raw data, you must provide every byte, even if it’s 0. We resize the byte array from the buffer, which pads everything we haven’t assigned with zeros while giving it the proper size. Each floating point number is 4 bytes and every pixel holds two of them, so the final size is the product of the width, the height, and 8.
var output := buffer.data_array
output.resize(new_width * new_height * 8)
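As a purely illustrative sanity check, the resized array is never smaller than the bytes we actually wrote, since every position contributed exactly 8 bytes:

# Every position wrote two 4-byte floats, so the declared image always has room for them.
assert(positions.size() * 8 <= new_width * new_height * 8)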
All that’s left is to create the image. We use create_from_data() so that we can pass in our position data, and we give it the RGF color format.
var image := Image.new()
image.create_from_data(new_width, new_height, false, Image.FORMAT_RGF, output)
We can now create an ImageTexture with the create_from_image() function.
var image_texture := ImageTexture.new()
image_texture.create_from_image(image)
We assign it to the particles material, set the emission count, and turn on the system to finish the script.
process_material.emission_shape = ParticlesMaterial.EMISSION_SHAPE_POINTS
process_material.emission_point_texture = image_texture
process_material.emission_point_count = positions.size()
emitting = true
Here is the script in full.
extends Particles2D

export var emission_mask: Texture


func _physics_process(_delta: float) -> void:
    _process_texture()


func _process_texture() -> void:
    # Pull the raw RGBA8 bytes out of the mask.
    var data := emission_mask.get_data().data
    var width: int = data.width
    var height: int = data.height
    var raw: PoolByteArray = data.data
    var positions := PoolVector2Array()
    for x in range(width):
        for y in range(height):
            # 4 bytes per pixel; we only read the red channel.
            var idx := (y * width + x) * 4
            var byte: int = raw[idx]
            if byte > 128:
                # Undo the 8x downscale and the node's own position.
                positions.append(Vector2(x, y) * 8 - position)
    if positions.size() == 0:
        emitting = false
    else:
        # Encode each position as a pair of 32-bit floats.
        var buffer := StreamPeerBuffer.new()
        for pos in positions:
            buffer.put_float(pos.x)
            buffer.put_float(pos.y)
        var new_width := 2048
        var new_height := (positions.size() / 2048) + 1
        var output := buffer.data_array
        output.resize(new_width * new_height * 8)
        # Build an RGF image and texture from the encoded bytes.
        var image := Image.new()
        image.create_from_data(new_width, new_height, false, Image.FORMAT_RGF, output)
        var image_texture := ImageTexture.new()
        image_texture.create_from_image(image)
        process_material.emission_shape = ParticlesMaterial.EMISSION_SHAPE_POINTS
        process_material.emission_point_texture = image_texture
        process_material.emission_point_count = positions.size()
        emitting = true
Fun fact: This code is a simplified version of what the Godot editor does under the hood when you assign it a texture through the Particles menu.
If you are targeting an older or weaker device that uses GLES 2, you can use CPUParticles2D instead. It also has the EMISSION_SHAPE_POINTS shape available, but it does not need a texture.
First, you would not subtract the node’s position from the X and Y, because the CPUParticles2D node has a position of its own, unlike a texture.
positions.append(Vector2(x, y) * 8)
It takes a PoolVector2Array of positions, so you could replace the data encoding with a simple:
emission_shape = CPUParticles2D.EMISSION_SHAPE_POINTS
emission_points = positions
emitting = true
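Putting those changes together, here is a minimal sketch of what the CPU version of the script might look like. It assumes the script sits on a CPUParticles2D node, keeps the same emission_mask export and the 8x scale factor, and reads RGBA8 data as before, so adjust it to your own setup.

extends CPUParticles2D

export var emission_mask: Texture


func _physics_process(_delta: float) -> void:
    _process_texture()


func _process_texture() -> void:
    var data := emission_mask.get_data().data
    var width: int = data.width
    var height: int = data.height
    var raw: PoolByteArray = data.data
    var positions := PoolVector2Array()
    for x in range(width):
        for y in range(height):
            # GLES 2 viewports may use 3 bytes per pixel; adjust the stride if so.
            var idx := (y * width + x) * 4
            if raw[idx] > 128:
                # Unlike the texture version, we do not subtract the node's position.
                positions.append(Vector2(x, y) * 8)
    if positions.size() == 0:
        emitting = false
    else:
        emission_shape = CPUParticles2D.EMISSION_SHAPE_POINTS
        emission_points = positions
        emitting = true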