In our upcoming game Repeat, we work with a world that appears infinitely looped to the player. This mechanic, simple as it is, has become increasingly challenging to implement over time. One of the tricky parts was particles.
Particles, and some other objects that are not meshes (decals, lights, etc.), are implemented as simply as possible: we call Instantiate() at the start of the game and offset the copies to the correct positions. This approach works fine for decals and lights. Particles are different, however, because they carry their own state: the simulation.
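As a rough sketch of that startup step (the class, field names, and single-axis loop are illustrative assumptions, not our actual code), it boils down to instantiating a copy of each non-mesh object at every loop offset:

```csharp
using UnityEngine;

// Hypothetical sketch: spawn a copy of a non-mesh object in each loop
// of the world at startup, offset along the looping axis.
public class LoopInstancer : MonoBehaviour
{
    [SerializeField] GameObject prefab;   // decal, light, particle effect, ...
    [SerializeField] Vector3 loopSize;    // extents of one loop of the world
    [SerializeField] int loopCount = 2;   // copies in each direction

    void Start()
    {
        for (int i = -loopCount; i <= loopCount; i++)
        {
            if (i == 0) continue; // the original already sits in the root cube
            Instantiate(prefab,
                        transform.position + i * loopSize.z * Vector3.forward,
                        transform.rotation);
        }
    }
}
```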
At first we used Unity's default particle system, which runs on the CPU, and later we switched to the GPU one. That was before we even noticed something was wrong: the particle simulation jumps or blinks for a frame when the player crosses the seam of a loop. What now?
This issue was occurring because the player was suddenly looking at a different instance of the effect, namely the one in the “root cube”. The root cube is the initial level, which the player never leaves; only the other objects travel across.
The naive, somewhat hacky way was to swap the instances so that their positions relative to the player stayed the same. This would solve the missing-frame problem and could also visually mask the differing internal states (there would be no visible jumps, but the instances would still hold different state and be simulated separately). After some hours of implementation, however, it became clear this was not the right path. It was a cluster *#$@ of conditions that would have to work in 6 directions.
What I ideally wanted was to implement the particles in such a way that the instances in the loops would share the state of the one in the root cube. After a discussion in our team, we decided to try it. And it worked better than expected.
I considered implementing the animation as pre-baked data that we would load and play back. After some thought, I tried implementing the particle system behaviour in a simple update loop, just to see how hard it would be. I used Debug.DrawLine purely for illustration.
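A minimal sketch of what such a hand-rolled update loop can look like (the counts, speeds, and lifetimes are illustrative assumptions, not the shipped effect):

```csharp
using UnityEngine;

// Hypothetical sketch: a hand-rolled particle update loop,
// visualised with Debug.DrawLine instead of real rendering.
public class WindParticles : MonoBehaviour
{
    const int Count = 64;
    Vector3[] positions = new Vector3[Count];
    Vector3[] velocities = new Vector3[Count];
    float[] lifetimes = new float[Count];

    void Update()
    {
        for (int i = 0; i < Count; i++)
        {
            lifetimes[i] -= Time.deltaTime;
            if (lifetimes[i] <= 0f)
            {
                // respawn at the emitter with a slightly randomised velocity
                positions[i] = transform.position;
                velocities[i] = transform.forward * Random.Range(2f, 4f);
                lifetimes[i] = Random.Range(0.5f, 1.5f);
            }
            positions[i] += velocities[i] * Time.deltaTime;

            // draw a short streak in the direction of travel
            Debug.DrawLine(positions[i], positions[i] + velocities[i] * 0.05f);
        }
    }
}
```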
The loop was extremely easy to write, so much so that I even encapsulated it in a Burst-compiled job, so it can simulate more particles while being less CPU-intensive. The rendering was a little harder because of the custom handling of transforms: the particle quads are oriented toward the camera. Every time I work with shaders in HDRP, I somehow forget that I need to account for Camera-Relative rendering. The result works really well, though. The simulation is very cheap and runs only once, for the root cube. The quads are then constructed in the vertex shader, so there is no mesh data at all.
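Moving the integration step into a Burst job is mostly mechanical; a sketch under the assumption that positions and velocities live in NativeArrays (the job layout and field names are illustrative):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Hypothetical sketch: the per-particle integration step as a
// Burst-compiled parallel job; one Execute call per particle.
[BurstCompile]
struct ParticleStepJob : IJobParallelFor
{
    public float deltaTime;
    [ReadOnly] public NativeArray<float3> velocities;
    public NativeArray<float3> positions;

    public void Execute(int i)
    {
        positions[i] += velocities[i] * deltaTime;
    }
}
```

The respawn logic from the plain loop would move into the job (or a second job) the same way; scheduling it each frame with `job.Schedule(count, 64).Complete()` is enough for a particle count this small.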
The simulated particles are then simply drawn at the different loop positions by setting the transform in a MaterialPropertyBlock and calling DrawProcedural, which is ideal for this game.
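That draw step can be sketched roughly like this, assuming the simulated positions sit in a ComputeBuffer that the vertex shader expands into camera-facing quads (the property names, bounds, and single-axis offsets are assumptions for illustration):

```csharp
using UnityEngine;

// Hypothetical sketch: draw the one simulation at every loop offset
// via a per-offset MaterialPropertyBlock and Graphics.DrawProcedural.
public class LoopedParticleRenderer : MonoBehaviour
{
    [SerializeField] Material material;   // custom HDRP shader building quads
    [SerializeField] Vector3 loopSize;
    [SerializeField] int loopCount = 2;
    ComputeBuffer positionBuffer;         // filled from the simulation job
    MaterialPropertyBlock props;

    void LateUpdate()
    {
        props ??= new MaterialPropertyBlock();
        props.SetBuffer("_Positions", positionBuffer);

        for (int i = -loopCount; i <= loopCount; i++)
        {
            // same simulation data, different world offset per loop instance
            props.SetVector("_LoopOffset", i * loopSize.z * Vector3.forward);
            Graphics.DrawProcedural(material,
                new Bounds(transform.position, Vector3.one * 100f),
                MeshTopology.Triangles,
                6 * positionBuffer.count,  // two triangles per particle quad
                1, null, props);
        }
    }
}
```

Because every loop instance reads the same buffer, the effect is guaranteed to look identical on both sides of a seam; only the offset changes.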
The main cost is that non-technical colleagues lose the ability to quickly iterate on the effect. For our team that matters little, though, because “nobody” was working with Unity's new VFX tools in depth anyway.
It's not like we are switching all the particles over, either. Currently, only the particles of the wind blowing out of pipes are implemented this way, because only those had this issue in the first level. But if it happens again in later levels, I know I can just reimplement the effect the same way.
Personally, I enjoyed writing the update loop of the particle effect in code more than composing it by tweaking parameters of pre-existing components.