As this part was a little more difficult to get right, I took the time to really comprehend the theory behind this part of the tutorial, because otherwise I wouldn’t feel comfortable creating a tutorial about it.

With that out of the way, we can go into the last part of our water tutorial. With this, we’ll have a surface simulation that’s strong enough to be used in a game (given some adjustments for each use case).

Let’s go.

Previously:

In the last part of our tutorial, we discussed “rate of change” and derivatives, and how we adjusted all of this to our style.

What we did last time.

If you check our first tutorial, we established that our water material would only respond to a maximum of 3 water ripples, and would behave very poorly when we added another, either by erasing the previous behaviour or by just ignoring the new ripple. This is unacceptable (if we want reactive water, we should go all the way). Luckily, this is a solved problem, and we will explain how to obtain fully reactive water.

Part I – Reactive Water

First of all, we have to understand the theory behind the simulation we will create. As a fully precise water simulation is out of the question (the overhead is too high), we can use the common approximation: the water is a network of springs, all connected to each other.

For a better understanding, you can read these two posts that go a little deeper into the issue: Paper on Water Simulation by Linear Convolution. This source is fairly advanced, but we will do our best to explain it, using Epic’s sample as the base for our implementation.

Also, in this forum post, they do a very simple implementation of the water simulation (although it’s done on the CPU).

The water simulator (based on Epic’s version) has the following parts:

  • The Height Simulator
  • The “Brush”

Given that the “Brush” part is the easier of the two, we will show it here but won’t go into much detail: it’s just a material that draws a circular splat (like a brush) at a given position.

Force Splat Material.

The next one, though, is far more complex and is the heart of the simulation: the height simulator.

The Euler Method:

To better understand the height simulator, you should first understand the Euler method for approximating values of an unknown curve: Wikipedia’s Article.

In simple terms, for sufficiently short delta times, you can approximate the next value of a curve (or function) by taking the rate of change at the current point and adding it to the known position. It’s like saying: imagine we’re in a car on the road, you are completely blind and want to draw the road, and you ask me to tell you only our direction and velocity at any given point (for you to draw in your notebook); you could approximate the overall road more accurately if I updated you on a more regular basis. Like this:

Example of an Euler Approximation of the road.

The first line of this awesome and technically accurate drawing shows what would happen if I updated you fairly often about our direction (a low enough delta time); the second shows what would happen if I were a dick and only updated you whenever I remembered (the most likely scenario): the delta time is too big for the approximation to stay close to the real road. Remember, the speed is the tangent of the curve at that point (in this example).

The concept of “Convolution”:

I remember the days when they taught me convolution; it’s a really weird operator that works by sliding one function over another and calculating the resulting value. It was really useful for filters and data processing, and convolution was made easier by some math magic (like the Fourier transform or the Laplace transform).

Here in image processing, convolving images is a fairly common operation; a good explanation is given on this page, for example.

Epic’s new bloom method uses convolution to apply the kernel as a convolved mask, and the resulting output is the “fusion” of both pictures (by only convolving pixels above a given light threshold).

In the previous paper (the complex one about water simulation), the author uses convolution as a way to propagate the waves, using different kernels (like the cross kernel we will use).

 

Starting our height simulator:

So first of all, keep in mind that we need buffers to store the previous frame’s data. We will use render targets to store the height simulator’s data; think of it as drawing to a texture that we will use later.

Let’s create our water simulator, starting with the “kernel” of our linear convolution (a cross: for each pixel, the convolution is the sum of its top, bottom, left and right neighbours, with the center of the kernel being zero):

Convolution by using a cross kernel (for faster processing)

With this we have the new height of our pixel, using the convolution alone as the propagation method. Now, because we want to be able to modify the speed of the wave, we can scale the “velocity” of this delta by a factor. If we take the difference between the convolution (current height) and the previous height (the render target’s value at that point), we get the “velocity”, or rate of change, of that specific pixel. We can then multiply it by a factor to increase or decrease this velocity, and finally add the resulting speed to the previous height (the render target data) using the Euler approximation to get the new position (remember, Euler says: NextPosition = PreviousPosition + Velocity).

Using Euler Method to get the next position.

Keep in mind that the convolution returns the sum of 4 pixels, so when we compute the velocity we need to scale one of the inputs to the same level as the other (dividing the convolution result by 4, or multiplying the previous height by 4). We will use Epic’s method, because they do a smoothing pass later (it’s not strictly necessary, but the results are good… so don’t fix what ain’t broken).

The result of this snippet is the new position (given the Euler approximation of the modified velocity); it’s later smoothed by sampling another buffer (2 frames old). This step is not 100% necessary, but if you skip it, remember to normalize the pixel data (by dividing it by 4).

Smoothing and Damping.

As you can see, the result of the Euler approximation is divided by 2 (giving the Subtract input double the weight of the frame − 2 input); by subtracting the buffer that’s 2 frames old, we reach the normalized value (and it’s smoothed in the process).

While that step isn’t strictly necessary, the last part of the snippet does something important: the damping (a value used to diminish the amount of energy in the simulation as time passes). This way, the waves can die out after a while (at 0, waves die on the very next frame; at 1, waves never die).

With this our water simulator kernel is ready, we only need to use it on the right place.

 

Creating the Water Volume:

Next, we need to add the newly created water simulator to our water. We ended up using a volume (contrary to everyone else out there using a 2D surface) because we wanted to add buoyancy to every object inside it.

As the class we created has a lot of features and is pretty big (it lets you create custom meshes for the water volume, register wave behaviours, and even read the heightfield by accessing Unreal’s render thread), we will only concentrate on the simulation part here:

First of all, we want to define Dynamic Material Instances, so each water volume has its own material copy. We also create the render targets (for the height simulation) dynamically, so water volumes don’t influence each other. This is easily attainable by calling:

Dynamic RenderTarget Creation.

Dynamic Material Instances.

Then, we obtain an array of the actors overlapping our water volume and process their movement by drawing the brush material onto the current render target. Keep in mind that you would normally want to check whether the overlapped object is close enough to the surface before doing this:

Using the brush to add ripples to the render target.

Finally, we just need to update the fluid simulation using our newly created material:

Updating the Fluid Kernel.

The results are the following:

 

Okay, Reactive water complete, next step: Buoyancy.

 

Part II – Buoyancy and Behaviours

The next step in creating our water volume is implementing buoyancy for it (or rather, for the objects we want to have buoyancy). For this we will concentrate our efforts on two fronts:

  • Common Actors (non-character).
  • Character.

Why do we differentiate between these two types of actors? Two reasons: in general, we want our character to move in a not-entirely-realistic way (one that offers better control and gameplay value), and Unreal’s Character class already has a MovementComponent responsible for driving movement; adding another component or behaviour that could conflict with it is not a good idea.

First, we will concentrate on common actors. Since we want this behaviour to be reusable, we will use an ActorComponent.

The Buoyancy Component:

A lot of implementations of this type of component use an “Ocean Manager” that provides the height values the buoyancy needs for its processing. We decided to do it differently: instead of a manager per se, each water volume manages and knows all of its own height data. Buoyancy components are then only responsible for simple tasks:

  • Know which WaterVolume (if any) is responsible for driving their elevation.
  • Use some formula to modify the height of the BuoyancyComponent’s owner.
  • Manage some events relevant to the owner.

For the moment, the only things the buoyancy component needs to know are: “Which WaterVolume do I need to use?” and “Where is the water level right now?”. The first question can be answered by performing a trace each frame to get the water volumes in the neighbourhood. The method is called “SearchWaterVolume”.

Search Water.

Do keep in mind that this is a fairly simple search implementation; we could use lambda expressions to sort our array and get the closest water volume (using the hit location as the sorting key). For the moment, though, it works.

After finding and setting the water volume, we can do the buoyancy calculations. For this, we need to know the height of the wave at our (x, y) location. We will just assume our WaterVolume has a method called “GetHeightAtLocation(FVector location);” that we will show later. With this, we can calculate the buoyancy with two approximations:

  • The Snap approximation.
  • The Physics approximation.

The Snap Approximation:

Our first way of approximating buoyancy is to simply snap the actor to the water surface (we can even give it a little sinusoidal movement to simulate dynamism). This really simple method is useful for things like rafts that need to stay afloat no matter what.

Apply Buoyancy Method.

As you can see, if our owner doesn’t have physics enabled, or we don’t want to use it regardless (bUsePhysicsIfAvailable), we just snap the actor to the surface. This lets us do really crazy things like this.

Yup… a big wave

This “raft” is possible because it snaps to the surface (we obviously need to manage rotation, but that’s fine tuning). If we used a physics-based approach, the results wouldn’t be as reliable (the raft could flip over, be submerged for a while, the character would start swimming, etc.). Due to the nature of our game, we need the character to be able to solve certain puzzles, so everything that’s “part of the puzzle” needs to be absolutely reliable.

 

The Physics Approximation:

For the physics method, we will use the real buoyancy behaviour. For this, we first need to understand what the buoyant force is: it’s the upward force, opposing gravity, that acts on a submerged object. The formula is: “Buoyancy = weight of displaced fluid” (directed upward). Because the weight can be thought of as Volume * Density * Gravity, and the displaced volume equals the immersed volume of the object, we can rewrite the formula like this:

Formula

Here, density_fluid / density_object is the ratio that represents the typical buoyancy constant; it ranges over [0, inf), with 1 being neutral buoyancy (that is, both densities are equal, so the object will neither rise nor sink; it floats in place).

Now, because we know the mass of our object and the buoyancy is a constant (for the purposes of our component), we just need to know the immersed “mass” of the object. For this, we do a little trick: we create test points on the object, each carrying an equal share of its total weight, and then check whether each point is submerged. With this setup, the buoyancy method looks like this:

 

Physics Buoyancy

As you can see, this method also detects when the object collides with the water surface, which is really useful for generating waves or more “bizarre” effects. It’s also a good place to apply something most other implementations don’t take into account: “surface tension”. This can be simulated as a “wall” that collides with the object and robs it of part of its energy, that is, an impulse that works like very strong friction (just scale the velocity down by a factor).

With all this done, we just need to address the character. As Unreal’s implementation of the Swimming movement mode relies on PhysicsVolumes, we needed to create a “copy” of this method. It’s not a really clean solution, but we just took the PhysSwimming implementation and adjusted it to use our water volume… Done! The character swims.

 

Other things you can do:

As this is where each game’s implementation will vary, you can do some very awesome things with your water volume. For us, we created “WaveBehaviours” that act as the waves modifying the height in our shader. We created GerstnerBehaviour and ShockwaveBehaviour; both have a CPU simulation and a GPU simulation (for the buoyancy and the shader, respectively). As you register wave behaviours, the WaterVolume’s GetHeightAtLocation method calculates and adds every input to the resulting height (we even gave it the capability to use the heightfield as input too, but that’s a fairly advanced technique that requires knowing and managing multiple threads and the race conditions that a loose implementation could introduce).

GetHeightAtLocation

We added a boolean to enable/disable the heightfield buoyancy, as it’s a slow method that doesn’t add much to our use case.

 

Part III – Deferred Rendering and dealing with Unreal

EDIT: Thanks to “Arnage” for pointing out that we don’t actually need to duplicate the mesh anymore on newer UE4 versions; just check “allow custom depth writes” and do the depth check in the translucent material! We’re going to update our project to do just that. Thanks a lot!

Because Unreal uses deferred rendering (better explained in this awesome article), it has some issues with translucent materials: since they aren’t rendered to the depth buffer, they have trouble dealing with depth. Unreal tries to solve this on a per-object basis (as opposed to per-pixel), drawing the whole object in front of (or behind) others depending on its overall depth. Now, this is all well and good for normal objects, but translucent objects that wrap over themselves have the issue that the engine doesn’t know which pixel to show in front. This is the result:

Issue with inner triangles.

What’s happening here is that the engine doesn’t know which pixel to draw first, because it doesn’t have any depth info (translucent materials are not drawn to the depth buffer).

Now this is where I call out the MVPs: the guys at the Unreal forums, and Tom Looman, who describes easy ways to solve this issue in his article. Because we wanted the surface to be a translucent material (even when the opacity is set to 1) due to our lineart effects (discussed in part 2), we had to “fake” multipass rendering (something Unreal doesn’t have at the moment). We abuse the fact that opaque materials are ALWAYS rendered first. The technique’s goal is to get the depth info of the surface, and for this, we do the following steps:

  • Duplicate our mesh (the surface) with an opaque material.
  • Draw that material to the custom depth buffer.
  • Prevent this material from being rendered in the main pass.
  • Use the custom depth info to cull the pixels that shouldn’t be displayed.

The technique is straightforward: after duplicating the mesh (and making sure the shaders and wave behaviours are synchronized), we set this static mesh to “Not render in the main pass” and “Render in custom depth”. This is the resulting buffer:

Translucent Depth – Rendered.

With this, we just need to use that info in our translucent shader (where it will be available, as translucency is processed after the opaque materials); a simple modification of our shader gives us the result we want.

Culling Pixels.

By comparing the custom depth to the pixel depth of the translucent object, we can check whether each pixel should be displayed. The results are as shown previously.

In case you forgot about it.

This technique can be extended to any translucent object that has these issues. Although duplicating meshes is a tedious job, it works. It would be better, though, to have real multipass rendering.

 

Final words

Well guys, it was a long journey. I tried to be as specific as I could, given the length of the tutorial series and the time I had at hand. If you liked this, please show us your support by commenting here or following us @CritFailStudio or CritialFailureStudio. As always, we’re listening to feedback on our game and to tutorial requests. If you have any doubts, don’t hesitate to ask us here.

 

Cheers!

2 Responses

  1. Arnage said:

    Great write-up, one minor addition: you no longer have to duplicate your mesh manually to write to custom depth. Older engine versions didn’t allow translucent materials to write to it, so you needed the copy, but these days you can simply turn on “allow custom depth writes” and it will be handled internally, saving you the trouble of synchronizing two meshes.

    • HeavyBullets said:

      Awesome, I think it’s because I’m on an older engine version for our game (4.15), so I may have missed that detail.
      I will try to update to a newer version and modify the material pipeline for that. This is great news… will update the tutorial with your new info.

      Thanks!
