Graphics Programming

Volumetric fog/light rendering

Besides the real-time rendering of objects in the world with a physically based BRDF model, another important phenomenon in the real world is participating media. Of course, an object itself can be participating media too. That's why the BRDF is gradually being replaced by the more general BSDF, which regards objects more as media than as surfaces, especially when we talk about subsurface scattering and refraction. Apart from these objects, which have relatively large bodies and actual solid surfaces that we can shade, there is another kind of media that forms a volume without an actual surface and can also interact with light. This kind of media widely exists in our surrounding environment: smoke, fog, water, atmospheric molecules, dust, etc. They consist of components that are too small to see with the naked eye and too small to describe with actual geometry. So we treat them as a whole medium and try to process them in a macroscopic way.

Light behavior when encountering a medium

When we regard objects as relatively solid participating media, the light behavior is actually quite similar. When light reaches a solid surface, it is split into three parts: part of the light is absorbed by the object, part is reflected, and part enters the object, is scattered multiple times, and then travels out of the object. The final color of the shaded pixel is determined by how much, and which portion, of the light finally reaches the human eye or a camera.

This works similarly for volumetric effects. When light enters a volume of participating media, part of the light is absorbed and part of it is scattered inside the volume. Without a solid surface, there is no macroscopic reflection happening.

Figure 1. Absorption [1]
Figure 2. Out-scattering [1]
Figure 3. In-scattering [1]

Describe a volume

When rendering objects, we describe the surface with PBR parameters that represent how the surface interacts with light. Similarly, we describe a volume with scattering parameters according to the light-volume behavior above.

Absorption: How much light is absorbed over a certain distance along a light path.

Scattering: How light is scattered in all surrounding directions. For an isotropic medium, we can assume that when light hits a particle of the medium, the light is scattered equally in all directions, i.e. with a constant phase function of 1/4π.

Phase function: For an anisotropic medium, we need a phase function to describe how much light is scattered in each direction [1] (a common choice is sketched after this list).

Emission: Not considered for now.

These parameters have direct physical meanings. For rendering colors to the screen, absorption and scattering can be replaced by the following two more intuitive parameters:

Albedo: Describes which wavelengths of light are absorbed.

Extinction: Whether it is absorbed or scattered away from the view direction, light entering the volume loses part of its energy. We call the total loss of light along the view direction the extinction.

Finally, we choose albedo, extinction and phase function to describe the volume data because these are more intuitive for our artists.
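
The text does not name a specific phase function; a common choice in volumetric rendering is the Henyey-Greenstein function. A minimal HLSL sketch (the function name and the small clamp on the denominator are our own, not from the original implementation):

// Henyey-Greenstein phase function (illustrative choice; the global phase function
// actually used by the renderer is not specified in the article).
// g in (-1, 1): g = 0 gives the isotropic 1/(4*PI), g > 0 biases scattering forward.
float HenyeyGreensteinPhase(float cosTheta, float g)
{
    const float PI = 3.14159265f;
    float g2    = g * g;
    float denom = 1.0f + g2 - 2.0f * g * cosTheta;
    return (1.0f - g2) / (4.0f * PI * pow(max(denom, 1e-4f), 1.5f));
}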

Render Volume Data

Divide the “lights”

Think about how we shade a surface. In the physical world, the final color of a surface point is determined by the incoming irradiance and the BRDF of the surface. The incoming irradiance is the integral of all light reaching the surface from all directions inside the hemisphere around the surface normal. In a real-time PBR pipeline, we can't afford such complex computation, so we divide the lighting into several parts and handle them separately: emissive, direct diffuse lighting, direct specular lighting, indirect diffuse lighting (lightmaps, SH probes, etc.), indirect specular lighting (reflection probes, screen-space reflection/refraction, etc.), direct light occlusion (shadows) and indirect light occlusion (AO).

We do the same for volume rendering. Apart from emissive (ignored for now) and direct specular lighting (a volume has no solid surface), we compute the remaining lighting terms separately (Figure 4).

Figure 4. Lighting computing of single scattering [2]

The process in the image above [2] shows the basic single scattering of a volume. Two components of light are involved in this process: light from the light source scattered towards the camera through the volume, and indirect light that travels from the light source to a scene surface and is then scattered towards the camera through the volume. The former corresponds to the direct lighting and shadowing of surface rendering. The latter is the transmittance of the environment, which is more like the indirect specular lighting part of surface rendering.
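
For reference, the single-scattering contribution along a view ray can be written in its standard form (following [1][2]), where $\sigma_s$ is the scattering coefficient, $\sigma_t$ the extinction, $p$ the phase function, $V$ the visibility (shadow) term, and $T$ the transmittance between the camera $\mathbf{c}$ and the sample point:

$$ L(\mathbf{c}, \mathbf{v}) = \int_{0}^{D} T(\mathbf{c}, \mathbf{x}_t)\, \sigma_s(\mathbf{x}_t)\, p(\mathbf{v}, \mathbf{l})\, V(\mathbf{x}_t)\, L_{\text{light}}(\mathbf{x}_t)\, dt, \qquad T(\mathbf{c}, \mathbf{x}) = e^{-\int_{\mathbf{c}}^{\mathbf{x}} \sigma_t\, ds} $$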

Up to now, the basic appearance of the volume should be clear. Indirect diffuse and indirect occlusion are still not considered. Similar to surface rendering, indirect diffuse can be simulated by adding a global probe or some local probes, like an ambient color. Indirect occlusion may need a ray march along surrounding directions to get an AO value for the medium, like what we do in SSAO.

Implementation

There are several methods that have been developed to render light scattering effects [2]:

Geometry approach: draw a billboard or a piece of geometry that fits the light type, with a shader that calculates scattering-like effects;

Screen-space approach: ray march along the view direction up to the scene depth in several steps, calculate the scattering color at every step and accumulate the results to get the final scattering color and extinction;

Volumetric approach [2][4]: calculate and store volumetric data and scattering results in frustum-aligned 3D textures.

The screen-space and volumetric approaches can achieve a similar visual result. However, the volumetric approach stores actual 3D scattering results. Even when computing at low resolution, we can use jittering and temporal reprojection in 3D space to minimize the artifacts. Also, the volumetric method has no dependence on geometry information, so transparent objects can have coherent volumetric blend results with opaque objects.

Three frustum-aligned 3D textures are used to compute the final volumetric lighting result: one for volume properties and lighting, one storing the result of the last frame for reprojection, and one for the final transmittance integration along the view direction.
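
For illustration, these textures could be declared in HLSL roughly as follows; the history texture name is an assumption, while the other two names match Code 1 below:

// Frustum-aligned (froxel) volumes. The resolution is typically a fraction of the
// screen resolution times a fixed number of depth slices (e.g. 1/16 res x 64 slices).
Texture3D<float4>   g_ScatteringExtinctionVolume;             // rgb: in-scattered light, a: extinction (written by the lighting pass through a UAV)
Texture3D<float4>   g_HistoryScatteringExtinctionVolume;      // last frame's result, used for temporal reprojection (name is an assumption)
RWTexture3D<float4> g_FinalScatteringTransmittanceVolumeOut;  // rgb: accumulated scattering, a: transmittance along the view ray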

Figure 5. Computational flow of volumetric lighting

Volume data evaluation pass (optional)

Store the volume parameters we use to describe the volume into the texture. Texels of the volume texture are actually in clip space, so we transform them to world space and evaluate each texel value with the volume data parameters defined by artists. Albedo (RGB) and extinction (A) are stored in each texel. This step is optional depending on how complicated the volume description is. Currently, the fog volume can be a simple height fog (only two extinction values, at the min and max height) or a gradient height fog (a gradient describes more detailed albedo and extinction values between the min and max height). The medium can also contain local media volumes with simple shapes. For the simple height case, we don't actually need this pass because the evaluation is quite simple. For more complex volume descriptions, especially with local volumes, we use this pass to store the volume parameters into the volume texture. When we use this pass, we can also add a simple Perlin noise to the volume.
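
A minimal sketch of this pass, assuming a gradient height fog and a hypothetical FroxelToWorld helper that converts a froxel coordinate from clip space to world space; all names, the cbuffer layout and the fog model are illustrative, not the exact setup described here:

// Volume data evaluation pass (sketch).
cbuffer FogParams
{
    float  fogMinHeight;
    float  fogMaxHeight;
    float  extinctionAtMinHeight;
    float  extinctionAtMaxHeight;
    float3 fogAlbedo;
};

RWTexture3D<float4> g_VolumeAlbedoExtinctionOut;    // rgb: albedo, a: extinction

[numthreads(8, 8, 8)]
void EvaluateVolumeDataCS(uint3 id : SV_DispatchThreadID)
{
    float3 worldPos = FroxelToWorld(id);            // hypothetical clip-space -> world-space helper

    // Gradient height fog: interpolate extinction between the min and max heights.
    float t          = saturate((worldPos.y - fogMinHeight) / (fogMaxHeight - fogMinHeight));
    float extinction = lerp(extinctionAtMinHeight, extinctionAtMaxHeight, t);

    // Local fog volumes and an optional Perlin-noise modulation would be accumulated here.
    g_VolumeAlbedoExtinctionOut[id] = float4(fogAlbedo, extinction);
}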

Lighting pass

In this pass, for every texel in the texture, we loop over all lights affecting the current texel and compute the lighting result with direct shadows, based on the texel's albedo, extinction and a global phase function. For indirect diffuse, we currently apply a simple ambient color to all texels as an approximation. Light absorption and out-scattering along the light direction can be handled by transparent shadow map methods such as volumetric shadow maps (not used currently).
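
A rough sketch of the per-froxel lighting loop, reusing the Henyey-Greenstein function above and assuming point lights for brevity; the light structure and the GetWorldPos, GetAttenuation and SampleShadow helpers are hypothetical:

// Lighting pass (sketch). Light data layout and helper functions are assumptions.
struct VolumeLight { float3 position; float3 color; };

StructuredBuffer<VolumeLight> g_Lights;
Texture3D<float4>   g_VolumeAlbedoExtinction;        // output of the volume data pass
RWTexture3D<float4> g_ScatteringExtinctionVolumeOut; // rgb: in-scattered light, a: extinction

cbuffer LightingParams
{
    float3 cameraPos;
    float3 ambientColor;
    uint   lightCount;
    float  phaseG;
};

[numthreads(8, 8, 8)]
void LightingCS(uint3 id : SV_DispatchThreadID)
{
    float4 albedoExtinction = g_VolumeAlbedoExtinction.Load(int4(id, 0));  // rgb: albedo, a: extinction
    float3 worldPos = GetWorldPos(id);                                     // hypothetical froxel -> world helper
    float3 viewDir  = normalize(cameraPos - worldPos);

    // Indirect diffuse approximated by a constant ambient color.
    float3 lighting = ambientColor;

    for (uint i = 0; i < lightCount; ++i)
    {
        float3 lightDir = normalize(g_Lights[i].position - worldPos);
        float  shadow   = SampleShadow(i, worldPos);                       // direct shadow term
        float  phase    = HenyeyGreensteinPhase(dot(viewDir, lightDir), phaseG);
        lighting       += g_Lights[i].color * GetAttenuation(i, worldPos) * shadow * phase;
    }

    // In-scattered light per unit length: sigma_s = albedo * extinction.
    float3 inScattering = lighting * albedoExtinction.rgb * albedoExtinction.a;
    g_ScatteringExtinctionVolumeOut[id] = float4(inScattering, albedoExtinction.a);
}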

Reprojection pass

Given the texture size and computation cost, a low resolution in screen space and a limited number of discrete steps in the z-direction are used for computation. Direct up-sampling causes really obvious artifacts, so we jitter the sampling position inside each texel in clip space and then do temporal reprojection every frame to reduce the low-resolution artifacts. Though artifacts can still be seen at certain angles (mostly in high-frequency details), most of the time the result is acceptable. If a softer shadow map were used, as described in [4], the results should be better.
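
A minimal sketch of the jitter-and-reproject step, assuming hypothetical FroxelToWorld, ComputeFroxelLighting and WorldToPrevFroxelUVW helpers and a history blend weight of 0.9 (a typical value, not necessarily the one used here):

// Temporal reprojection (sketch). Helper names and the history weight are assumptions.
float3 worldPos = FroxelToWorld(id, frameJitter);        // jittered sample position for this frame
float4 current  = ComputeFroxelLighting(worldPos);       // lighting result for this froxel (see lighting pass)
float3 prevUVW  = WorldToPrevFroxelUVW(worldPos);        // reproject into last frame's volume texture

float4 result = current;
if (all(prevUVW >= 0.0) && all(prevUVW <= 1.0))          // history is only valid inside the previous frustum
{
    float4 history = g_HistoryScatteringExtinctionVolume.SampleLevel(g_LinearSampler, prevUVW, 0);
    result = lerp(current, history, 0.9);                // exponential moving average over frames
}
g_ScatteringExtinctionVolumeOut[id] = result;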

Figure 6. Rendering without temporal reprojection
The artifact is very obvious and unacceptable
Figure 7. Rendering with temporal reprojection
The artifact still exists in high-frequency areas but is acceptable

Accumulate pass

For the final scene-volumetric blending, integration should be done along the view direction from near to far to get the color (RGB) and transmittance (A) at a certain depth. With this result, we can simply fetch the blended color and transmittance by sampling the volume texture with a fragment's view-space position.

Figure 8. Accumulate scattering and transmittance from near to far

float4 accumScatteringTransmittance = float4(0.0, 0.0, 0.0, 1.0);
for (uint textureDepth = 0; textureDepth < volumeDepth; ++textureDepth)
{
    // Load the in-scattered light (rgb) and extinction (a) of the current depth slice.
    uint4 coord = uint4(DispatchThreadId.xy, textureDepth, 0);
    float4 scatteringExtinction = g_ScatteringExtinctionVolume.Load(coord);

    // Transmittance of this slice (Beer-Lambert law over one step of length stepLen).
    const float transmittance = exp(-scatteringExtinction.a * stepLen);

    // Add the slice's scattering attenuated by the transmittance accumulated so far,
    // then attenuate the accumulated transmittance by this slice.
    accumScatteringTransmittance.rgb += scatteringExtinction.rgb * accumScatteringTransmittance.a;
    accumScatteringTransmittance.a   *= transmittance;

    g_FinalScatteringTransmittanceVolumeOut[coord.xyz] = accumScatteringTransmittance;
}

Code 1. Accumulation sample code[2]

Final blending

Either in a post process or in an object's pixel shader, the view-space position is easy to get. Sample the accumulated volume texture and mix its color with the scene color using the transmittance value.

FinalColor = VolumeColor + SceneColor * Transmittance

Scene and scattering color mix equation
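
As a sketch, the blend in a post-process pixel shader could look like the following, assuming hypothetical helpers for fetching the linear scene depth and mapping it to a volume slice:

// Final blend (sketch). Helper names and the depth-to-slice mapping are assumptions.
Texture3D<float4> g_FinalScatteringTransmittanceVolume;   // accumulated volume (output of Code 1)
Texture2D<float4> g_SceneColor;
SamplerState      g_LinearSampler;
SamplerState      g_PointSampler;

float4 FinalBlendPS(float2 uv : TEXCOORD0) : SV_Target
{
    float  linearZ   = GetLinearSceneDepth(uv);           // hypothetical scene depth fetch
    float  sliceW    = DepthToVolumeW(linearZ);           // map depth to [0,1] across the depth slices
    float4 scatTrans = g_FinalScatteringTransmittanceVolume.SampleLevel(g_LinearSampler, float3(uv, sliceW), 0);

    float3 sceneColor = g_SceneColor.SampleLevel(g_PointSampler, uv, 0).rgb;

    // FinalColor = VolumeColor + SceneColor * Transmittance
    return float4(scatTrans.rgb + sceneColor * scatTrans.a, 1.0);
}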
Figure 9. Volumetric lighting with a global height fog, two local fog volumes, one directional light, and four local lights.

Efficiency

With 1/16 of the screen resolution, 64 depth slices, and the lighting and volume data from the image above, the timings on PS4 at 1080p are listed in the table below.

Volumetric data params compute    0.107 ms
Lighting the volumetric data      0.31 ms
Temporal reprojection             0.14 ms
Accumulation                      0.126 ms
Final blend                       0.3 ms
Total time                        ~1 ms

Video Sample

Figure 10. Samples of volumetric effects applied in some real game footage

Future Improvements

The most unconvincing visual problems now are indirect diffuse and volume self-shadowing. Indirect diffuse can be integrated with Unity light probes. Self-shadowing could be done with transparent shadow map techniques or by ray marching the medium towards the light source. We'll try to implement these two in the future.

For sun-like directional lights, the scattering color should be approximated with a more accurate atmospheric scattering model, like the sky, instead of the current constant color with a phase function.

References

  • [1] Physically Based Rendering, Second Edition
  • [2] Physically Based and Unified Volumetric Rendering in Frostbite
  • [3] The Comprehensive PBR Guide by Allegorithmic
  • [4] Volumetric Fog: Unified Compute Shader Based Solution to Atmospheric Scattering
