
Frustum Aligned Rendering Solution in VR with Unity’s Scriptable Rendering Pipeline – Part 4

This article is split into several parts. Click a title to jump to that part.

Implementation details (Unity 2018.1.20f1)

As described in the previous article, several classes are needed to implement an SRP in Unity. For the cluster render pipeline described above, three main tasks have to be implemented: drawing the active meshes from the loaded scenes, drawing custom geometry, and dispatching compute shaders.

Draw meshes in loaded scenes

Whether placed by artists or instantiated from prefabs or resources at runtime, most of the objects we need to render live in loaded scenes. With a native API, we would specify shaders and render states, set vertex/index buffers, bind all the shader resources, and then issue the draw calls. In Unity SRP, we do not need to worry about most of these details.

Work we do not need to do:

  1. Vertex and index data manipulation;
  2. Shader assignment: renderable objects must have a MeshRenderer component, and the material and shader used for rendering have already been assigned in the Editor;
  3. Binding of resources listed in the shader properties: the resources owned by a specific material are assigned in the material inspector in the Editor.

Work we still need to do (a minimal sketch follows this list):

  1. Binding of resources not listed in the shader properties: global resources (e.g. custom camera parameters) need to be bound with CommandBuffer.SetGlobal***(…).
  2. Binding of the default camera constant buffers: in the default render pipeline, there are many shader variables shared by all shaders. Most of them are camera related, such as view/projection matrices, screen parameters, projection parameters, etc. Two questions arise here: do we still need them, and how can we keep using them? The first depends on your own choice. Since we can now bind any parameter at any rendering stage ourselves, we can ignore the default parameters entirely, compute and bind whatever we want in custom code, and use the corresponding variables in our shaders, as the official HDRP does. Alternatively, we can leave the default variables in our shaders unchanged, as the official LWRP does. If we want to keep using the default variables as the LWRP does, the second question matters. Conveniently, the SRP provides the ScriptableRenderContext.SetupCameraProperties(…) function to bind all the default parameters to shaders. This function is very helpful when we only want to rewrite part of the shaders and leave shaders such as UI, particles, etc. unchanged but still working under the new render pipeline.
  3. Render target operations: render targets also need to be managed manually with CommandBuffer.***RenderTarget().
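The snippet below is a minimal sketch of this per-camera setup on the Unity 2018.1 experimental SRP API. The _ClusterParams variable and the color/depth target identifiers are hypothetical names used only for illustration, not part of any official API.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Experimental.Rendering; // SRP types live here in Unity 2018.1

public static class CameraSetupExample
{
    // Per-camera setup executed before any DrawRenderers(...) call.
    public static void Setup(ScriptableRenderContext context, Camera camera,
                             RenderTargetIdentifier colorTarget,
                             RenderTargetIdentifier depthTarget,
                             Vector4 clusterParams)
    {
        // 2. Bind the default camera constants (matrices, screen/projection parameters, ...).
        context.SetupCameraProperties(camera);

        var cmd = new CommandBuffer { name = "Setup Camera" };

        // 1. Bind a global resource that is not listed in the shader properties.
        cmd.SetGlobalVector("_ClusterParams", clusterParams); // hypothetical variable

        // 3. Manage the render targets manually.
        cmd.SetRenderTarget(colorTarget, depthTarget);
        cmd.ClearRenderTarget(true, true, Color.clear);

        context.ExecuteCommandBuffer(cmd);
        cmd.Release();
    }
}
```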

The rendering of in-scene objects is handled by the ScriptableRenderContext.DrawRenderers(…)[1] function. All the objects that pass the culling system are candidates for geometry rendering. With this function, we also gain some extra control over how the scenes are rendered (a sketch of such a call follows Figure 1 below).

  1. Object-level control: a layer mask that restricts rendering to objects belonging to the given layers.
  2. Material-level control: a shader pass filter that renders objects with specific passes of their shaders, and a render queue filter that renders only objects whose materials fall into the given render queue range.
  3. Render state control: a custom render state that overrides properties like depth test, stencil test, depth read/write, etc.
Figure 1. Set up render states, shader passes and render queues to draw the visible renderers in the scene with ScriptableRenderContext.DrawRenderers(…)
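Below is a minimal sketch of culling and drawing opaque geometry inside the render loop, using the 2018.1 experimental API. The "ForwardBase" pass name and the sorting flags are assumptions that depend on the shaders actually used.

```csharp
// Cull, then draw the opaque renderers for this camera.
ScriptableCullingParameters cullingParams;
if (!CullResults.GetCullingParameters(camera, out cullingParams))
    return;

var cullResults = new CullResults();
CullResults.Cull(ref cullingParams, context, ref cullResults);

// Material-level control: which shader pass to use and how to sort.
var drawSettings = new DrawRendererSettings(camera, new ShaderPassName("ForwardBase"));
drawSettings.sorting.flags = SortFlags.CommonOpaque;

// Object- and queue-level control: layer mask and render queue range.
var filterSettings = new FilterRenderersSettings(true)
{
    renderQueueRange = RenderQueueRange.opaque,
    layerMask = camera.cullingMask
};

context.DrawRenderers(cullResults.visibleRenderers, ref drawSettings, filterSettings);
```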

The SRP treats the render loop as a per-camera behavior, so the usual approach of creating a new camera for shadow rendering would start another render loop in SRP and cause the problems we described before[2]. Fortunately, SRP provides a native solution with the ScriptableRenderContext.DrawShadows(…) function. With it, we can render shadows with whatever pipeline state we want and without any additional render loop, which gives us a lot of flexibility in the choice of shadow map techniques. The function also makes use of the shadow caster culling module provided by the Umbra culling middleware in Unity to minimize the CPU time spent on shadow rendering (a sketch follows Figure 2).

Figure 2. Set up all the shadow parameters and render the shadow map with ScriptableRenderContext.DrawShadows(…)
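As a minimal sketch, the fragment below renders a single directional shadow split on the 2018.1 experimental API. The shadow map name, resolution and split parameters are placeholders; a real pipeline would loop over lights and cascades.

```csharp
// Render one shadow split for the visible light at lightIndex (placeholder values).
int shadowMapId = Shader.PropertyToID("_CustomShadowMap"); // hypothetical texture name
Matrix4x4 view, proj;
ShadowSplitData splitData;
cullResults.ComputeDirectionalShadowMatricesAndCullingPrimitives(
    lightIndex, 0, 1, Vector3.one, 2048, 0.02f, out view, out proj, out splitData);

var cmd = new CommandBuffer { name = "Render Shadow Map" };
cmd.GetTemporaryRT(shadowMapId, 2048, 2048, 24, FilterMode.Bilinear, RenderTextureFormat.Shadowmap);
cmd.SetRenderTarget(shadowMapId);
cmd.ClearRenderTarget(true, false, Color.clear);
cmd.SetViewProjectionMatrices(view, proj);
context.ExecuteCommandBuffer(cmd);
cmd.Release();

var shadowSettings = new DrawShadowsSettings(cullResults, lightIndex) { splitData = splitData };
context.DrawShadows(ref shadowSettings);
```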

Rendering in-scene objects often requires a mixed usage of CommandBuffer and ScriptableRenderContext, the latter sitting at a higher level of the rendering structure. So after any state assignment or resource binding done with the CommandBuffer class, call ScriptableRenderContext.ExecuteCommandBuffer() to make sure those operations are in place by the time the DrawRenderers(…)/DrawShadows(…) functions execute.

When it comes to stereo rendering, we call ScriptableRenderContext.StartMultiEye() and ScriptableRenderContext.StopMultiEye() as a pair at the beginning and end of a ScriptableRenderContext.DrawRenderers(…) call so that all of the stereo rendering techniques Unity provides work correctly.
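A minimal sketch of that pairing, assuming drawSettings and filterSettings were prepared as in the earlier fragment:

```csharp
// Wrap the draw call so single-pass / instanced stereo works for this camera.
if (camera.stereoEnabled)
    context.StartMultiEye(camera);

context.DrawRenderers(cullResults.visibleRenderers, ref drawSettings, filterSettings);

if (camera.stereoEnabled)
    context.StopMultiEye(camera);
```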

Draw custom meshes

Aside from in-scene objects, we can now draw custom meshes with CommandBuffer member functions, much like we used the Graphics class in the default render pipeline. We can draw a custom mesh, instanced or not, with a specific material; draw a Renderer object directly with all of its rendering data packed together; or draw procedural content that comes from somewhere else.

In our case, the most common use of these drawing commands is drawing full-screen triangles for post-processing.

Figure 4. Draw a full screen triangle with CommandBuffer.DrawProcedural(…)
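A minimal sketch, assuming a blit material whose vertex shader builds the triangle from SV_VertexID; the fullscreenMaterial, source and destination names are placeholders.

```csharp
// Draw a full-screen triangle: no vertex buffer, three vertices, one instance.
var cmd = new CommandBuffer { name = "Fullscreen Pass" };
cmd.SetRenderTarget(destination);
cmd.SetGlobalTexture("_MainTex", source);
cmd.DrawProcedural(Matrix4x4.identity, fullscreenMaterial, 0, MeshTopology.Triangles, 3, 1);
context.ExecuteCommandBuffer(cmd);
cmd.Release();
```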

Dispatch compute shaders

Since DirectX 11, compute shaders have provided a more flexible way of running compute tasks on the GPU. As the previous parts suggested, compute shaders can be executed on the graphics pipe in sync with the geometry rendering tasks using CommandBuffer.DispatchCompute(…). Or they can be executed asynchronously on the compute pipe, using CommandBuffer.CreateGPUFence(…)/CommandBuffer.WaitOnGPUFence(…) together with ScriptableRenderContext.ExecuteCommandBufferAsync(…), to make use of compute resources the graphics pipe cannot fully occupy. The CommandBuffer class also provides the functions needed to set up this async compute.

Figure 5. Set up GPU fences and the graphics pipe tasks
Figure 6. Async compute with ScriptableRenderContext.ExecuteCommandBufferAsync(…)
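A minimal sketch of the fence setup on the 2018.1 API; the compute shader, kernel, buffer and group counts are placeholders.

```csharp
// Dispatch a compute job on the async compute queue and make the graphics
// queue wait for its result before using the buffer.
var asyncCmd = new CommandBuffer { name = "Async Cluster Compute" };
int kernel = clusterShader.FindKernel("CSMain"); // hypothetical kernel
asyncCmd.SetComputeBufferParam(clusterShader, kernel, "_Clusters", clusterBuffer);
asyncCmd.DispatchCompute(clusterShader, kernel, groupsX, groupsY, 1);
GPUFence fence = asyncCmd.CreateGPUFence();        // signaled when the dispatch finishes
context.ExecuteCommandBufferAsync(asyncCmd, ComputeQueueType.Background);

var graphicsCmd = new CommandBuffer { name = "Wait For Compute" };
graphicsCmd.WaitOnGPUFence(fence);                 // graphics pipe waits here
context.ExecuteCommandBuffer(graphicsCmd);
```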

Post-processing

Unity provides a post-processing stack package that supports many post effects such as color grading, antialiasing, SSAO, etc. Along with these effects, the package also provides a volume-based parameter blending system: when we enter a volume trigger, its parameters are blended with the global or other volumes, which makes using different effects in different places very convenient.

Our goal is to let the art team use the post-processing stack exactly as they do in the default pipeline. Based on the HDRP package and the Book of the Dead demo project, most of the effects are achievable with SRP. Generally, there are two types of post effects. The first type relies only on the color and depth buffers, like FXAA, color grading, depth of field, etc. The second type needs more screen information, like temporal AA (which needs velocity) or screen space reflection (which needs almost all the PBR parameters). No matter what rendering method we choose, the color and depth buffers are always there, so effects of the first type are easy to support: just bind the color and depth buffers to the pipeline before rendering the post effects. Effects of the second type need more than color and depth. Either we choose a rendering method that already produces the extra data (e.g. the G-Buffers in deferred rendering contain everything screen space reflection needs), or we add an additional pass to the pipeline to generate it (e.g. an extra velocity pass is needed for effects like TAA or motion blur).

Figure 7. Bind all the textures we need for post-processing and then render all the effects
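A minimal sketch of driving post-processing stack v2 from the custom pipeline; cameraColorId and cameraDepthId are placeholder render target identifiers for our own targets.

```csharp
using UnityEngine.Rendering.PostProcessing;

// Feed our own color/depth targets into the post-processing stack and render it.
var cmd = new CommandBuffer { name = "Post-processing" };
cmd.SetGlobalTexture("_CameraDepthTexture", cameraDepthId); // depth-based effects sample this

var ppContext = new PostProcessRenderContext();
ppContext.Reset();
ppContext.camera = camera;
ppContext.command = cmd;
ppContext.source = cameraColorId;
ppContext.destination = BuiltinRenderTextureType.CameraTarget;

camera.GetComponent<PostProcessLayer>().Render(ppContext);
context.ExecuteCommandBuffer(cmd);
cmd.Release();
```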

Aside from the built-in effects in the post-processing stack, we also have some custom effects, like bleeding and healing screen effects, for gameplay purposes. In the default pipeline these were done by adding a script with a MonoBehaviour.OnRenderImage(…) function that blits the source render target with the shading effect we want, which does not work under SRP. Fortunately, the post-processing stack can be extended with custom shaders simply by inheriting the PostProcessEffectSettings and PostProcessEffectRenderer classes. Problem solved!
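A minimal skeleton of such an extension; the effect name, the shader path "Hidden/Custom/Bleeding" and the _Intensity property are hypothetical.

```csharp
using System;
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

[Serializable]
[PostProcess(typeof(BleedingRenderer), PostProcessEvent.AfterStack, "Custom/Bleeding")]
public sealed class Bleeding : PostProcessEffectSettings
{
    // Blended by the volume system like any built-in effect parameter.
    [Range(0f, 1f)] public FloatParameter intensity = new FloatParameter { value = 0f };
}

public sealed class BleedingRenderer : PostProcessEffectRenderer<Bleeding>
{
    public override void Render(PostProcessRenderContext context)
    {
        var sheet = context.propertySheets.Get(Shader.Find("Hidden/Custom/Bleeding"));
        sheet.properties.SetFloat("_Intensity", settings.intensity);
        context.command.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
    }
}
```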

Make use of the official core package

Along with LWRP and HDRP, Unity provides a CoreRP package. The CoreRP package gives us a bunch of general-purpose modules that save us from repeating the same work ourselves: render target management, a basic shadow atlas, a volume component system similar to the one in the official post-processing package, etc.

Figure 6. Use the volume component class from the Core package to add local and global custom settings

References

