
Frustum Aligned Rendering Solution in VR with Unity’s Scriptable Rendering Pipeline – Part 1

This article is split into the following parts. Click a title to jump to that part.

What is the Unity Scriptable Render Pipeline (SRP)?

What is a render pipeline? The render pipeline we talk about here is not the graphics pipeline on the GPU side[1]. It is the pipeline that defines the rendering behavior of a whole frame[2], rather than the per-batch behavior described by the graphics pipeline[1].

When we want to render a frame with a native graphics API, D3D11 for example, first we need to create a D3D device, a swap chain, and an immediate context. Then we use the device to create all kinds of shader resources, the swap chain to obtain render targets, and the immediate context to execute all kinds of rendering commands (or to record the commands in command lists and then execute them).

The Unity SRP provides a higher level of abstraction and encapsulation over the native render pipeline. Generally, the following classes are involved in an SRP:

RenderPipeline/RenderPipelineAsset [3]

When implementing a renderer with a native API, we often define a class called *Renderer. Inside such a class, there are usually functions called Init(), Update(), Render(), and Quit()/Release() that do exactly what their names suggest.

Unity's RenderPipeline class plays exactly this role. All custom render pipelines must derive from it and implement the necessary functions. Following the conventions of Unity's official sample render pipelines (LWRP and HDRP), the functions we need to implement are the constructor, which does the initialization; Render(), which performs all the per-frame, per-camera updates and the actual rendering with the help of the two classes described below; and Dispose(), which releases all native resources and performs whatever cleanup you need when shutting the pipeline down.
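
Below is a minimal sketch of such a pipeline class. The class name MyRenderPipeline is made up for illustration, and the signatures follow the current non-experimental UnityEngine.Rendering API; the experimental namespace used by early LWRP/HDRP releases differs slightly.

    using UnityEngine;
    using UnityEngine.Rendering;

    // Minimal sketch of a custom pipeline class. Signatures follow the current
    // UnityEngine.Rendering API; early experimental SRP versions differ slightly.
    public class MyRenderPipeline : RenderPipeline
    {
        public MyRenderPipeline()
        {
            // Constructor: allocate buffers, create materials, and so on.
        }

        protected override void Render(ScriptableRenderContext context, Camera[] cameras)
        {
            // Per-frame, per-camera updates and rendering go here,
            // using the culling and context classes described below.
            foreach (var camera in cameras)
            {
                // ... cull, draw, submit ...
            }
        }

        protected override void Dispose(bool disposing)
        {
            base.Dispose(disposing);
            // Release native resources and do any other cleanup here.
        }
    }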

How do we use this render pipeline once all the functions are implemented? We need a RenderPipelineAsset, which is a ScriptableObject. Implement the function that creates the render pipeline we want to use, create an asset of that type, and assign it in the Graphics settings of your project. Voilà, the render pipeline is now online.
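
A matching asset might look like the following sketch, again assuming the current UnityEngine.Rendering API; MyRenderPipelineAsset and the menu path are hypothetical names used only for illustration.

    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch of the matching asset (MyRenderPipelineAsset is a made-up name).
    // Create an instance through the Assets > Create menu and assign it in the
    // Graphics settings of the project to bring the pipeline online.
    [CreateAssetMenu(menuName = "Rendering/My Render Pipeline")]
    public class MyRenderPipelineAsset : RenderPipelineAsset
    {
        protected override RenderPipeline CreatePipeline()
        {
            return new MyRenderPipeline();
        }
    }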

Once we apply a RenderPipelineAsset, we have to handle almost all of the rendering behavior by hand: setting up buffers, resizing render targets, rendering shadow maps, and so on. Although parts of the default Unity renderer's behavior can still be reused with a few simple API calls, a large amount of work is needed to recover the look of the default renderer. As the saying goes, with great flexibility comes great responsibility.

CullResults/ScriptableCullingParameters [3]

With these classes, we can easily consume the results of Unity's built-in culling module. The culling results include the visible meshes, the visible lights, and the visible reflection probes. With this information, we can process the visible lights and reflection probes however we want, enabling techniques such as GPU light culling without wasting computation on invisible data.

We can also use a custom ScriptableCullingParameters to control parts of the culling process. For example, if we use a fully GPU-based lighting method, we do not actually need the geometry-light intersection information computed during culling, and the CullFlag.DisablePerObjectCulling parameter lets us skip that work.
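
As a rough sketch, the culling step could look like this. It uses the current CullingResults/CullingOptions names; the CullResults/CullFlag names used above come from the older experimental API, but the idea is the same.

    using UnityEngine;
    using UnityEngine.Rendering;

    // Rough sketch of the culling step using the current CullingResults API.
    // (CullResults/CullFlag in the text are the older experimental names.)
    static class CullingExample
    {
        public static void CullCamera(ScriptableRenderContext context, Camera camera)
        {
            if (!camera.TryGetCullingParameters(out ScriptableCullingParameters cullingParams))
                return; // nothing to render for this camera

            // When lighting is resolved entirely on the GPU, the per-object
            // light/probe lists computed during culling are not needed.
            cullingParams.cullingOptions |= CullingOptions.DisablePerObjectCulling;

            CullingResults cullingResults = context.Cull(ref cullingParams);

            // cullingResults.visibleLights and cullingResults.visibleReflectionProbes
            // can now be processed for techniques such as GPU light culling.
        }
    }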

ScriptableRenderContext [3]

From here on, behavior similar to the native APIs starts to appear. The SRP framework hides many of the tedious details of native APIs from us: creating a graphics device and a swap chain, creating the various GPU resources, compiling shaders, and so on. These steps are very similar, sometimes identical, within a given family of graphics APIs, and they do not vary much even across platform APIs. So they are handled automatically by the low-level SRP framework, and we do not need to concern ourselves with them at the application level.

When we organize a render pipeline in the Render() function of the RenderPipeline class, the ScriptableRenderContext is conceptually very much like an ID3D11DeviceContext: it is responsible for executing all the commands. However, unlike ID3D11DeviceContext, which can directly execute every command related to drawing, computing, and resource binding, ScriptableRenderContext is designed at a higher level of indirection. The rendering commands we can execute directly through ScriptableRenderContext are scene-related commands only, meaning that only the drawing-related behaviors of objects active in the currently loaded scenes are executed directly by this class (a sketch follows the list below):

  1. Rendering scene geometries with a specific pass and a specific render queue to render targets;
  2. Rendering the shadow map of a light;
  3. Rendering the skybox configured in the Lighting settings;
  4. Setting up camera-related shader parameters that have the same names as in the default render pipeline [4];
  5. Controlling the stereo rendering state.
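
Here is a hedged sketch of such a direct, scene-level pass for one camera, assuming the current UnityEngine.Rendering API (older SRP versions use DrawRendererSettings/FilterRenderersSettings instead); the SRPDefaultUnlit pass tag and the helper names are only for illustration.

    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch of a single opaque pass built only from the scene-level commands
    // the context executes directly.
    static class ScenePassExample
    {
        public static void RenderOpaque(ScriptableRenderContext context, Camera camera, CullingResults cullingResults)
        {
            context.SetupCameraProperties(camera); // built-in per-camera shader parameters

            var sorting = new SortingSettings(camera) { criteria = SortingCriteria.CommonOpaque };
            var drawing = new DrawingSettings(new ShaderTagId("SRPDefaultUnlit"), sorting);
            var filtering = new FilteringSettings(RenderQueueRange.opaque);

            context.DrawRenderers(cullingResults, ref drawing, ref filtering); // scene geometry
            context.DrawSkybox(camera);                                        // skybox from the Lighting settings
            context.Submit();                                                  // nothing reaches the GPU until Submit()
        }
    }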

All the other commands, such as binding custom resources to shaders, changing render targets, dispatching compute shaders, and so on, are first recorded in a CommandBuffer object and then executed (or executed asynchronously) by the ScriptableRenderContext.
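
A minimal sketch of that record-then-execute flow follows; the pass name, the helper, and the custom render target passed in are all hypothetical.

    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch of the record-then-execute flow through a CommandBuffer.
    static class CommandBufferExample
    {
        public static void RunCustomPass(ScriptableRenderContext context, RenderTargetIdentifier customTarget)
        {
            var cmd = new CommandBuffer { name = "Custom Pass" };
            cmd.SetRenderTarget(customTarget);
            cmd.ClearRenderTarget(true, true, Color.black);
            // ... record further resource bindings, draws, or dispatches here ...

            context.ExecuteCommandBuffer(cmd); // schedule the recorded commands on the context
            cmd.Release();
            // ExecuteCommandBufferAsync can be used instead for async compute queues.
        }
    }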

Finally, we submit all the recorded commands to the GPU through the context and the rendering of the frame is complete.

CommandBuffer [3]

Though the name of this class seems to have nothing to do with a context, functionally it is used in almost exactly the same way as a native API context class such as ID3D11DeviceContext.

Figure 1. Comparison of the Unity CommandBuffer class (left) and the D3D11 DeviceContext class (right)

You can do almost everything with a CommandBuffer exactly as you would with a native API. It is at this level that the SRP becomes almost as flexible as a native API: rendering the scene with whatever shaders and render targets you want, choosing the most appropriate way to process data on the GPU, drawing custom geometry, binding custom resources, and so on.
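
For example, here is a sketch of the kind of lower-level work the CommandBuffer API allows; the compute shader, its "_Lights" buffer, the "CSMain" kernel, and the full-screen material are all hypothetical placeholders.

    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch of lower-level GPU work recorded through a CommandBuffer.
    static class GpuWorkExample
    {
        public static void Record(CommandBuffer cmd, ComputeShader lightCulling, ComputeBuffer lightBuffer, Material blitMaterial)
        {
            int kernel = lightCulling.FindKernel("CSMain");
            cmd.SetComputeBufferParam(lightCulling, kernel, "_Lights", lightBuffer); // bind a custom buffer
            cmd.DispatchCompute(lightCulling, kernel, 16, 16, 1);                    // e.g. GPU light culling

            // Draw custom geometry without any MeshRenderer in the scene,
            // e.g. a full-screen triangle for a post-processing pass.
            cmd.DrawProcedural(Matrix4x4.identity, blitMaterial, 0, MeshTopology.Triangles, 3);
        }
    }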

In my use of the SRP so far, I have run into two main drawbacks. One is that we cannot efficiently access the geometry data of meshes placed in the scene, which prevents us from using techniques such as GPU triangle culling. The other is that we cannot participate in how those scene meshes are organized into batches, which is handled automatically by the Unity core. Apart from these problems, I feel no big difference between working with the SRP and a DX11-level native API.

References


