MSAA and the parameter buffer

In the PowerVR SGX architecture guide, it is mentioned that MSAA is implemented efficiently because it is handled entirely on-chip: only the downsampled framebuffer is written back to the (system memory) framebuffer, avoiding the memory bandwidth impact MSAA usually has.


However, how can that work when the pipeline has to be flushed and an intermediate render to the system memory framebuffer has to be performed mid-frame, e.g. due to a parameter buffer overflow (also mentioned in the architecture guide)?

As I understand it, as soon as the parameter buffer fills up completely, everything in it is rendered to the framebuffer. With MSAA enabled, that is a lossy operation, as the (4x larger) tile buffer has to be downsampled to fit the framebuffer. How is a loss of quality avoided? I don't see how the remaining primitives could be rendered correctly with antialiasing when only the downsampled version of the partially rendered frame remains in the color/depth/stencil buffers.
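To make the "lossy" point concrete: the resolve step is, in effect, a box filter over the sub-samples, and once they are averaged the individual sample values cannot be recovered. A minimal sketch (equal weights and this sample layout are illustrative assumptions, not the actual SGX hardware filter):

```c
#include <stdint.h>

/* Resolve one 4x MSAA pixel: average the four sub-samples of one
 * colour channel into a single value. Equal weights are assumed
 * here; the real hardware filter is an implementation detail. */
static uint8_t resolve_4x(const uint8_t samples[4])
{
    unsigned sum = samples[0] + samples[1] + samples[2] + samples[3];
    return (uint8_t)(sum / 4);  /* the per-sample values are lost here */
}
```

For example, a pixel half-covered by a white triangle, samples {0, 0, 255, 255}, resolves to 127; a later primitive covering the other two samples can no longer be blended correctly against the original coverage, because the resolved value alone does not say which samples were covered.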

Does the driver automatically allocate color/depth/stencil buffers big enough for the MSAA sub-samples (e.g. four times the size), render the intermediate buffer contents into those buffers until the whole frame has been rendered, and only downsample at that point? Or does something else happen?

Thanks!

On some platforms, buffers big enough for the MSAA sub-sample data are allocated at surface initialization, so the sub-samples survive when the parameter buffer is filled and a render is kicked. On others, these buffers are allocated on the fly as they are needed. On some other platforms the MSAA sub-samples will be resolved into a single pixel colour before the flush, which is a lossy operation.
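For a feel of the memory side of that trade-off, here is back-of-the-envelope arithmetic for the first two strategies (keeping full sub-sample buffers across the flush). The resolution, sample count, and 4-byte-per-sample formats below are illustrative assumptions, not figures from any particular platform:

```c
#include <stddef.h>

/* Bytes needed for one buffer that stores every sub-sample
 * (colour, or depth/stencil) rather than one value per pixel. */
static size_t msaa_buffer_bytes(size_t width, size_t height,
                                size_t samples, size_t bytes_per_sample)
{
    return width * height * samples * bytes_per_sample;
}
```

At 1024x768 with 4x MSAA and 4 bytes per sample, the colour buffer alone grows from 3 MiB (resolved) to 12 MiB, and a 32-bit depth/stencil buffer adds the same again, which suggests why resolving before the flush can be the preferred option on memory-constrained platforms.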

You should contact the development support team for the particular platform you are targeting, as the technique chosen for the parameter buffer flush cases is a customer-specific balance between memory use and the visual quality of rendered images.

As mentioned in our documentation, the size of the parameter buffer is designed to be large enough that the majority of applications will not hit the storage limit. In almost all cases where this limit is encountered, better CPU culling of drawn objects and mesh optimization to reduce polygon counts will allow the application to stay under the parameter buffer limit.
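The culling advice above can be as simple as a CPU-side bounding-sphere test against the view frustum before issuing each draw call, so rejected geometry never enters the parameter buffer at all. A minimal sketch, where the `Plane`/`Sphere` types and the inward-facing plane convention are assumptions for illustration:

```c
#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 n; float d; } Plane;   /* dot(n, p) + d >= 0 inside; n points inward */
typedef struct { Vec3 c; float r; } Sphere;

/* Return false if the sphere lies entirely outside any of the six
 * frustum planes, meaning the object's draw call can be skipped. */
static bool sphere_in_frustum(const Plane planes[6], Sphere s)
{
    for (int i = 0; i < 6; ++i) {
        float dist = planes[i].n.x * s.c.x
                   + planes[i].n.y * s.c.y
                   + planes[i].n.z * s.c.z
                   + planes[i].d;
        if (dist < -s.r)
            return false;  /* completely outside this plane */
    }
    return true;  /* inside or intersecting: must be drawn */
}
```

A conservative test like this can report some invisible objects as visible (sphere outside the frustum corner but inside every plane), but it never culls a visible one, which is the safe direction for a pre-draw check.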


Thanks, that was exactly the clarification I was hoping for.

I'll try to find out which of the alternatives my platform (Android) uses.

lxgr 2012-05-30 09:41:27