HSR Confusion

I’m a little confused about hidden surface removal and the “normal” depth buffer (i.e. the one we explicitly create).

Say I have 5 boxes that I want to render.  The primitive information is passed into the ISP where it’s rasterized and then HSR is done.  Is the normal depth buffer ever actually queried during any of this?  Does the HSR only look at the primitive information it has been passed?




HSR doesn't necessarily use the depth buffer.

Case 1:
If you rendered 5 boxes and didn't attach a depth buffer to your framebuffer, the boxes would be rendered in submission order. In this case, the ISP would use the tag buffer to keep track of the latest primitive that covers a given pixel within a tile, ensuring that only that primitive's fragment is shaded (removing overdraw).
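To make Case 1 concrete, here is a deliberately simplified, hypothetical model of the tag buffer (not the real hardware, and the function and tile size are made up for illustration): with no depth buffer, each primitive that covers a pixel simply overwrites the tag, so the last-submitted primitive wins.

```python
# Simplified model of the ISP tag buffer with NO depth buffer attached:
# per pixel in the tile, remember only the latest submitted primitive
# that covers it, so only that fragment is ever shaded.

TILE_W, TILE_H = 4, 4  # illustrative tile size

def resolve_tile_no_depth(primitives):
    """primitives: list of (prim_id, set_of_covered_pixels) in submission order."""
    tag_buffer = [[None] * TILE_W for _ in range(TILE_H)]
    for prim_id, covered in primitives:
        for (x, y) in covered:
            tag_buffer[y][x] = prim_id  # later submissions simply overwrite
    return tag_buffer
```

For example, if box "A" and box "B" both cover pixel (1, 1) and "B" is submitted later, the tag buffer ends up referencing "B" at that pixel, and only "B"'s fragment is shaded there.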

Case 2:
If you rendered 5 boxes with a depth buffer attached, the fragment closest to the camera would be rendered. In this case, the ISP does use the depth buffer. If a fragment of the primitive being processed is closer to the camera than the current value in the depth buffer for that pixel, the depth buffer is updated with that fragment's depth value and the tag buffer is updated to reference the fragment. If the fragment is further from the camera than the value in the depth buffer, it is ignored. This process repeats for all primitives within the tile until the ISP has built a complete tag buffer referencing every fragment that needs to be rendered.
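Case 2 can be sketched the same way. This is again a simplified, hypothetical model rather than the actual hardware: per pixel, the ISP keeps the closest depth seen so far plus a tag referencing the winning fragment, and rejects anything further away.

```python
# Simplified model of HSR with a depth buffer attached: per pixel,
# keep the closest depth seen so far and a tag referencing the
# winning fragment; farther fragments are rejected (hidden).

TILE_W, TILE_H = 4, 4  # illustrative tile size

def resolve_tile_with_depth(fragments):
    """fragments: iterable of (prim_id, x, y, depth); lower depth = closer."""
    depth_buffer = [[float("inf")] * TILE_W for _ in range(TILE_H)]
    tag_buffer = [[None] * TILE_W for _ in range(TILE_H)]
    for prim_id, x, y, depth in fragments:
        if depth < depth_buffer[y][x]:   # closer than what we have so far
            depth_buffer[y][x] = depth   # update the depth value
            tag_buffer[y][x] = prim_id   # tag now references this fragment
        # otherwise the fragment is ignored
    return tag_buffer, depth_buffer
```

Note that submission order no longer decides the winner: a fragment at depth 0.3 beats one at 0.8 regardless of which arrived first.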

You may already be aware, but our SDK documentation (including our "SGX Architecture Guide for Developers" doc) can be found here. I've filed a bug against the document for the HSR explanation to be improved.



Before the alpha test runs, how do we know that primitives are transparent? If something is translucent, distance to the camera alone does not rule it out, does it? Does HSR not work in that case?

Hi kris007,

Thanks for your message.

You need to know beforehand whether a scene element has to be rendered with alpha testing; usually the material used for the scene element gives you that information. Alpha testing is just a binary approach to transparency: if a texel has an alpha value of 0, you discard the fragment; if it has an alpha value of 1, you do not discard it. There is a good explanation here.
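In plain Python rather than shader code, the binary nature of alpha testing looks like this (a minimal sketch; the function name and the 0.5 threshold are illustrative, not from any real API):

```python
# Alpha testing is a keep-or-discard decision with no partial blending:
# a fragment either survives or is thrown away based on its alpha value.

def alpha_test(fragment_alpha, threshold=0.5):
    """Return True if the fragment survives (is drawn), False if discarded."""
    return fragment_alpha >= threshold
```

A fully transparent texel (alpha 0) is discarded, a fully opaque one (alpha 1) is kept, and there is no in-between result.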

Regarding alpha blending, HSR only works for opaque geometry, so as soon as you activate blending you will lose the hardware performance advantage provided by HSR.

The recommended order in which to render your scene elements is: opaque first, then alpha tested, then alpha blended, sorting the alpha-blended scene elements from furthest to closest to the camera (back to front) so that they blend correctly.
We recommend avoiding the discard operation as much as possible, preferring alpha blending over alpha testing. You can read the PVR Performance Recommendations document for more details.
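That submission order can be sketched as a sort key. This is a hypothetical helper, not part of any PowerVR API, and the field names are made up for illustration; note that standard alpha blending requires drawing translucent surfaces back to front.

```python
# Hypothetical sketch of the recommended submission order: opaque first,
# then alpha-tested, then alpha-blended sorted back to front.

ORDER = {"opaque": 0, "alpha_test": 1, "alpha_blend": 2}

def submission_order(draws):
    """draws: list of dicts with 'kind' and 'distance' (to camera) keys."""
    return sorted(
        draws,
        key=lambda d: (
            ORDER[d["kind"]],
            # blended geometry: farthest first (back to front); others unsorted
            -d["distance"] if d["kind"] == "alpha_blend" else 0.0,
        ),
    )
```

With this ordering, HSR gets the full benefit of the opaque pass, and the blended pass composites correctly over it.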

Best regards and many thanks,
Alejandro