I’ve searched the net and haven’t found a definitive “best practice,” so I’ve come to this forum to seek some guidance.
I’ve been working on my own engine and have run into a problem. The engine has a “Camera” class which is exposed to users of the engine. A camera can also have any number of “PostProcessors” attached (turning everything grayscale, tilt shift, some weather effects, etc.). Conceptually, all of this works well. The PostProcessor API specifies that a PostProcessor takes, as input, a color and/or depth texture, representing what the camera has rendered.
The problem is that the cameras are sometimes rendered into FBOs which are NOT GUARANTEED to have a color texture. An example is a camera rendering to the “default FBO” (which, on iOS devices, is constructed by calling EAGLContext’s renderbufferStorage:fromDrawable: with a CAEAGLLayer). In most cases the default FBO’s color attachment is a renderbuffer rather than a texture.
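For reference, here is a minimal sketch of that default-FBO setup on iOS (error handling elided; the context and layer variables, and the Objective-C call shown as a comment, are assumptions on my part):

```c
#include <OpenGLES/ES2/gl.h>

// Default framebuffer setup on iOS: the color attachment is a
// renderbuffer whose storage comes from the CAEAGLLayer, so there
// is no color *texture* to hand to a PostProcessor.
GLuint defaultFBO, colorRenderbuffer;
glGenFramebuffers(1, &defaultFBO);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);

glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
// Objective-C call that allocates the renderbuffer's storage
// directly from the layer backing the view:
// [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRenderbuffer);
```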
So, for my PostProcessor API to behave properly in all cases (and to support other features such as temporal motion blur), as I see it I need to do one of two things:
- I can have all cameras at all times render to a separate “default FBOPrime”, an FBO which has a color and/or depth texture attached, and then, at the end of every frame, bind the “default FBO” (the one with the color renderbuffer) and do a full-screen blit of FBOPrime’s color texture. I think this is problematic for performance: if no camera has any post processors, the cameras are still rendering into textures that don’t need to exist, and since rendering to a texture is slower than rendering to a renderbuffer (from my understanding), this becomes a performance hit even when no post processors are present.
- When a camera has post processors, I can copy the contents of the “default FBO” (with its renderbuffer color attachment) into a “scratch FBO” that has a color and depth texture. The copy only has to happen once per camera, and only for cameras that actually have post processors attached. Not an ideal scenario, but I am having difficulty thinking of a different workaround.
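To make the second option concrete, here is a rough sketch of the kind of scratch FBO I have in mind (the texture formats, filtering choices, and the depth-texture attachment, which needs GL_OES_depth_texture on ES 2.0, are assumptions on my part):

```c
#include <OpenGLES/ES2/gl.h>

// Scratch FBO with texture color and depth attachments, created
// lazily and only for cameras that actually have post processors.
GLuint scratchFBO, colorTex, depthTex;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Depth as a texture requires GL_OES_depth_texture on ES 2.0.
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);

glGenFramebuffers(1, &scratchFBO);
glBindFramebuffer(GL_FRAMEBUFFER, scratchFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
```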
To do the copy into a texture, I was planning to use glCopyTexSubImage2D. From what I have read, glCopyTexSubImage2D is costly on PowerVR chips because their tile-based deferred rendering (TBDR) pipeline has to flush and resolve the frame before the copy can happen. Are there less costly alternatives for getting the behavior I want (copying an FBO color attachment to a texture)? And would glCopyTexSubImage2D actually be any more costly than writing to a texture (instead of a renderbuffer) and doing a full-screen blit of that texture?
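For what it’s worth, the copy I’m planning looks roughly like this (glCopyTexSubImage2D reads from the currently bound framebuffer into the currently bound texture; the variable names are mine):

```c
#include <OpenGLES/ES2/gl.h>

// Copy the default FBO's color attachment into colorTex. On TBDR
// GPUs this forces the tile buffer to be resolved mid-frame, which
// is where the reported cost comes from.
glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO); // source: read from here
glBindTexture(GL_TEXTURE_2D, colorTex);        // destination texture
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,           // destination offset in the texture
                    0, 0,           // source origin in the framebuffer
                    width, height); // region to copy
```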
Any help would be greatly appreciated. I’m going to go forward with glCopyTexSubImage2D just to get some numbers, but I’m open to better alternatives.