I am trying to calculate the GPU time each OpenGL context consumes on an OMAP5 SGX GPU. To do this, I have been looking into the omapdrm-pvr kernel module.
From my exploration so far, it appears that the module sends a command queue to the GPU and tells it to start working via a kick command. When the GPU finishes the work, it raises an interrupt that is handled by the module's MISR, signalling that the work is complete.
By timestamping the KICK and the MISR I can calculate the overall GPU load, but how can I calculate it for individual OpenGL contexts?
Is there any help I can get here, given that the OMAP5 SGX driver follows the Linux DRM architecture?
The support we provide is at the application level, so we can only offer limited assistance with driver-level work like this. It's worth noting that OpenGL ES contexts do not map directly to GPU contexts: the driver serializes work before it is submitted, as the GPU has one rendering context per PID.
Unless you have a specific reason for writing your own tool, I would recommend using our PVRTune profiler instead of modifying the kernel driver. PVRTune gives timing data per PID, and it can also capture the time spent processing API calls per OpenGL ES context on the CPU (this requires installing and configuring our PVRTrace libraries; our documentation explains the process).