Bad performance on Samsung Galaxy

Hi.

    I have been writing some OpenGL ES 2.0 code, running on both my Galaxy S (GT-I9000) and Asus Transformer. The Galaxy exhibits a truly weird behaviour. When the phone is connected to the PC via USB, or connected wirelessly using your SGXPerfServer, I get smooth 30 fps (my programmed rate) performance. When the phone is disconnected, the frame rate drops to approximately half that and becomes very jerky, seeming to speed up markedly for a frame or two every couple of seconds. There appears to be no appreciable garbage collection that might explain the effect. It appears not to be a configuration issue, as the behaviour switches between good and bad while the app is running, simply by plugging the USB cable in and out. The Asus Transformer runs the same irrespective of connection status.

    I use pure Java and a standard GLSurfaceView. The exact same behaviour is seen with test programs (nVidia's, yours) which use Java merely to invoke standard OpenGL ES 2.0 code in a C++ library.
    The effect is seen in a number of standard benchmark programs, as long as they are stressing the system - if the thing is running happily at 50 fps it is smooth. If it has a meaty render to do - say 10-20 fps - then it becomes fairly useless.
    On tracing in Eclipse, I can see that the length of identical calls to glDrawArrays varies wildly - maybe this is where the variability comes in. I also seem to be able to get the phone into a state where all of these calls become unusually lengthened.


    I moved over from Java 2D code to OpenGL to get better performance, which I have achieved. Unfortunately the jerkiness of the animations I produce means that my app actually looked better in pure Java.


Any help/suggestions gratefully received!


Interesting finding - turning on Debug method tracing on a non-connected Galaxy doesn’t reduce the framerate noticeably but does (almost) remove the choppiness.

Choppiness on a connected Galaxy is actually still present, but at a hugely reduced scale - it can be seen in traceview most obviously as occasional much faster frames, in which all calls, especially glDrawArrays, are speeded up.

My guess would be power saving.

Plugged into USB == externally powered, power saving disabled.

“Meaty render” == larger render time, more CPU idle time

Debug build == more CPU work, less CPU idle time.

CPU idle time leads to power saving, via something like CPU being shutdown/downclocked.

Many thanks for your answer.

OK, USB -> no power-saving, perhaps. But why the good performance when running SGXPerfServer - as long as it is actually connected? Does that, and the (alas intermittent!) effect of call tracing, not suggest that there is some form of multi-thread timing / synchronisation issue involved?

All builds are debug builds. The meaty renders I talk about are simply more small renders being composited within the frame time - still a large-ish CPU load.
The effect on frame time and jerkiness is instantaneous -

run animation - plug in USB cable - nearly 2x frame rate, smooth,

unplug - slows down and becomes jerky with the occasional fast few frames.

The same thing with SGXPerfServer - run it, remain unconnected, and the program is slow and jerky; the instant I connect from PVRTuneDeveloper on the PC, performance is good.

Somehow power saving doesn't sound like the problem?




call tracing == less CPU idle time

SGXPerfServer communicating with PVRTune == less CPU idle time

If the power-saving scheme is 'after idle for N time, save power' then, as you've found, it will feel very random in terms of when the reduced perf happens: the idle time can vary a tiny bit, hit the threshold, and result in a large difference in perf.
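That threshold behaviour can be illustrated with a toy model (a sketch only - the class name, threshold, and clock values are all invented, purely to show why a tiny change in idle time can produce a large, random-feeling perf swing):

```java
// Toy model of a threshold-based power governor: a frame whose CPU idle
// time crosses the threshold runs at a reduced clock. All numbers are
// invented for illustration; real governors are far more complicated.
public class GovernorModel {
    static final double IDLE_THRESHOLD_MS = 5.0;
    static final double FULL_CLOCK = 1.0;
    static final double THROTTLED_CLOCK = 0.5;

    // Effective clock multiplier for a frame, given its idle time.
    public static double clockFor(double idleMs) {
        return idleMs >= IDLE_THRESHOLD_MS ? THROTTLED_CLOCK : FULL_CLOCK;
    }
}
```

With these made-up numbers, a frame idling 4.9 ms runs at full clock while one idling 5.1 ms runs at half clock - a 0.2 ms difference in idle time doubling the frame cost.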

The symptoms do fit the scenario I’ve outlined, but I may be guessing wrong.

Test: try creating a background thread that runs an infinite loop at the lowest CPU priority so that it burns all idle time. Your app should go full speed and power saving will never cut in (theoretically…). Even if this works I wouldn’t suggest leaving it in place unless you like short battery life, but it may help you know where to look for solutions.
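A minimal sketch of that test in plain Java (class and method names are mine, not from any API): a lowest-priority daemon thread that spins forever, so the CPU never sees an idle window, while real work still preempts it.

```java
// Hypothetical idle-burner for testing the power-saving theory.
// Deliberately wasteful: only for diagnosis, never for shipping.
public class IdleBurner {
    private volatile boolean running;
    private Thread worker;

    public void start() {
        running = true;
        worker = new Thread(() -> {
            while (running) {
                // Busy-loop: no sleep, no wait - deliberately burn cycles
                // so the governor never detects an idle period.
            }
        }, "idle-burner");
        worker.setPriority(Thread.MIN_PRIORITY); // yield to all real work
        worker.setDaemon(true);                  // don't block app exit
        worker.start();
    }

    public boolean isBurning() {
        return worker != null && worker.isAlive();
    }

    public void stop() {
        running = false; // loop observes the volatile flag and exits
    }
}
```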



    You were completely correct - it is a CPU throttling problem. For some reason I thought that worked on longer timescales - it seems it can change 50 times a second. And, in the absence of a thread to soak up the idle time, the profiling from Eclipse/Android shows no gaps, suggesting that the system is working flat out. The more I tightened my code, the worse, in a sense, I made the problem.


That's the good news, lol. The bad news is that it seems to have brought a load of SharedBufferStack lock-timeout errors out of the woodwork. This basically stops the app for 1 or 2 seconds. It seems to be a fairly common problem - some developers report that it can even shut down phones. Some suggest that non-square textures are the problem, others that having an overlaid normal view/canvas is bad. The latter doesn't apply in my case; the former changes the timing quite noticeably, so it may not have fixed it, merely pushed the problem into a different set of test conditions. All of my textures are power-of-two. Any knowledge of this?


Since you are clearly on a roll :) do you have any suggestions for what might cause tearing (I think that is the appropriate term - my OpenGL experience stretches back only a few months)? During animation I will get the odd frame where a random small piece is incorrectly rendered - as though one part of my compositing has not happened, leaving wrong colours. Where might I look? I am using direct buffers everywhere except when doing glReadPixels. I also found a terrible bug in Android 3.0+ where the direct allocation routine actually reserves four times as much memory as requested, blowing my app out of the water when I need to save rendered frames.


Sorry to have asked you another couple of questions, but I have asked these things on a number of forums and not received any sensible replies. The port of my graphically intensive app to OpenGL is essentially done, but unusable unless I can nail down these issues.


Once again, thank you very much for your "throttling" answer. I guess too many years working on parallel architectures and the misleading traces led me to look for something too complicated, lol.






Pleased it helped!

calderwa wrote:

I guess this is some Android-specific issue, I can't help there.

calderwa wrote:
do you have any suggestions for what might cause tearing (I think that is the appropriate term - my OpenGL experience stretches back only a few months)? During animation i will get the odd frame where a random small piece is incorrectly rendered - like one part of my compositing has not happened,  leaving wrong colours. Where might I look? I am using direct buffers everywhere except when doing glReadPixels.

I'm not fully following your description. Can you be more precise, or post a video on your website? (Would the problem show up when saving every rendered frame, or is it only on-screen?)

Also, glReadPixels: are you calling it only occasionally to grab the result of a render to a file, or every frame? The frames on which you call it will be slow.




  An example of the effect I tried to describe:


Just above the right and left arrow buttons you can see two or three bad areas, with rectangular margins. Four fairly similar layers have been composited by repeated render-to-texture using a pair of framebuffers and custom shaders. The effect looks rather as though parts of the third layer did not get written into the then-current framebuffer, but I could be wrong. You can certainly see a bright diagonal edge at the left which seems to correspond to the second layer.


This was seen in just one frame of a 100 frame render to file, where each frame is extracted from a framebuffer and saved to storage. The frames on either side were perfect.


The effect is also seen when simply animating the frames to the display, with no glReadPixels involved. The synchronisation between the two main threads - the one advancing the state of the objects to be drawn, and the OpenGL thread - is via semaphores. All objects are updated, a request is made for the rendering to be done, and the main thread waits on a lock for the OpenGL thread to return from its rendering. The presence of glFlush/glFinish has no effect on this as far as I can see. According to tracing, the two phases are quite distinct, apart from what happens in GL after it "finishes" - the Java framework seems to invoke the buffer-swap functionality at that point.

I either get the problem a lot or hardly ever, depending I think to some extent on the number/nature of the objects to be drawn (but I have not discovered any magic formula to produce it at will). All three objects above the bottom one used the same very simple "darken" shader. Each of these objects consisted of a triangle fan, with 256x4 textures at 4 bytes per texel. This rendering loop had no "CPU stimulating" thread running.
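For concreteness, a minimal sketch of the update/render handshake just described, assuming two semaphores (class and method names are mine, not the actual app code): the update thread signals a render request and then blocks until the GL thread reports completion.

```java
import java.util.concurrent.Semaphore;

// Illustrative two-semaphore handshake between an update thread and a
// GL/render thread. Both semaphores start at 0 so each side blocks
// until the other signals.
public class RenderHandshake {
    private final Semaphore renderRequested = new Semaphore(0);
    private final Semaphore renderFinished  = new Semaphore(0);

    // Called from the update (main) thread each frame.
    public void requestRenderAndWait() throws InterruptedException {
        renderRequested.release(); // hand the frame to the GL thread
        renderFinished.acquire();  // block until rendering is done
    }

    // Called from the GL thread at the top of its loop.
    public void awaitRequest() throws InterruptedException {
        renderRequested.acquire();
    }

    // Called from the GL thread after its draw calls return.
    public void signalFinished() {
        renderFinished.release();
    }
}
```

Note that with this scheme the GL thread "returning from its rendering" only means the commands were submitted, not that the GPU has executed them - the driver may still be working after signalFinished().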

An Eclipse trace (of a different example running the animation, rather than render, loop) can be seen at:


Any suggestions will be received gratefully.






  A bad frame captured in tracing:


It shows a bad frame then a good one. The good frames all looked very similar - the glDrawArrays calls broken in two with "idle" time. There were two bad frames in this run, and they exhibited similar behaviour - shortened frame time, and much-shortened glDrawArrays calls in a few of the write-to-texture compositing stages (but not the same ones - these are at the start of the frame, the other was in the middle of the frame). The bright purple spike in the animation-advance thread is, I believe, a red herring - it wasn't noticeable in the few bad frames I have managed to capture so far, and corresponds to a local check in the thread as to what type of animation/render/picture I am doing.

I am using OpenGL purely as a 2D drawing system - all of my points have identical Z values, and I call glDisable(GL_DEPTH_TEST) when creating my renderer. I had a vague thought that the speeding up and the drawing problems might be some kind of depth-test issue.

I will try to rework my code to use VBOs and avoid glDrawArrays altogether.
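One preparatory step for the VBO route can be sketched without a GL context: packing vertex data into a direct, native-order FloatBuffer, which GLES on Android requires. The class name is mine; the GL upload calls are shown as comments because they need a live GLES 2.0 context.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch: pack client-side vertex data into a direct, native-order
// FloatBuffer, ready to be uploaded into a VBO once and reused each
// frame instead of being re-sent through client-side glDrawArrays.
public class VertexPacker {
    public static FloatBuffer pack(float[] vertices) {
        FloatBuffer fb = ByteBuffer
                .allocateDirect(vertices.length * 4) // 4 bytes per float
                .order(ByteOrder.nativeOrder())      // required by GLES
                .asFloatBuffer();
        fb.put(vertices);
        fb.position(0); // rewind so GL reads from the start
        return fb;
    }

    // Upload once on the GL thread (android.opengl.GLES20), e.g.:
    //   int[] ids = new int[1];
    //   GLES20.glGenBuffers(1, ids, 0);
    //   GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, ids[0]);
    //   GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
    //                       fb.capacity() * 4, fb, GLES20.GL_STATIC_DRAW);
}
```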





Two observations:

1) I changed my code so that framebuffer 0 was cleared to black at the start of a frame, and the alternate texture-backed framebuffer (doing render-to-texture, with framebuffer swapping) was cleared to red. The glitches I have described are always red - as though the render-to-texture had partially failed. The red area was always made up of square blobs, at a guess maybe 25 pixels on a side. Is this something to do with the tiling operation of the GPU?

2) The hang that locks up in eglSwapBuffers and can cause phones to reboot must be well known as a bug to Google, though I see no acknowledgement of it as such - I noticed in the log file: "system is reboot by waitforcondition issue".


Any suggestions on this "tiling" artefact?




calderwa 2011-10-16 17:06:35