We’re seeing a problem with some recent shader optimizations: precision qualifiers on samplers (GLES 3.00) aren’t being stripped before the shader is passed to the underlying desktop GL implementation, where they are invalid (desktop GLSL allows precision qualifiers only on the int and float types).
NVIDIA GL drivers are permissive and accept them anyway, but ATI GL drivers rightly throw an error and fail the compile.
I’ll file a ticket for the fix, but is there a recommended workaround in the meantime? Removing the precision statement alone isn’t sufficient.
For instance, this GLES-SL 3.00 shader:
[pre]#version 300 es
precision mediump float;
precision lowp sampler2DArray;
uniform sampler2DArray aTexture;
…[/pre]
is translated as this GL-SL shader:
[pre]#version 330 core
#define PVR_GL_FRAGMENT_PRECISION_HIGH
#define PVR_GL_ES 1
#define gl_MaxTextureImageUnits 8
#define gl_MaxFragmentUniformVectors 64
#define gl_MaxVertexUniformVectors 128
#define gl_MaxVaryingVectors 8
#define gl_MaxCombinedTextureImageUnits 8
#define PVR_highp
#define PVR_mediump
#define PVR_lowp
precision lowp sampler2DArray; // <------ THIS is a syntax error -----
uniform sampler2DArray aTexture;
…[/pre]
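In case it helps anyone hitting the same thing: one stop-gap we’re considering (assuming you can preprocess the source string your app hands to glShaderSource, and that the compile failure is caused only by this statement) is to strip sampler precision statements conditionally, on the desktop/emulation path only, since on-device GLES 3.00 still requires a declared precision for sampler2DArray. The function and flag names below are hypothetical, just a sketch:

```python
import re

# Matches lines like "precision lowp sampler2DArray;" (any precision,
# any sampler type). Hypothetical stop-gap, not the translator's fix.
SAMPLER_PRECISION = re.compile(
    r'^\s*precision\s+(lowp|mediump|highp)\s+\w*sampler\w*\s*;\s*$',
    re.MULTILINE)

def strip_sampler_precision(source, on_desktop):
    """Remove 'precision <p> sampler*;' statements, but only when the
    shader is headed for a desktop GL context; on-device GLES keeps
    them, since sampler types there have no default precision."""
    if not on_desktop:
        return source
    return SAMPLER_PRECISION.sub('', source)
```

Alternatively, guarding the statement with `#ifdef GL_ES` in the GLES source might work, but that assumes the translator runs the preprocessor (and defines GL_ES) before translating; I haven’t verified that on the emulation path.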