Default viewport vs eglQuerySurface

I’m wondering how the default screen coordinates are determined in GLES2.

I built the GLES2 HelloTriangle example and it runs fine. When I query the viewport with glGetIntegerv(GL_VIEWPORT, ...) or eglQuerySurface(), I get 640x480 on an OMAP3530 EVM. In the HelloTriangle example the vertex coordinates are roughly -0.4 to 0.4, yet the triangle takes up about a third of the screen. I also noticed that the WINDOW_WIDTH #define in OGLES2HelloTriangle_NullWS.cpp is unused.

If I add a scaling matrix to my vertex shader, I can get closer to the viewport dimensions I expect, but I’d like to know the right way to get the current coordinate system.  It seems redundant, but would a call to glViewport() with the dimensions returned from eglQuerySurface() do the trick?
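In case it helps, here is a minimal sketch of that idea: query the surface size from EGL and pass it straight to glViewport(). The function name and the display/surface parameters are my own; it assumes you already have the EGLDisplay and EGLSurface that the demo framework created.

```c
#include <EGL/egl.h>
#include <GLES2/gl2.h>

/* Hypothetical helper: make the GL viewport match the EGL surface.
   'display' and 'surface' are assumed to come from the app's EGL setup. */
static void match_viewport_to_surface(EGLDisplay display, EGLSurface surface)
{
    EGLint width = 0, height = 0;

    /* Ask EGL for the surface dimensions in pixels. */
    eglQuerySurface(display, surface, EGL_WIDTH, &width);
    eglQuerySurface(display, surface, EGL_HEIGHT, &height);

    /* Map the full NDC range (-1..1) onto the whole surface. */
    glViewport(0, 0, width, height);
}
```

Note that this doesn't change the coordinate system your vertex shader outputs into; clip space is still -W..W regardless of the viewport size.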


The output of the vertex shader in OpenGL is not given in screen coordinates but in homogeneous clip space. This is essentially a cube ranging from -W to W in the X, Y and Z dimensions. After clipping, the coordinates get divided by W. Then the viewport transformation is performed to map the normalized device coordinates, ranging from -1 to 1, to the screen surface, which is 640x480 pixels in your case.

If you want to feed 2D pixel coordinates to your vertex shader, you need to divide those by half the screen pixel dimensions, then subtract 1. That way (0, 0) maps to (-1, -1) and (640, 480) maps to (1, 1). Note that in OpenGL (-1, -1) is the lower left corner of the screen.

attribute vec2 pixelCoords;
void main()
{
    gl_Position.xy = pixelCoords * vec2(1.0 / 320.0, 1.0 / 240.0) - 1.0;
    gl_Position.zw = vec2(0.0, 1.0);
}
Xmas, 2009-08-19 17:46:41