glCompressedTexImage2D problems






Hi!
I have some problems with the glCompressedTexImage2D function:

When creating a 16x16 texture with an 8-bit palette

glCompressedTexImage2D(GL_TEXTURE_2D,0,GL_PALETTE8_R5_G6_B5_OES,16,16,0,256 * 2 + 16 * 16,m);

everything is OK, but when I use a 16x16 image with a 4-bit palette

glCompressedTexImage2D(GL_TEXTURE_2D,0,GL_PALETTE4_R5_G6_B5_OES,16,16,0,16 * 2 + 16 * 16, m);

where:
u8* m = (u8 *)malloc(64 * 64 * 4);

data_size = 2^PaletteBits * sizeof(u16) + width * height;



I get the GL error GL_INVALID_VALUE (0x0501).

Second:

Why does this code produce an error when glTexSubImage2D() is invoked?

u8* m = (u8 *)allocCustomHeap(128 * 128 * 4);
GLuint tex1, tex2, tex3;
GLint glerr;

glGenTextures(1, &tex1);
glBindTexture(GL_TEXTURE_2D, tex1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();

glBindTexture(GL_TEXTURE_2D, tex1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 10, 14, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();

glGenTextures(1, &tex2);
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, m);
glerr = glGetError();

glBindTexture(GL_TEXTURE_2D, tex1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 10, 14, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();

glGenTextures(1, &tex3);
glBindTexture(GL_TEXTURE_2D, tex3);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();

glBindTexture(GL_TEXTURE_2D, tex1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 10, 14, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();

but when all textures have the same type (GL_RGB and GL_UNSIGNED_BYTE), everything is OK.
SDK bug??
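
As a side note, a tiny helper along these lines (purely illustrative, not something from the SDK) makes the repeated glGetError() checks above easier to read while debugging:

#include <GLES/gl.h>

/* Illustrative helper: map a glGetError() code to a readable name. */
static const char *gl_error_name(GLenum err)
{
    switch (err) {
    case GL_NO_ERROR:          return "GL_NO_ERROR";
    case GL_INVALID_ENUM:      return "GL_INVALID_ENUM";
    case GL_INVALID_VALUE:     return "GL_INVALID_VALUE";
    case GL_INVALID_OPERATION: return "GL_INVALID_OPERATION";
    case GL_OUT_OF_MEMORY:     return "GL_OUT_OF_MEMORY";
    default:                   return "unknown GL error";
    }
}

/* usage: printf("%s\n", gl_error_name(glGetError())); */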


Please help, what am I doing wrong?


regards
jirzynek
jirzynek wrote:
Code:
glCompressedTexImage2D(GL_TEXTURE_2D,0,GL_PALETTE4_R5_G6_B5_OES,16,16,0,16 * 2 + 16 * 16, m);

The size should be (16 * 2) + (16 * 16 / 2), since you only use 4 bits per pixel, not 8.
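
In case it helps, here is a minimal sketch of that size calculation (the function name is mine, just for illustration); it covers the level-0 data of the 16-bit-per-entry paletted formats such as GL_PALETTE4_R5_G6_B5_OES and GL_PALETTE8_R5_G6_B5_OES:

#include <GLES/gl.h>

/* Sketch: image size for the level-0 data of OES_compressed_paletted_texture
 * formats with 16-bit palette entries (R5_G6_B5, RGBA4, RGB5_A1). */
static GLsizei paletted_image_size(GLsizei width, GLsizei height, int palette_bits)
{
    GLsizei palette_bytes = (1 << palette_bits) * 2;   /* 2^bits entries, 2 bytes each */
    GLsizei index_bytes   = (palette_bits == 4)
                          ? (width * height + 1) / 2   /* two 4-bit indices per byte */
                          : width * height;            /* one 8-bit index per byte */
    return palette_bytes + index_bytes;
}

/* paletted_image_size(16, 16, 8) -> 256 * 2 + 16 * 16     = 768
 * paletted_image_size(16, 16, 4) ->  16 * 2 + 16 * 16 / 2 = 160 */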



What error do you get in the second case?



Regards,

Georg



I get GL_INVALID_ENUM.


Check this code:

First case (no error):
u8* m = (u8 *)allocCustomHeap(128 * 128 * 4);
GLuint tex1, tex2, tex3;
GLint glerr;

glGenTextures(1, &tex1);
glBindTexture(GL_TEXTURE_2D, tex1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();

// NO ERROR!!!
glBindTexture(GL_TEXTURE_2D, tex1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 10, 14, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();


glGenTextures(1, &tex2);
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, m);
glerr = glGetError();

glGenTextures(1, &tex3);
glBindTexture(GL_TEXTURE_2D, tex3);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();


Second case (ERROR): I've only moved the first glTexSubImage2D call to after the glTexImage2D call for the second texture.

u8* m = (u8 *)allocCustomHeap(128 * 128 * 4);
GLuint tex1, tex2, tex3;
GLint glerr;

glGenTextures(1, &tex1);
glBindTexture(GL_TEXTURE_2D, tex1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();


glGenTextures(1, &tex2);
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, m);
glerr = glGetError();

// ERROR!!!
glBindTexture(GL_TEXTURE_2D, tex1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 10, 14, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();

glGenTextures(1, &tex3);
glBindTexture(GL_TEXTURE_2D, tex3);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();


Third case (NO ERROR): all textures have the same type (GL_UNSIGNED_BYTE):
u8* m = (u8 *)allocCustomHeap(128 * 128 * 4);
GLuint tex1, tex2, tex3;
GLint glerr;

glGenTextures(1, &tex1);
glBindTexture(GL_TEXTURE_2D, tex1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();


glGenTextures(1, &tex2);
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();

// NO ERROR!!!
glBindTexture(GL_TEXTURE_2D, tex1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 10, 14, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();

glGenTextures(1, &tex3);
glBindTexture(GL_TEXTURE_2D, tex3);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, m);
glerr = glGetError();


I think glTexSubImage2D is using texture parameters not from the currently bound texture but from the most recently created texture.
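
If that is what happens, a possible workaround (just my guess based on the first case above, not a confirmed fix) is to do the sub-upload immediately after the glTexImage2D call for the same texture, before any other texture is specified:

/* Hypothetical workaround: keep glTexSubImage2D right next to the
 * glTexImage2D call for the same texture, so no other texture is
 * specified in between. */
glGenTextures(1, &tex1);
glBindTexture(GL_TEXTURE_2D, tex1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, m);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 10, 14, GL_RGB, GL_UNSIGNED_BYTE, m);

glGenTextures(1, &tex2);
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, m);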

Right now I'm using PC Emulation.

cheers
j.


We’ll have a closer look at this. By the way, have you tried it with our latest SDK (2.3)?





Regards,


Georg

Xmas wrote:
We'll have a closer look at this. By the way, have you tried it with our latest SDK (2.3)?


Yes, the problem still exists...

cheers
j.

This issue should get looked at soon - apologies for the long delay.

Gordon

This issue has been fixed for our next release, due this summer.