Imagination PowerVR SDK Blog

Cube Map Reflections


#1

Hi,
I’ve been studying the Vase example in the SDK, which shows how to calculate a UV map for sphere reflection mapping. I’ve also looked at the equivalent in the Oolong engine. It’s a good example, but I’d like to try a cube map version.

This is the relevant code:

    // Calculate matrix for environment mapping: simple multiply by 0.5
    for(i = 0; i < 16; ++i)
        EnvMapMatrix.f[i] = VERTTYPEMUL(pNormalTx->f[i], f2vt(0.5f));

    unsigned char* pNormals = Mesh.pInterleaved + (size_t) Mesh.sNormals.pData;

    /* Calculate UVs for environment mapping */
    for(i = 0; i < Mesh.nNumVertex; ++i)
    {
        VERTTYPE *pVTNormals = (VERTTYPE*) pNormals;

        pUVs[2*i] =   VERTTYPEMUL(pVTNormals[0], EnvMapMatrix.f[0]) +
                      VERTTYPEMUL(pVTNormals[1], EnvMapMatrix.f[4]) +
                      VERTTYPEMUL(pVTNormals[2], EnvMapMatrix.f[8]) +
                      f2vt(0.5f);

        pUVs[2*i+1] = VERTTYPEMUL(pVTNormals[0], EnvMapMatrix.f[1]) +
                      VERTTYPEMUL(pVTNormals[1], EnvMapMatrix.f[5]) +
                      VERTTYPEMUL(pVTNormals[2], EnvMapMatrix.f[9]) +
                      f2vt(0.5f);

        pNormals += Mesh.sNormals.nStride;
    }

The platform in question is the pre-3GS (MBX) iphone so the luxury of glTexGen is absent! I’ve studied this page:

http://www.opengl.org/wiki/Mathematics_of_glTexGen

but can’t easily see how to adapt this code to a cube map, given that S, T and R co-ordinates would seem to require a 3D texture. I also spotted your demo on this page:

https://www.imgtec.com/powervr/insider/demos/showroom.asp

which looks like it might be using cube maps. Is there any chance that you could release some sample code, or a snippet that shows how you did it?

Would be very grateful!

Cheers,
Donovan.


#2

I’m afraid that’s not possible, since MBX does not support cube maps.





Cube maps do use 3D texture coordinates, by the way (as a direction vector pointing to one of the six sides of the cube).


#3

Hi Xmas,

thanks for the reply. Just to get my head straight, the MBX doesn’t support GL_SPHERE_MAP either, but the sample code in the above post is simulating it by calculating a sphere-map-like UV co-ordinate per vertex. I assumed something similar had been done for that showroom sample.

I just want to get some nice reflection and specular effects on to my .pod models!

Any suggestions?

Cheers,
Donovan.


#4
donovanhide wrote:
Just to get my head straight, the MBX doesn't support GL_SPHERE_MAP either, but the sample code in the above post is simulating it by calculating a sphere-map-like UV co-ordinate per vertex.

Automatic texture coordinate generation from vertex position and normal data (glTexGen) is one thing, support for texture sampling from cube maps is quite another. The former, while not present in OpenGL ES, can be easily done on the CPU, unlike the latter.



So even if you were to calculate the coordinates on the CPU according to GL_REFLECTION_MAP, you could not use these coordinates to sample a cube map texture on MBX. You'll have to use planar textures.



Quote:
I assumed something similar had been done for that showroom sample.

I just want to get some nice reflection and specular effects on to my .pod models!

Any suggestions?


Why not give the sphere map approach a try?

#5

Hi Xmas,

thanks again for the reply. I didn’t realise how complicated cube map sampling was: six different maps involved! I suppose it would be possible to come up with a scheme for sampling from a single square texture, with the maps laid out in a cross style:

0x00
xxxx
0x00

although the individual cube faces would have to be non-square. Not sure my maths is quite up to coming up with a formula though!

I’ve got sphere maps working for my model, but have an issue with stride alignment. I have bones exported in the pod as unsigned bytes, but this makes my stride equal to 41, which is fine on the iPhone simulator but causes a crash on the iPhone itself. Is there a way to pack the pod file, using the Maya exporter, to add an extra three bytes?

I tried adding a UVW channel with type byte and three ticks selected, but this didn’t seem to change the stride at all.

Cheers,
Donovan.



#6
donovanhide wrote:
I suppose it would be possible to come up with a scheme for sampling from a square texture with the maps laid out in a cross style:

That doesn't work because you'd have to "jump" to another location in the texture when you reach an unconnected edge.



Quote:
I have bones exported in the pod as unsigned bytes, but this makes my stride equal to 41, which is fine on the iPhone simulator but causes a crash on the iPhone itself. Is there a way to pack the pod file, using the Maya exporter, to add an extra three bytes?

Have you reported a bug to Apple? Unaligned data might reduce performance, but it certainly should not cause a crash. We have plans to add an option for aligning data to our exporters in the future.

#7

Hi Xmas,

I see what you mean about the sampling at the edges. Looks like cube maps are iPhone 3GS only then!

It's not an OpenGL crash, just an EXC_BAD_ACCESS crash, probably related to the pointer maths used in the Vase code sample:


        unsigned char* pNormals = Mesh.pInterleaved + (size_t) Mesh.sNormals.pData;

        /* Calculate UVs for environment mapping */
        for(j = 0; j < Mesh.nNumVertex; ++j)
        {
            VERTTYPE *pVTNormals = (VERTTYPE*) pNormals;

            m_pUVs[2*j] =   VERTTYPEMUL(pVTNormals[0], EnvMapMatrix.f[0]) +
                            VERTTYPEMUL(pVTNormals[1], EnvMapMatrix.f[4]) +
                            VERTTYPEMUL(pVTNormals[2], EnvMapMatrix.f[8]) +
                            f2vt(0.5f);

            m_pUVs[2*j+1] = VERTTYPEMUL(pVTNormals[0], EnvMapMatrix.f[1]) +
                            VERTTYPEMUL(pVTNormals[1], EnvMapMatrix.f[5]) +
                            VERTTYPEMUL(pVTNormals[2], EnvMapMatrix.f[9]) +
                            f2vt(0.5f);

            pNormals += Mesh.sNormals.nStride;
        }

Because Mesh.sNormals.nStride equals 41, the pVTNormals[1] access (and the others) fails on the second iteration. I know this is an ARM/Oolong/C/C++ issue and not directly an Imagination problem, but seeing as the Maya POD exporter is missing this functionality, do you know how to reshape the interleaved data into a straight array of UVs, given a stride which is not divisible by 4?

Cheers,
Donovan.







#8

An option for aligning data has been added to the POD exporters and Collada2POD so that this shouldn’t be an issue for the next release of the SDK.





When you tried to pad your data, did you add a second set of UVs to your mesh, or just have the 2nd set ticked in the exporter options? The exporter won’t output any values unless the channel actually exists in Maya. If you haven’t added it, try that first.





The only other way would be to post-process the POD data yourself before using it on the iPhone.