The PVRTexLib API has changed significantly from 2.09 to 2.10. The documentation and headers do not make it clear whether, under the new API, it is possible to encode from uncompressed input (raw RGBA8888 bytes in memory, say) to compressed output.
Can someone clarify?
The way to use uncompressed data in PVRTexLib is the same as before, with a few alterations. As before, you set up a CPVRTexture with the uncompressed data in RGBA8888 format, and then pass it to the “Transcode” function with the required compression format. Section 4.2 in the documentation shows how to do this with a file path; if you already have the data in memory, simply use a different texture constructor. The example also includes the “GenerateMIPMaps” function, which is a pre-process method and can be ignored.
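A rough sketch of that flow, for concreteness. CPVRTexture, Transcode and PVRStandard8PixelType are named in this thread; the CPVRTextureHeader constructor arguments, the ePVRTPF_PVRTCI_4bpp_RGBA format value and saveFile are my assumptions, so check them against your headers:

```cpp
#include "PVRTexture.h"
#include "PVRTextureUtilities.h"

using namespace pvrtexture;

// Describe the raw input: 512x512 RGBA8888, single surface/face/mip level.
// (Constructor argument order here is an assumption; verify against PVRTexture.h.)
CPVRTextureHeader header(PVRStandard8PixelType.PixelTypeID,
                         512 /* height */, 512 /* width */);

// Wrap the raw bytes you already have in memory.
CPVRTexture texture(header, raw_rgba8888_pixels);

// Compress in place to PVRTC1 4bpp.
bool ok = Transcode(texture,
                    ePVRTPF_PVRTCI_4bpp_RGBA,
                    ePVRTVarTypeUnsignedByteNorm,
                    ePVRTCSpacelRGB);
if (ok)
    texture.saveFile("output.pvr");
```

The point is simply that the uncompressed-to-compressed path goes through an in-memory CPVRTexture rather than requiring a file on disk.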
Actually, I think we are getting our brains wrapped around the changes a little better here. While we do not see correct output in all cases (still under investigation), we are beginning to see some textures that look correct after changing to the new API.
Glad you’re figuring out the changes. It’s always a little difficult transitioning between interfaces, unfortunately, so I’m expecting a few teething problems! If you have any more questions, please feel free to ask. Equally, if you find anything unclear or lacking in functionality, let me know.
Okay, what we are seeing here is that if we supply a top-level raw RGBA8 image for compression as PVRTC1, only the top mip levels are valid in the output. The file sizes (as emitted by saveFileLegacyPVR) look correct.
What are the criteria for using GenerateMIPMaps? My suspicion is that it is not processing our input. If the number of mipmaps isn’t specified in the header before CPVRTexture creation, we see no additional mip levels, and calling GenerateMIPMaps seems to have no effect.
In the original API, the source texture had its own properties and then you specified a target header to ProcessRawPVR. I do not think I understand what the analog is here in the new form.
Could you share a code snippet so I can see what it is that you’re doing? Also, are you checking what GenerateMIPMaps is returning? If it returns false then it’s not processing; if it returns true then that’s a bit more worrying.
Most importantly, how are you specifying the texture’s format? The pre-processor functions will only work on uncompressed images in the formats that PVRTexLib expects (RGBA 8 8 8 8, RGBA 16 16 16 16 or RGBA 32 32 32 32, with pixel types of unsigned byte norm, unsigned short norm or unsigned integer norm respectively; 32-bit float is also valid).
…and just to be quite specific:
using namespace pvrtexture;
PVRTextureHeaderV3 pvrHeader;
pvrHeader.u32Version = PVRTEX_CURR_IDENT;
pvrHeader.u32Flags = 0;
pvrHeader.u64PixelFormat = PVRStandard8PixelType.PixelTypeID;
pvrHeader.u32ColourSpace = ePVRTCSpacelRGB;
pvrHeader.u32ChannelType = ePVRTVarTypeUnsignedByte;
pvrHeader.u32Height = 1024;
pvrHeader.u32Width = 1024;
pvrHeader.u32Depth = 1;
pvrHeader.u32NumSurfaces = 1;
pvrHeader.u32NumFaces = 1;
pvrHeader.u32MIPMapCount = 1;
pvrHeader.u32MetaDataSize = 0;
CPVRTexture cTexture( pvrHeader, raw_rgba8_input_data_for_top_level_only );
Here, the GenerateMIPMaps() call returns false to us, and we see a perfectly valid single-level .pvr file emitted via saveFileLegacyPVR().
The only way we appear to be able to get a PVR file to output with multiple levels is to specify the mip count up front, before cTexture is created. But this seems to us like lying about the source input: we are supplying a single-level source and asking the library to generate the mip levels for us.
Yes, the problem is that you are using ePVRTVarTypeUnsignedByte. If you switch this to ePVRTVarTypeUnsignedByteNorm instead, it will work correctly. I will try to make this clearer in future, although I plan to allow any uncompressed format to work in pre-processing at some point, which would make this moot.
In the case of compressed formats (so far), the variable type makes no difference to how they’re compressed or read, as they all define specific types within their codecs. Typically I consider unsigned byte norm to be the default, though, as this is what they’re largely read back as.
Regardless, for consistency it probably should be unsigned byte norm.
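Concretely, against the snippet above, the fix is the single channel-type line; everything else stays as posted. The eResizeLinear filter argument in the follow-up check is my assumption; verify it against your copy of the headers:

```cpp
// The one-line fix: a *norm* channel type, as required by the pre-processors.
pvrHeader.u32ChannelType = ePVRTVarTypeUnsignedByteNorm;

// ...construct cTexture as before, then keep checking the return value
// (filter-mode argument assumed here):
if (!GenerateMIPMaps(cTexture, eResizeLinear))
{
    // still not processing - worth reporting back to the thread
}
```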
I’ve also just noticed that the code misses an “e” off the front of the colour space enum, which is an error…
I’ll get the docs updated to fix these issues for the next release.