Can I easily interpolate between animations?

Hi,



I’m using the latest release of the SDK and am trying to interpolate between animations. I’m having some difficulty and hoping someone might be able to shed some light. For instance, let’s assume I have two CPVRTModelPOD objects A and B (same model, different animations), and I want to render the resultant model of A at frame X and B at frame Y. I think this can be accomplished by interpolating the bone matrices intelligently.



The basic code to pull the bone world matrix is below (from the samples):



PVRTMat4 amBoneWorld = m_model.GetBoneWorldMatrix(Node, m_model.pNode[i32NodeID]);



This is great, but the result is a transformation that does not seem to be “interpolatable” - meaning, I think it needs to be decomposed rather expensively into each of its parts before I can blend between it and another animation. Simply slerping the quaternion extracted from the matrix, lerping the matrix’s translation, and then putting them back into a matrix doesn’t work well, as there is significant scaling I’m missing (although perhaps this should work…?).
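
For reference, the kind of decomposition I mean looks roughly like this (plain C++ with minimal made-up types; I haven’t mapped it onto specific PVRT helpers, so treat it as a sketch):

#include <cmath>

// Minimal hypothetical types - the real code would use PVRTMat4 etc.
struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };
struct TRS  { Vec3 t; Quat r; Vec3 s; };

// Decompose a column-major affine 4x4 (no shear assumed) into
// translation, rotation and scale. The scale is the length of each
// basis column and must be divided out BEFORE the 3x3 block is turned
// into a quaternion - skipping that step is what makes a naive
// matrix->quat slerp break down on scaled skeletons.
TRS Decompose(const float m[16])
{
    TRS out;
    out.t = { m[12], m[13], m[14] };
    out.s.x = std::sqrt(m[0]*m[0] + m[1]*m[1] + m[2]*m[2]);
    out.s.y = std::sqrt(m[4]*m[4] + m[5]*m[5] + m[6]*m[6]);
    out.s.z = std::sqrt(m[8]*m[8] + m[9]*m[9] + m[10]*m[10]);

    // Rotation part with the scale removed, indexed R[row][col].
    float R[3][3] = {
        { m[0]/out.s.x, m[4]/out.s.y, m[8]/out.s.z  },
        { m[1]/out.s.x, m[5]/out.s.y, m[9]/out.s.z  },
        { m[2]/out.s.x, m[6]/out.s.y, m[10]/out.s.z },
    };

    // 3x3 -> quaternion (trace-positive branch only, for brevity;
    // a full implementation handles the other three branches as well).
    float w = std::sqrt(1.0f + R[0][0] + R[1][1] + R[2][2]) * 0.5f;
    float k = 0.25f / w;
    out.r = { (R[2][1] - R[1][2]) * k,
              (R[0][2] - R[2][0]) * k,
              (R[1][0] - R[0][1]) * k,
              w };
    return out;
}

// Blending would then lerp t and s, slerp r, and rebuild the matrix
// as T * R * S per bone - which is the expensive part I'd like to avoid.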



I’m now trying to pull out the scale/position/rotation matrices that are built internally at each step, but I find myself reinventing the wheel.



I’m just checking this forum to see if anyone thinks this should be a simple task, whether I should be able to do it myself, or whether it’s perhaps something that could be added to the APIs. Any thoughts are much appreciated!



Bob

Hello,



The great POD spec is missing an API for quaternion interpolation. Also, maybe I didn’t understand your post well, but isn’t interpolation between two rotations independent of scaling?



dgu

My issue is that currently the API I’m using to get the complete position+orientation+scale for the bone is this:



PVRTMat4 amBoneWorld = m_model.GetBoneWorldMatrix(Node, m_model.pNode[i32NodeID]);



This is a 4x4 matrix that is not something I can easily interpolate between. The APIs have lots of Lerp and Slerp functions built in to slerp quats and to lerp position vectors, but the bone matrix above is the result of combining the position and orientation of the given bone with those of all its parents. I haven’t found a simple way of decomposing this 4x4 matrix into its parts in a way that is interpolatable, because the three characteristics that make it up are too deeply intertwined.



Basically - I’m hoping someone points me at something I’m missing :) I can rewrite the above method to base it on the result of interpolating two or more animations; I mostly want to double-check whether this already exists, as it seems like it should be a common request (e.g. merging a walking animation with a falling one).



Thanks!

Hi,



As you’ve found, there isn’t currently support in the SDK for interpolating between two frames from different CPVRTModelPODs, or for interpolating between matrices.



If your POD data was exported as position, quaternion, and scale, you may be able to write your own implementation of GetBoneWorldMatrix that interpolates between the two frames from the different CPVRTModelPODs before they’re combined into their final matrix form.
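
As a rough sketch of what I mean (the helper functions here are hypothetical stand-ins for reading the SPODNode animation data, not existing SDK calls):

#include <cmath>

class CPVRTModelPOD; // from PVRTModelPOD.h

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };
struct TRS  { Vec3 t; Quat r; Vec3 s; };
struct Mat4 { float m[16]; };

// Hypothetical helpers - the real implementations would read the node's
// position/rotation/scale animation arrays and compose T * R * S the
// same way GetBoneWorldMatrix does internally.
TRS  GetLocalTRS(const CPVRTModelPOD& model, int nodeIdx, float frame);
int  ParentIndex(const CPVRTModelPOD& model, int nodeIdx);
Mat4 Compose(const TRS& trs);             // T * R * S
Mat4 Mul(const Mat4& a, const Mat4& b);

static Vec3 Lerp(const Vec3& a, const Vec3& b, float w)
{
    return { a.x + (b.x - a.x)*w, a.y + (b.y - a.y)*w, a.z + (b.z - a.z)*w };
}

// Normalised quaternion lerp: cheaper than a true slerp and usually fine
// for closely spaced poses. The sign flip keeps us on the short arc.
static Quat Nlerp(const Quat& a, const Quat& b, float w)
{
    float d = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    float s = (d < 0.0f) ? -1.0f : 1.0f;
    Quat q = { a.x + (s*b.x - a.x)*w, a.y + (s*b.y - a.y)*w,
               a.z + (s*b.z - a.z)*w, a.w + (s*b.w - a.w)*w };
    float len = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    return { q.x/len, q.y/len, q.z/len, q.w/len };
}

// Blend each node's LOCAL transform first, then compose with the parents -
// the opposite order from decomposing the finished world matrix.
Mat4 BlendedBoneWorld(const CPVRTModelPOD& a, float frameA,
                      const CPVRTModelPOD& b, float frameB,
                      int nodeIdx, float w)
{
    TRS ta = GetLocalTRS(a, nodeIdx, frameA);
    TRS tb = GetLocalTRS(b, nodeIdx, frameB);
    TRS blended = { Lerp(ta.t, tb.t, w), Nlerp(ta.r, tb.r, w), Lerp(ta.s, tb.s, w) };

    Mat4 world = Compose(blended);
    int parent = ParentIndex(a, nodeIdx); // same skeleton in both PODs
    return (parent < 0) ? world
                        : Mul(BlendedBoneWorld(a, frameA, b, frameB, parent, w), world);
}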



If the POD data was exported as matrices, then I believe the only way to do a good job of interpolating between the two is, as you’ve said, to decompose, interpolate, and put them back together again.



Thanks,



Scott

I’m pretty sure the POD data is stored as the individual transformations - I’m assuming that because when I step into GetBoneWorldMatrix and it skips the cache, it goes into the individual methods to build the final transform.



I agree with your suggestion that I can build my own implementation. That’s probably the best way anyway, as I could see some application logic being necessary (e.g. perhaps you only interpolate the rotation/position of the endpoint bones, and/or weight things differently depending on how close a bone is to the root). I’ll give this a shot, and if I reach any success I’ll update this thread.



Thanks a lot for the response, I appreciate it!

Bob

Did you see the “how to use matrix palettes to animate a skinned character” training course in the sample demos?

There, mBoneWorld could be the interpolated matrix:

// Multiply the bone’s world matrix by the view matrix to put it in view space

PVRTMatrixMultiply(mBoneWorld, mBoneWorld, m_mView);



david

You can certainly blend POD-based animations - about three years ago I wrote a reasonably featured implementation for blending POD-based animations.



Obviously, you want your files exported with Vec3/Quaternion transforms - I didn’t even bother with blending anything else… that’s what Vec3/Quaternions are for in the first place.



As I remember there were a few gotchas, but once you get your hands on the data channels (which is easy - PODs are reasonably well documented) it should be pretty straightforward: you get transforms and you get timing information, and that’s all you need.



You don’t want to transform anything - blend in the original object space. Think of it as creating another ‘target’ channel which contains Vec3/Quaternions, blending your original POD channels into that channel, and then, only once you have your final animation in that ‘target’ channel, applying it to the object.
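
Roughly like this, as a sketch (all the names are hypothetical - the point is the order of operations):

#include <vector>

// Sketch of the "target channel" flow: sample each source animation at
// its own time, blend the raw pos/quaternion channels in object space,
// and only then turn the final pose into bone matrices.
struct BonePose { float pos[3]; float rot[4]; };  // rot is a quaternion
using Pose = std::vector<BonePose>;               // one entry per bone

Pose SamplePose(const std::vector<Pose>& frames, float time, float fps);
Pose BlendPoses(const Pose& a, const Pose& b, float w); // lerp pos, slerp rot
void ApplyPose(const Pose& finalPose);                  // matrices built here

void UpdateBlend(const std::vector<Pose>& walkFrames, float walkTime,
                 const std::vector<Pose>& fallFrames, float fallTime,
                 float blend, float fps)
{
    Pose walk   = SamplePose(walkFrames, walkTime, fps); // each clip keeps
    Pose fall   = SamplePose(fallFrames, fallTime, fps); // its own clock
    Pose target = BlendPoses(walk, fall, blend);         // the target channel
    ApplyPose(target);                                   // apply once, at the end
}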


@dgu - I think so; are you referring to the “Chameleon Man” demo? I based all of my code on that. I use the bone indices and the bone weights, pass them to the shader, then change the frame as I do for my static animations. All of that works great. I also found what looks to be an older demo using the GL extension GL_MATRIX_PALETTE_OES, but that doesn’t even use shaders and I don’t have a great understanding of what it’s doing… so I’m ignoring that. Let me know if I should look further into it, or if you think I’m on track.



@warmi - yes, this makes good sense. I’ve implemented a basic version already and I think it is working well. I basically reimplemented a few of the internal calls so they take a “model1” and “model2” (essentially frame1 and frame2) and build everything up incrementally. For instance, the methods recurse until they reach a root bone, then interpolate that bone. That result is then used for the subsequent bones, all the way out to the leaves.



It seems to work well, but I had one misunderstanding: I assumed that the base mesh exported into POD (I’m using the native POD exporter for Blender) would be the same for all of my animations, and that I could then blend between them. That doesn’t seem to be the case - the base mesh is basically the position of my character in frame 1. So if I export two PODs, one for “walking” and one for “falling”, and they don’t start from identical meshes, I cannot blend them. I think I can work around this by always making the first frame identical, but I’m assuming a lot there… or I could merge all the animations into one gigantic POD export. That might be the smartest thing.
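
If I go the single-export route, I’m thinking a small clip table like this hypothetical sketch would keep it manageable:

// Hypothetical clip table for a single merged POD export: each logical
// animation is just a frame range inside the one master animation track.
struct Clip { const char* name; int firstFrame; int numFrames; };

static const Clip kClips[] = {
    { "walk", 0,  30 },  // frame ranges are placeholders - match your export
    { "fall", 30, 20 },
};

// Map a (clip, local frame) pair to a frame in the merged POD.
float GlobalFrame(const Clip& c, float localFrame)
{
    return (float)c.firstFrame + localFrame; // caller keeps localFrame in range
}

// e.g. m_model.SetFrame(GlobalFrame(kClips[1], f)); // "fall" at local frame f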



Any comments on any of this appreciated! And thanks to all of you for the responses!

The problem with PODs (and the reason I eventually switched to another, custom format) is that they were designed to export individual scenes, and as such they attempt to mimic a typical 3D app scene - they include models, cameras, materials, and a single master animation track (which is what most 3D apps produce).



This is OK for a demo app etc., but it is not really how folks generally export assets for games.

Generally, you would want to export your model as a separate entity which describes the underlying geometry and perhaps materials, followed by some sort of animation-related file which stores the base rest pose, and lastly a collection of animation files which contain the individual animations.

These animations are stored not as absolute animations (as in the case of PODs) but as offsets from the rest pose, and as such can be grouped around the base pose rather than the model itself. This way you can have an animation library which is independent of models: as long as a model is compatible with a given skeleton file (the rest pose), you can pretty much apply every animation from that library to it.
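
As a sketch of the offset idea (hypothetical names, quaternion helpers left as declarations):

// Rest-pose-relative storage. For unit quaternions the inverse is just
// the conjugate.
struct Quat { float x, y, z, w; };

Quat Mul(const Quat& a, const Quat& b); // Hamilton product
Quat Inverse(const Quat& q);            // conjugate, for unit quaternions

// Export: store each frame as its delta from the rest pose, so the
// track no longer depends on any particular model.
Quat MakeOffset(const Quat& rest, const Quat& absolute)
{
    return Mul(Inverse(rest), absolute); // absolute == rest * offset
}

// Playback: any model sharing the skeleton supplies its own rest pose.
Quat ApplyOffset(const Quat& rest, const Quat& offset)
{
    return Mul(rest, offset);
}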

With this setup you can then easily support all sorts of scenarios - additive animations, animations shared across models, etc.



You can do it all with PODs as well, but you would need to start “hacking” - i.e. designate one POD file as your basic model file with a single frame of animation which describes the rest pose, and another set of POD files which contain animations for that model (i.e. the only thing you load from them is their animation tracks).

Frankly, you are better off treating POD files as data repositories: use them only to load data from files, and immediately copy their data into your own custom objects which then follow the format I described above - a separate class for models, animations, animation tracks, etc. This requires a lot of custom code, though.

Hello



I am referring to the Chameleon Man demo and the matrix palette training course:

:Title:

MatrixPalette



:ShortDescription:

This training course demonstrates how to use matrix palettes to animate a skinned character.



:FullDescription:

This training course demonstrates how to use matrix palettes to animate a skinned character loaded from a .pod file.




It comes with the OpenGL ES 1.1 training courses (if activated during the installation), but it is based on POD.





regards

david