Position/size for shapes imported from files

Hi,


I have a Blender file in which I've defined an airplane. I need to use it in an iPhone application with OpenGL, so I'm using the following process:


- I prepare the airplane in Blender (2.49b)


- I export the airplane in COLLADA 1.4 format


- I use PowerVR's Collada2POD tool to convert the COLLADA file into a POD file


- I use the PowerVR API in my application to load the POD file





Now I have the following problem:


- the position of the airplane in the Blender file is the origin (0, 0, 0), and the camera is on the y-axis


- in the iPhone app I would like to position the airplane attached to the left border, so I set the y of the viewport origin to 0 (OpenGL coordinates start at the bottom-left corner)


- when I show the view in the iPhone app, the airplane isn't attached to the left border; there is some space.





I've added a second camera in Blender, which I use for the export, with the lens set to 45.0 to make the airplane bigger, but the problem remains (reduced, but it remains).





Any suggestions?


More generally, I wonder how the problem of positions and sizes for shapes imported from external files is usually handled in OpenGL.





Thanks.





Jean


Hi Jean,





I think you may have misunderstood how matrix transformations are performed in OpenGL and how camera positions relate to the positions of your models. The process of transforming geometry to render it on the screen is as follows (a rough code sketch follows the list):





World/Model Matrix (world space) (transforms your model from its local coordinate space, as used in your modelling application, to a position in your application's world; this step also includes scaling and rotation)


-> View Matrix (view space) (this matrix represents your camera's position and orientation; all geometry is transformed by it so that the geometry's position relates to what the camera should see)


-> Projection Matrix (projection space) (either orthographic or perspective; this determines whether objects further from the camera appear smaller, based on the camera's FoV (field of view))


-> Screen space (this is where the positions of your objects, as seen by the camera, are mapped to a 2D coordinate space to be used during rasterization)
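
For example, in fixed-function OpenGL ES 1.1 the stages map onto the GL calls roughly like this. This is only a sketch, not code from our SDK; planeX/planeHeading/planeScale and the 320x480 viewport are placeholder values.

#include <OpenGLES/ES1/gl.h>
#include <math.h>

/* Sketch only. The plane* parameters are placeholders describing where the
   model should sit in world space. */
static void setup_transforms(float planeX, float planeY, float planeZ,
                             float planeHeading, float planeScale)
{
    /* Projection matrix (projection space): 45-degree vertical field of view */
    const float fovY   = 45.0f * (float)M_PI / 180.0f;
    const float aspect = 320.0f / 480.0f;
    const float zNear  = 0.1f, zFar = 100.0f;
    const float top    = zNear * tanf(fovY * 0.5f);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-top * aspect, top * aspect, -top, top, zNear, zFar);

    /* View and model matrices share the MODELVIEW stack in fixed-function GL;
       transforms apply to vertices right to left (model first, then view). */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -10.0f);              /* view: equivalent to a camera 10 units back */
    glTranslatef(planeX, planeY, planeZ);          /* model: position in world space             */
    glRotatef(planeHeading, 0.0f, 1.0f, 0.0f);     /* model: orientation                         */
    glScalef(planeScale, planeScale, planeScale);  /* model: size                                */

    /* Screen space: the viewport maps normalised device coordinates to window pixels */
    glViewport(0, 0, 320, 480);
}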





Due to all of the transformations that are performed, the size of your objects and scene is completely down to your own implementation (I personally use meters as a unit of measurement to make it easier to compare the size of objects and distance in a given scene). Unfortunately, we do not currently have any material in our SDK that outlines transformations in OpenGL, but you should be able to find freely available training courses online that will introduce you to this.

Joe

Hi,


Thanks for the help, I think I’ve understood my mistake.


Up to now, in my iPhone applications I've built simple scenes with only parametric shapes generated in my code, and the steps I've followed are these (a simplified code sketch follows the list):


- define the shape with a vertex buffer and an index buffer


- update the view continuously, re-executing these steps:


- calculate the position, size and orientation of the shapes and apply them to the modelview matrix


- apply a fixed projection with the projection matrix


- render
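
The one-time buffer setup, for example, looks more or less like this (vertices, indices and the counts stand in for my real parametric-shape data):

#include <OpenGLES/ES1/gl.h>

/* Placeholder geometry for a parametric shape generated in code. */
extern const GLfloat  vertices[];   /* x, y, z per vertex */
extern const GLushort indices[];    /* triangle list      */
extern GLsizei vertexCount, indexCount;

static GLuint vbo, ibo;

static void create_buffers(void)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat),
                 vertices, GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLushort),
                 indices, GL_STATIC_DRAW);
}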





Now I want to use shapes defined in 3D modelling software, Blender, and import them into my applications. I'm using Collada2POD to produce POD files and importing them with OGLESTools.


A POD file includes a lot of information:


- meshes


- lights


- cameras





If I reuse your code example for POD files "as is", I'm using all the data from the POD file:


- meshes (including position, size and orientation)


- lights


- cameras


So if I want to define the position, size and orientation of the meshes from code, I must use only the following from the POD file:


- the vertex and index buffers


- textures/materials


Is that right? I imagine the drawing side would end up roughly like the sketch below.
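
(In the sketch, planeVbo, planeIbo and planeIndexCount are placeholder names for buffers filled from the POD's mesh data; I haven't checked the exact PVRTModelPOD structures, so treat it as pseudo-code.)

#include <OpenGLES/ES1/gl.h>

/* Placeholders: geometry pulled out of the POD, ignoring the node, light and
   camera transforms stored in the file. */
extern GLuint planeVbo, planeIbo;
extern GLsizei planeIndexCount;

static void draw_plane(float x, float y, float z, float heading, float scale)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();

    /* Position, size and orientation come from my application logic,
       not from the transforms exported in the POD file. */
    glTranslatef(x, y, z);
    glRotatef(heading, 0.0f, 1.0f, 0.0f);
    glScalef(scale, scale, scale);

    glBindBuffer(GL_ARRAY_BUFFER, planeVbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, planeIbo);
    glDrawElements(GL_TRIANGLES, planeIndexCount, GL_UNSIGNED_SHORT, 0);
    glDisableClientState(GL_VERTEX_ARRAY);

    glPopMatrix();
}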





Now a more general question: is my approach of using a 3D modeller only to draw shapes, and then calculating position, size and rotation in my application logic, correct and widespread?





I suppose that in 3D games it's the only way; are there other apps that use this approach?





Thanks.


Jean





Apart from very specific scenes, in general 3D meshes for models (characters, ships etc.) are created around an origin in “model space” inside a 3D design program. Then game logic or some overriding world format is used to position these in “world space” in an actual application.

In our demos we don’t really have game logic so we tend to design an entire scene with movement and animation within a POD and simply play it back. In a game you would control the position of each mesh separately or alternatively each entity could be a POD model of its own and be positioned independently.

I believe this is a common approach for almost all 3D rendered applications except for very simple scenes.
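
As a very rough illustration (not from our SDK; the Entity structure and draw_airplane_mesh function below are made up), game-logic positioning of each mesh tends to look something like this:

#include <OpenGLES/ES1/gl.h>

/* Hypothetical game-side data: one mesh, built around the origin in model
   space, reused for many independently positioned entities. */
typedef struct { float x, y, z, headingDeg; } Entity;

extern void draw_airplane_mesh(void);   /* placeholder: binds buffers and calls glDrawElements */

static void draw_world(const Entity *entities, int count)
{
    int i;
    for (i = 0; i < count; ++i)
    {
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();                                            /* keep the camera (view) part intact */
        glTranslatef(entities[i].x, entities[i].y, entities[i].z); /* model space -> world space          */
        glRotatef(entities[i].headingDeg, 0.0f, 1.0f, 0.0f);       /* world-space orientation             */
        draw_airplane_mesh();
        glPopMatrix();
    }
}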



Thanks for the help, it’s very useful.


Jean

Hi Joe,


What exactly do you mean by "I personally use meters as a unit of measurement to make it easier to compare the size of objects and distance in a given scene"?





Do you also use meters in the 3D modeller?


How do you use meters in OpenGL?


If you can suggest some material to study on this, I'd be really glad.





Thanks.


Jean

Sorry for the unclear explanation.


I meant to say that I use meters when modelling objects. Having a common unit of measurement makes it much easier to create objects that are correctly sized relative to each other (e.g. you can make sure a human model is smaller than a plane model).


With your models drawn in the same unit of measurement, you will not have to scale them in your application to get the correct relative sizes (e.g. you do not want to have to scale a human model and a plane model in your application just to ensure the human is smaller than the plane).
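
As an illustrative sketch only (all of the numbers are made up), if everything is authored in meters the application just translates objects into place and never needs to rescale them:

#include <OpenGLES/ES1/gl.h>

/* Everything below is expressed in meters. */
static void place_scene(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-0.05f, 0.05f, -0.075f, 0.075f, 0.1f, 1000.0f);  /* near plane 0.1 m, far plane 1 km */

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* The human was modelled ~1.8 m tall and the plane ~10 m long in Blender,
       so only translations are needed; their relative sizes are already right. */
    glPushMatrix();
    glTranslatef(0.0f, 0.0f, -5.0f);    /* human 5 m in front of the camera   */
    /* draw human mesh here */
    glPopMatrix();

    glPushMatrix();
    glTranslatef(20.0f, 0.0f, -80.0f);  /* plane 80 m away, 20 m to the right */
    /* draw plane mesh here */
    glPopMatrix();
}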