Diggory Hardy
2007-02-23 20:38:49 UTC
Hi,
I've been building my own project for a while, until recently using
OpenGL directly for rendering. I wrote my own quaternion class for
this, generating matrices to multiply onto the OpenGL modelview and
projection matrices, and it all worked properly.
I recently started integrating OSG into my project to replace the
direct use of OpenGL, and have yet to work out quite how to implement
all the transforms (currently I can see things, but in a very wrong
way). I read in the osg::Matrixd class source file that OSG uses
post-multiplication (why not use premultiplication like everyone else,
including OpenGL?), so I'm assuming the matrices should be such that v*M
is the transformed version of a vector v. After having worked out
roughly how to use cameras, etc., I have roughly the following structure
(where a -> b means b is a child of a, and a -># b means a has multiple
children like b):
osgUtil::SceneView -> osg::Group -># osg::CameraNode -> osg::Group (root
of the real graph; this is the same object for all cameranodes) ->#
osg::MatrixTransform -> osg::Node (this is an object loaded by osgDB)
The osg::MatrixTransform is set to R*T where R is a rotation matrix and
T a translation matrix (for the position/orientation of the object
attached as a child).
I noticed that for the osg::CameraNode, it's possible to set both the
modelview and the projection matrix. I've left the modelview matrix as
the identity, and set the projection matrix to:
object.invTransMatrix() * camera.invTransMatrix() *
osg::Matrixd::frustum(...)
where invTransMatrix() gives (-T)*(R^-1) [i.e. a negative translation to
move to the origin, then an inverse rotation], for the camera object
position [object], and the camera position relative to the object [camera].
I then render the scene via:
sceneView->setFrameStamp(...)
sceneView->update()
sceneView->cull()
sceneView->draw()
So could anyone point out what I've done wrong?
Some questions I have regarding this:
1. Will adding the cameranodes to the sceneview like this work properly,
or is there a better way? (The cameranodes each have a unique viewport
and all point to the same data.)
2. How are the matrices (for both the transforms and the
cameranodes/sceneview) handled? Are they just multiplied together in
some order (which?) and passed to OpenGL as the modelview/projection
matrices (in which case premultiplication is used for the final vertex
adjustment, right?)
In short, if anyone could point out what I've done wrong or answer the
questions above, I would be grateful.
Diggory Hardy