Discussion:
Stuck on getting Framebuffer Objects (FBO) to work for me
E. Wing
2007-01-10 04:54:27 UTC
I'm trying to render to an image that I can then do things with. My
most immediate desire is to throw the image at a printer (or Apple's
built-in PDF writer), though I intend to reuse the code for other
things that require an image capture.

To do this, I got it into my head that the best way is to use a
Framebuffer Object. It seemed to be the standard, cross-platform
OpenGL way to do this, and it also lets me render the scene at an
optimal size that is not necessarily tied to the onscreen window size
(good if I need to adjust sizes for printing to paper or whatever).


So, being new to FBOs, I borrowed heavily from the osgprerender
example (which does work). But unlike osgprerender, I want to render
to an osg::Image and don't need to reuse the image in a texture
within the scene. So far I am only getting a blank osg::Image when I
try, so I probably misunderstand how to set everything up.

Can anybody tell me where I'm going wrong? Below is an excerpt of my code.

Thanks,
Eric

// In my draw routine
{
    osg::Camera* the_camera = simpleViewer->getCamera();
    the_camera->setRenderOrder(osg::Camera::PRE_RENDER);
    the_camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

    osg::ref_ptr<osg::Image> osg_image = new osg::Image;
    osg_image->allocateImage(viewport_width, viewport_height, 1,
                             GL_RGBA, GL_UNSIGNED_BYTE);

    // Attach the image so it's copied on each frame.
    the_camera->attach(osg::Camera::COLOR_BUFFER, osg_image.get());
    // I tried with a callback and without; same result.
    the_camera->setPostDrawCallback(new MyCameraPostDrawCallback(osg_image.get()));

    simpleViewer->frame();
    [[self openGLContext] flushBuffer];
    // Need to block to make sure the image gets rendered so I can use it?
    glFinish();

    // Write the image here.
    osgDB::writeImageFile(*osg_image, "/tmp/MyWriteInDraw.png");

    // Restore normal rendering.
    the_camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER);
    the_camera->detach(osg::Camera::COLOR_BUFFER);
}


// My callback
class MyCameraPostDrawCallback : public osg::Camera::DrawCallback
{
public:
    MyCameraPostDrawCallback(osg::Image* image) :
        _image(image)
    {
    }

    virtual void operator () (const osg::Camera& /*camera*/) const
    {
        if (_image && _image->getPixelFormat() == GL_RGBA &&
            _image->getDataType() == GL_UNSIGNED_BYTE)
        {
            osgDB::writeImageFile(*_image, "/tmp/MyWriteImageInCallback.png");
        }
    }

    osg::Image* _image;
};
Robert Osfield
2007-01-10 09:06:36 UTC
Hi Eric,

I haven't tried making an FBO camera the main camera of a
SimpleViewer/Viewer yet, so I suspect this is what is causing the
problem - osgprerender embeds the camera into the scene.
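
From memory, the relevant part of osgprerender boils down to roughly
this (simplified; width, height, subgraph and root are placeholders):

osg::ref_ptr<osg::Image> image = new osg::Image;
image->allocateImage(width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE);

osg::ref_ptr<osg::Camera> rtt_camera = new osg::Camera;
rtt_camera->setRenderOrder(osg::Camera::PRE_RENDER);
rtt_camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
rtt_camera->setViewport(0, 0, width, height);
// attaching the image means the colour buffer gets copied into it each frame
rtt_camera->attach(osg::Camera::COLOR_BUFFER, image.get());
rtt_camera->addChild(subgraph.get());

// the pre-render camera sits in the scene graph under the viewer's main camera
root->addChild(rtt_camera.get());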

I'll put it on my TODO list to wire up the topmost Camera with its
FBO support.

Robert.
E. Wing
2007-01-10 11:05:37 UTC
Post by Robert Osfield
Hi Eric,
I haven't tried making an FBO camera the main camera of a
SimpleViewer/Viewer yet, so I suspect this is what is causing the
problem - osgprerender embeds the camera into the scene.
I'll put it on my TODO list to wire up the topmost Camera with its
FBO support.
Robert.
So just to see if I didn't mess anything else up and can get an image,
should I insert a temporary camera node into the scene under the root
node and do what I did? (And then remove the camera at the end of the
function.)

Thanks,
Eric
Robert Osfield
2007-01-10 14:04:44 UTC
Hi Eric,
Post by E. Wing
So just to see if I didn't mess anything else up and can get an image,
should I insert a temporary camera node into the scene under the root
node and do what I did? (And then remove the camera at the end of the
function.)
You could try it.

With osgViewer I'd like to be able to use master and slave Cameras to
render to pbuffers & FBOs if they are set up to do this. Pbuffers
should just need a GraphicsContext implementation for pbuffers on
each of the main platforms. However, FBOs will require modifications
to SceneView so that it sets things up for FBOs in the same way that
the CullVisitor is able to. SceneView is used as an implementation
detail inside the osgViewer viewers, so it still plays a role in
delivering the end-user functionality.

Robert.
E. Wing
2007-01-11 04:41:14 UTC
Post by Robert Osfield
Post by E. Wing
So just to see if I didn't mess anything else up and can get an image,
should I insert a temporary camera node into the scene under the root
node and do what I did? (And then remove the camera at the end of the
function.)
You could try it.
So here's the new setup code in my draw routine.

osg::Camera* the_camera = new osg::Camera(*simpleViewer->getCamera());
osg::ref_ptr<osg::Node> root_node = simpleViewer->getSceneData();
simpleViewer->setSceneData(the_camera);
the_camera->addChild(root_node.get());

1) Create a new camera. I use the copy option to carry over the settings.

2) Save the old root node

3) Make the new camera the new root node.

4) Add the old root_node as a child of the new camera.

5) Do the same code I posted before to render to an image

6) Clean up by setting the old root node back to root (remove the new camera)
simpleViewer->setSceneData(root_node.get());
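
To put that in one place, the draw routine now looks roughly like
this (condensed; osg_image and the rest of the FBO setup are the same
as in my previous post):

osg::Camera* the_camera = new osg::Camera(*simpleViewer->getCamera());
the_camera->setRenderOrder(osg::Camera::PRE_RENDER);
the_camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
the_camera->attach(osg::Camera::COLOR_BUFFER, osg_image.get());

osg::ref_ptr<osg::Node> root_node = simpleViewer->getSceneData();
the_camera->addChild(root_node.get());
simpleViewer->setSceneData(the_camera);

simpleViewer->frame();
osgDB::writeImageFile(*osg_image, "/tmp/MyWriteInDraw.png");

// restore the original scene
simpleViewer->setSceneData(root_node.get());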

When I run this, I can now capture an image containing the OpenGL
clear color (solid blue). But I don't see the loaded model I used to
have. (If I don't clean up, then my onscreen rendering also loses the
loaded model and all I get is a solid blue scene.)

Any last things I can try to get the model captured?

Thanks,
Eric
Robert Osfield
2007-01-11 09:17:50 UTC
Hi Eric,

I'm afraid there are too many steps for me to follow what the final
code might end up looking like. Could you please just take an OSG
example, modify it to do what you want, and send the whole code
segment?

Robert.
Stephan Maximilian Huber
2007-01-11 10:05:13 UTC
Hi Eric,
Post by E. Wing
Any last things I can try to get the model captured?
Hmm. I did similar things in a project (capturing part of the scene
into a differently sized image with some extra decoration). My steps:

1. create a new cameranode, and configure it
2. add the part of the scene I want to render into an image to a group
3. add that group to the cameranode
4. add the cameranode to the root-scene
5. call osg's renderloop
6. get the image from the cameranode, remove cameranode from root-scene

This worked for me without a hitch. Perhaps you have to configure
your new cameranode manually, without copying from your existing
camera.
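
Roughly, in code (sketched from memory; the identifiers are made up):

// 1. create a new cameranode and configure it
osg::ref_ptr<osg::Camera> capture_camera = new osg::Camera;
capture_camera->setRenderOrder(osg::Camera::PRE_RENDER);
capture_camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
capture_camera->setViewport(0, 0, image_width, image_height);
capture_camera->attach(osg::Camera::COLOR_BUFFER, image.get());

// 2./3. put the part of the scene to capture into a group under the camera
osg::ref_ptr<osg::Group> capture_group = new osg::Group;
capture_group->addChild(subgraph_to_capture.get());
capture_camera->addChild(capture_group.get());

// 4./5. add the camera to the root scene and run osg's renderloop
root_scene->addChild(capture_camera.get());
viewer->frame();

// 6. image now holds the rendered pixels; remove the camera again
root_scene->removeChild(capture_camera.get());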

HTH,
Stephan
E. Wing
2007-01-12 02:27:49 UTC
Post by Stephan Maximilian Huber
Hmm. I did similar things in a project (capturing part of the scene
into a differently sized image with some extra decoration). My steps:
1. create a new cameranode, and configure it
2. add the part of the scene I want to render into an image to a group
3. add that group to the cameranode
4. add the cameranode to the root-scene
5. call osg's renderloop
6. get the image from the cameranode, remove cameranode from root-scene
this worked for me without a hitch. Perhaps you have to configure your
new cameranode manually, without copying from your existing camera.
Yeah, ultimately I might want to modify my print output so it is a
little bit different from the regular view.

So I tried not copying the camera, as you suggested, and setting up
the attributes separately. This worked except for one item that I
don't understand. Maybe somebody can explain it.

I copied the root_camera options one by one to my new camera. But it
turns out that I have to use a different setting for
setReferenceFrame(): I must set it to osg::Transform::ABSOLUTE_RF
(the simpleViewer camera defaults to RELATIVE_RF). If I don't set
this value, I continue to get my plain blue screen. (I suspect the
camera is looking in the wrong direction.) So what's going on here,
and why aren't these values the same?

Here's the code snippet for reference:
osg::Camera* root_camera = simpleViewer->getCamera();
osg::Camera* the_camera = new osg::Camera;
the_camera->setClearColor(root_camera->getClearColor());
the_camera->setClearMask(root_camera->getClearMask());
the_camera->setProjectionMatrix(root_camera->getProjectionMatrix());
the_camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
// Copying the root camera's RELATIVE_RF setting gives me the blank blue image:
// the_camera->setReferenceFrame(root_camera->getReferenceFrame());
the_camera->setViewMatrix(root_camera->getViewMatrix());
the_camera->setViewport(root_camera->getViewport());

Thanks,
Eric
Stephan Maximilian Huber
2007-01-12 10:14:58 UTC
Hi Eric,
Post by E. Wing
I must set it to use osg::Transform::ABSOLUTE_RF.
(The simpleViewer camera is default set to RELATIVE_RF.) If I don't
set this value, I continue to get my plain blue screen.
This makes sense to me. IMHO, if you use RELATIVE_RF the root
camera's projection matrix is multiplied with your temporary camera's
projection matrix.
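
In other words (my understanding of what happens during the cull
traversal, roughly):

// RELATIVE_RF : the temporary camera's view/projection matrices are
//               composed (multiplied) with the ones inherited from the
//               root camera
// ABSOLUTE_RF : the temporary camera's matrices are used as-is and
//               replace the inherited ones
the_camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);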


Stephan
E. Wing
2007-01-12 02:36:18 UTC
One other thing. This might be an OS X issue, but I'll ask just in case.
I noticed that when I add this new Camera node and render to
offscreen, I lose the image in my onscreen view for a moment. I know
if I don't undo the camera node insertion, then I will continue to get
a plain blue screen.

So I'm thinking that something is being rendered to the screen, even
though it shouldn't be. Have you experienced anything like this? I'm
also wondering if there is a way I can get the FBO snapshot without
having to modify my scene graph. It seems a little strange to me to
modify the scene graph for this.

Thanks,
Eric
Robert Osfield
2007-01-12 09:14:01 UTC
Hi Eric,

I doubt it's an OS X specific issue. As to what it could be, I can't
really guess - I can't build a mental model of what your app looks
like now, so I don't have any chance of guessing what might be up.

All I can say is that I want to support taking snapshots with
osgViewer; the three ways I can see are:

1) Grab the frame buffer contents, just like
osgProducer::ViewerEventHandler does right now, using a camera post
draw callback.

2) Create a temporary Camera to render the scene using an existing
graphics context and an FBO, or simply use the back buffer.

3) Create a temporary Camera to render the scene using a pbuffer
graphics context.
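
For 1) the callback would be something along these lines (a sketch
from memory, not the exact ViewerEventHandler code; assumes the usual
osg/Camera, osg/Image and osgDB/WriteFile headers):

class ScreenCaptureCallback : public osg::Camera::DrawCallback
{
public:
    ScreenCaptureCallback(osg::Image* image) : _image(image) {}

    virtual void operator () (const osg::Camera& camera) const
    {
        // read back the buffer the camera has just rendered into
        int width  = static_cast<int>(camera.getViewport()->width());
        int height = static_cast<int>(camera.getViewport()->height());
        _image->readPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE);
        osgDB::writeImageFile(*_image, "/tmp/snapshot.png");
    }

    osg::ref_ptr<osg::Image> _image;
};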

As part of the work porting the osgviewer application I'd like to
implement a screen grab function; when I do this I'll have a play
with the above options. Perhaps this work will provide some guidance
for you.

Robert.
Stephan Maximilian Huber
2007-01-12 10:19:55 UTC
Post by E. Wing
One other thing. This might be an OS X issue, but I'll ask just in case.
I noticed that when I add this new Camera node and render to
offscreen, I lose the image in my onscreen view for a moment. I know
if I don't undo the camera node insertion, then I will continue to get
a plain blue screen.
So I'm thinking that something is being rendered to the screen, even
though it shouldn't be. Have you experienced anything like this?
What render-target implementation do you specify? I've seen this too,
but choosing another implementation (PIXEL_BUFFER_RTT) did the trick
for me.
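
i.e. something like (assuming your camera variable is still called
the_camera):

the_camera->setRenderTargetImplementation(osg::Camera::PIXEL_BUFFER_RTT);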

Stephan
E. Wing
2007-01-15 19:34:17 UTC
Post by Robert Osfield
I doubt it's an OS X specific issue. As to what it could be, I can't
really guess - I can't build a mental model of what your app looks
like now, so I don't have any chance of guessing what might be up.
Post by Stephan Maximilian Huber
What render-target implementation do you specify? I've seen this too,
but choosing another implementation (PIXEL_BUFFER_RTT) did the trick
for me.
Thanks for the replies. I think it is an OS X specific issue after
all. It turns out that if I remove my call to
[[self openGLContext] flushBuffer]; then the scene no longer gets
rendered to the screen and everything works as I expect. I presume I
should be calling glFlush() or glFinish() (not sure which) in its
absence, but so far it actually draws correctly without either.

Thanks,
Eric
E. Wing
2007-01-15 19:46:00 UTC
Post by Stephan Maximilian Huber
Post by E. Wing
I must set it to use osg::Transform::ABSOLUTE_RF.
(The simpleViewer camera is default set to RELATIVE_RF.) If I don't
set this value, I continue to get my plain blue screen.
This makes sense to me. IMHO if you use RELATIVE_RF the
root-camera-projection matrix is multiplied with your temporary camera's
projection matrix.
Right. I forgot that multiple camera nodes in my graph produce a
cumulative effect rather than overriding the parent camera's
settings.

Thanks,
Eric
