Roy Lichtenheldt
2018-10-05 09:52:24 UTC
Hi,
I wrote a simulation app using OpenGL compute shaders based on OSG (so one frame equals one simulation step). To run several simulations at once on one PC, I use separate X screens to address the different cards. For maximum performance, vsync is disabled; a minimal sketch of this setup follows below.
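For illustration, here is a minimal sketch of how such a per-screen viewer can be set up, assuming OSG's osg::GraphicsContext::Traits API; the display/screen numbers and window size are placeholders:

#include <osg/GraphicsContext>
#include <osgViewer/Viewer>

int main()
{
    // Target a specific X screen (":0.1"), i.e. one particular GPU.
    osg::ref_ptr<osg::GraphicsContext::Traits> traits =
        new osg::GraphicsContext::Traits;
    traits->displayNum = 0;          // X display ":0"  (illustrative)
    traits->screenNum  = 1;          // X screen  ".1"  (illustrative)
    traits->x = 0; traits->y = 0;
    traits->width = 1280; traits->height = 720;
    traits->windowDecoration = false;
    traits->doubleBuffer = true;
    traits->vsync = false;           // disable sync-to-vblank for max throughput
    // (With the NVIDIA driver, __GL_SYNC_TO_VBLANK=0 in the environment
    //  also disables vsync.)

    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext(traits.get());

    osgViewer::Viewer viewer;
    viewer.getCamera()->setGraphicsContext(gc.get());
    viewer.getCamera()->setViewport(
        new osg::Viewport(0, 0, traits->width, traits->height));
    viewer.realize();

    // One rendered frame equals one simulation step.
    while (!viewer.done())
        viewer.frame();

    return 0;
}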
This works fine so far, but if I run a simulation (an osg app) on each of the two cards, i.e. one per X screen (regardless of whether it renders to a framebuffer object or to the screen), the fps of each app drop by 50%, and their CPU and GPU load drops from ~100% to ~50% as well.
The NVIDIA driver's multithreading optimizations make no difference.
I mainly use NVIDIA GPUs (Pascal) on Linux with the X server.
How can I achieve 100% load on both GPUs and CPUs when running the two apps simultaneously?
cheers,
Roy
------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=75041#75041