Discussion:
anyone have experience with OSG on Intel HD Graphics 4000 hardware?
Michael Schanne
2013-02-05 18:20:00 UTC
Permalink
Hi,

Does anyone have experience with using OSG on the Intel HD Graphics 4000 hardware? I am having performance problems in my application with fairly simple scenes. I have several different models which are simple shapes like boxes, tubes, X's, etc. that are duplicated about 40 times each using MatrixTransforms. The bottleneck is in the Draw traversal. With this scene I am seeing a draw time of 55 ms on the HD Graphics, while on another PC with a Radeon X1550 card the draw time is 3.8 ms.
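
For concreteness, the scene is assembled roughly like this (a stripped-down sketch with placeholder shapes and counts, not my actual code):

#include <osg/Geode>
#include <osg/Group>
#include <osg/MatrixTransform>
#include <osg/Shape>
#include <osg/ShapeDrawable>

// One shared box Geode, instanced ~40 times via MatrixTransforms.
osg::ref_ptr<osg::Group> buildScene()
{
    osg::ref_ptr<osg::Geode> box = new osg::Geode;
    box->addDrawable(new osg::ShapeDrawable(
        new osg::Box(osg::Vec3(0.0f, 0.0f, 0.0f), 1.0f)));

    osg::ref_ptr<osg::Group> root = new osg::Group;
    for (int i = 0; i < 40; ++i)
    {
        osg::ref_ptr<osg::MatrixTransform> xform =
            new osg::MatrixTransform(osg::Matrix::translate(i * 2.0, 0.0, 0.0));
        xform->addChild(box.get());   // the geometry itself is shared
        root->addChild(xform.get());
    }
    return root;
}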

I've used OpenGL Extensions Viewer (http://www.realtech-vr.com/glview/) on both machines and according to that the HD Graphics only supports OpenGL 3.0/3.1, with no support for earlier versions. My development PC with the Radeon card supports 1.1-2.1. I'm currently using OSG 3.0.0, with a build that turned on OSG_GL1_AVAILABLE and OSG_GL2_AVAILABLE, but not OSG_GL3_AVAILABLE. Is it possible that using only GL3 would give better performance?

One other thing I should mention: I used gDEBugger, and its Teapot sample application performs better on the Intel HD Graphics than on the Radeon card, which suggests to me that I'm not using the hardware correctly.

Using alternate hardware is not an option at this time.

...

Thank you!

Cheers,
Michael

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=52393#52393
Robert Osfield
2013-02-05 18:30:25 UTC
Permalink
Hi Michael,

Don't worry about the OpenGL version; by default the OSG will detect
what is available at runtime and use whatever features it finds.
The only time you need to play with the OSG_GL*_AVAILABLE settings is
if you are compiling against a very specific OpenGL version and don't
want more advanced or older features available.

As for the Intel graphics hardware, I haven't used the HD 4000 so can't
comment specifically on it. Historically the Intel drivers have been
rather flaky and inconsistent w.r.t. performance. It could be that a
particular OpenGL feature is running very slowly due to driver/hardware
limitations; it could be OpenGL display lists, or something similar,
that is tripping up the driver. In general, AMD/ATI and NVidia
drivers are far better rounded than Intel's, so you don't typically
see unexpected performance drops or bugs with them.
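
If you want to rule out display lists, a quick way is to force all
drawables over to vertex buffer objects and compare. A minimal,
untested sketch of such a visitor (the class and variable names are
my own invention, not from any existing example):

#include <osg/Geode>
#include <osg/NodeVisitor>

// Switch every drawable from display lists to VBOs so the two
// submission paths can be compared on the Intel driver.
class UseVBOVisitor : public osg::NodeVisitor
{
public:
    UseVBOVisitor()
        : osg::NodeVisitor(osg::NodeVisitor::TRAVERSE_ALL_CHILDREN) {}

    virtual void apply(osg::Geode& geode)
    {
        for (unsigned int i = 0; i < geode.getNumDrawables(); ++i)
        {
            osg::Drawable* drawable = geode.getDrawable(i);
            drawable->setUseDisplayList(false);
            drawable->setUseVertexBufferObjects(true);
        }
        traverse(geode);
    }
};

// Usage, applied once after the scene is loaded:
//   UseVBOVisitor visitor;
//   sceneRoot->accept(visitor);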

Robert.
Terry Welsh
2013-02-06 05:04:00 UTC
Permalink
Hi Michael,
I have my in-development OSG-based game running great on the Intel
4000, so the hardware itself is quite capable and definitely the best
graphics part Intel has ever made. It had a horrible framerate at
first because of a depth buffer copy I was doing; once I removed
that, my framerate went up by an order of magnitude. Try removing big
pieces of your app or scene one at a time to simplify it until you're
left with almost nothing. You might find a bad bottleneck along the
way. It could be something as simple as a piece of OpenGL state that
your Intel driver doesn't like.
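
One cheap way to do that elimination without tearing out code is to
toggle node masks on subtrees. A sketch (suspectSubtree is just a
placeholder for whatever group you want to rule out):

#include <osg/Node>

// NodeMask 0x0 = subtree is never traversed or rendered;
// ~0x0 restores the default "always visible" mask.
void toggleSubtree(osg::Node* suspectSubtree, bool visible)
{
    suspectSubtree->setNodeMask(visible ? ~0x0u : 0x0u);
}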

If you find a real problem you can always try to submit it to Intel,
but they probably won't listen.

This driver stuff is really hit or miss. My laptop can switch between
Radeon 7730M graphics and the Intel 4000. If I use the driver supplied
by Dell (which appears to come originally from AMD) and contains both
Radeon and Intel drivers, I get a slightly better framerate from the
"power saving" Intel 4000 graphics. There are some subtle visual
artifacts using the Radeon but none using the Intel 4000. If I install
the Intel 4000 driver from Intel's website, I still get a good
framerate, but there are horrible visual artifacts on my OpenGL vertex
array particle systems that disappear after about a minute. In short,
good luck understanding the insane mess that is graphics drivers. It's
all very random.
--
Terry Welsh
www.reallyslick.com
Jean-Sébastien Guay
2013-02-09 01:14:09 UTC
Permalink
Hi Terry, all,
Post by Terry Welsh
In short,
good luck understanding the insane mess that is graphics drivers. It's
all very random.
It's only random if you try to understand it with a few limited
tests.... :-)

Someone has applied structured testing to the problem and has come up
with this:

http://www.g-truc.net/post-0538.html#menu

These are the "OpenGL driver status" posts, which Christophe Riccio
publishes every month. He gathers this information using his own very
comprehensive OpenGL tests, which cover just about every type of
functionality you would want to use, in a very structured way. So he can
say with absolute certainty that a given driver will work with a given
usage pattern. (I wish this were used as a base for official Desktop
OpenGL conformance tests, and that the results of these tests were
publicised by Khronos, so that vendors would have a real reason to keep
their drivers up to a certain level of quality... but I digress.)

Then you just have to know your own app well enough to know what it's
doing at the OpenGL level... which is often the hard part :-) Tools like
gDEBugger (discontinued) or apitrace (actively maintained) can help there.

The only thing I find a pity is that he doesn't keep an easy-to-search
list of older drivers too. If he did, you could easily see that a given
feature worked starting with one version, broke in another and was then
fixed in a third, and you could tell that to your clients, knowing
which features are critical to your application. You could even
automatically enable and disable features in your app based on which
driver versions they work on. But maybe that's going too far...

Anyway, it's a useful resource, I think.

J-S
--
______________________________________________________
Jean-Sebastien Guay
http://whitestar02.dyndns-web.com/
Terry Welsh
2013-02-09 21:50:12 UTC
Permalink
Hi J-S,
Looks like a very cool resource. Have you ever been able to put it to
any practical use? I could see this being helpful, but I don't see it
removing all of the randomness :) There are always undiscovered bugs
lurking here and there. I, too, would love to see driver writers
aiming for 100% scores on these tests.
--
Terry Welsh
www.reallyslick.com
Jean-Sébastien Guay
2013-02-09 22:18:36 UTC
Permalink
Hi Terry,
Post by Terry Welsh
Looks like a very cool resource. Have you ever been able to put it to
any practical use? I could see this being helpful, but I don't see it
removing all of the randomness :) There are always undiscovered bugs
lurking here and there. I, too, would love to see driver writers
aiming for 100% scores on these tests.
I think for now the main use is that it publicizes the quality of the
different drivers. However, as I said, it would be much more effective if
it were used in an "official" manner, i.e. by Khronos itself.

I now work for a game company where we explicitly blacklist some driver
versions and inform the user to upgrade when the game starts up, because
some driver versions are just too bad to be any use at all. I could see
something like that being part of an add-on library to OSG, where a
database of driver versions and capabilities would be maintained by the
community. With the breadth of coverage that's possible in a large
community like OSG, I think it would actually be easy :-) That being
said, I have no plans to actually do it, as opposed to just talking
about it :-)
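
As a rough illustration of the kind of startup check I mean (the
function name and the blacklist entries here are hypothetical
placeholders, not our shipping code), run with a current OpenGL
context:

#include <GL/gl.h>   // on Windows, include <windows.h> first
#include <string>

// Compare the driver's version string against known-bad entries
// and tell the user to upgrade if there is a match.
bool driverIsBlacklisted()
{
    const char* version =
        reinterpret_cast<const char*>(glGetString(GL_VERSION));
    if (!version) return false;

    // Placeholder entries; a real table would hold exact driver
    // version strings known to misbehave.
    static const char* blacklist[] = {
        "example broken driver version 1",
        "example broken driver version 2"
    };
    const unsigned int count = sizeof(blacklist) / sizeof(blacklist[0]);
    for (unsigned int i = 0; i < count; ++i)
    {
        if (std::string(version).find(blacklist[i]) != std::string::npos)
            return true;
    }
    return false;
}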

J-S
--
______________________________________________________
Jean-Sebastien Guay
http://whitestar02.dyndns-web.com/
Johannes Scholz
2013-02-11 15:15:02 UTC
Permalink
Hi,

today I compared the integrated Intel HD 4000 chip with the dedicated NVS 5200M in my ThinkPad.

Compared to other Intel graphics chips, I must say that the performance of the HD 4000 is quite okay, but it is nothing compared to the NVidia chip.

Performance:

Shaders enabled (doing some additional render-to-texture passes):
NVidia: < 1 ms Draw + 9 ms GPU -> ~100 fps
Intel HD 4000: ~30 ms Draw + ~30 ms GPU -> ~16 fps

Fixed-function OpenGL pipeline:
NVidia: < 1 ms Draw + 12 ms GPU -> ~100 fps
Intel HD 4000: ~20 ms Draw + ~20 ms GPU -> ~25 fps
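
(For anyone who wants to reproduce the measurement: the Draw and GPU
timings above are the ones reported by the standard osgViewer stats
handler, set up roughly like this; "cow.osg" is just a sample model.)

#include <osgDB/ReadFile>
#include <osgViewer/Viewer>
#include <osgViewer/ViewerEventHandlers>

int main()
{
    osgViewer::Viewer viewer;
    viewer.setSceneData(osgDB::readNodeFile("cow.osg"));

    // Pressing 's' in the viewer window cycles the on-screen stats,
    // including the Draw and GPU timings quoted above.
    viewer.addEventHandler(new osgViewer::StatsHandler);
    return viewer.run();
}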


Best regards,
Johannes

------------------------
Johannes Scholz
Software Engineer
P3 Voith Aerospace GmbH, Bremen, Germany

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=52537#52537
