[NTLUG:Discuss] Linux video benchmark
Steve Baker
sjbaker1 at airmail.net
Sun Apr 17 12:05:29 CDT 2005
Jack Snodgrass wrote:
> What's a good way to compare 3d ( and 2d ) graphics performance using linux?
There are several 3D graphics benchmarks out there for OpenGL - probably there
are 2D benchmarks too.
However, graphics performance is a notoriously difficult thing to benchmark.
For example:
* If your 3D scene consists of a very large number of small triangles, neatly
packed into triangle strips - and the scene is illuminated by eight OpenGL
light sources - then the performance of the system will probably be limited
by the speed of the GPU clock and the number of vertex processors it contains.
* If your scene consists of a similarly large number of triangles - but this
time they are submitted as individual, unconnected triangles - and the scene
isn't being illuminated at all - then you'll probably be limited by the data
transfer rate from the motherboard to the graphics card. Does your motherboard
support 2xAGP, 4xAGP or 8xAGP? (A rough sketch contrasting the two submission
styles follows this list.)
* If your scene consists of only a few triangles but they all completely cover
the screen - and are drawn from the back towards the front - then the pixel
fill rate of the card is being tested the most and either the GPU chip's
fragment processors or the bandwidth of the on-board RAM is the limiting
factor.
* If you draw that exact same scene - but draw the triangles from front to
back - then performance is likely to be determined by whether or not the card
has a smart Z buffer.
* If your scene contains more textures than the on-board RAM can hold then
subtle details of GPU bus bandwidth, the speed of your motherboard RAM and
the order that triangles are drawn in will determine what is the limiting
factor.
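To make the first two cases concrete, here is a rough sketch (mine, not a real
benchmark - it assumes GLUT and a working OpenGL driver, and the grid size is
an arbitrary choice) that draws the same grid of triangles either as strips or
as individual triangles and prints a frame count once a second. Flipping
USE_STRIPS and GL_LIGHTING moves the bottleneck between the vertex processors,
the bus and the lighting path:

  /* Sketch only: same geometry submitted two ways.  GRID and USE_STRIPS
   * are arbitrary knobs, not part of any real benchmark suite.          */
  #include <GL/glut.h>
  #include <stdio.h>

  #define GRID       256     /* GRID x GRID quads, two triangles each    */
  #define USE_STRIPS 1       /* 0 = independent triangles, 1 = strips    */

  static int frames = 0 ;

  static void draw_grid ( void )
  {
    int i, j ;

    for ( i = 0 ; i < GRID ; i++ )
    {
  #if USE_STRIPS
      glBegin ( GL_TRIANGLE_STRIP ) ;
      for ( j = 0 ; j <= GRID ; j++ )
      {
        glNormal3f ( 0, 0, 1 ) ;
        glVertex3f ( j / (float) GRID, (i  ) / (float) GRID, 0 ) ;
        glVertex3f ( j / (float) GRID, (i+1) / (float) GRID, 0 ) ;
      }
      glEnd () ;
  #else
      glBegin ( GL_TRIANGLES ) ;
      for ( j = 0 ; j < GRID ; j++ )
      {
        float x0 = j / (float) GRID, x1 = (j+1) / (float) GRID ;
        float y0 = i / (float) GRID, y1 = (i+1) / (float) GRID ;
        glNormal3f ( 0, 0, 1 ) ;
        glVertex3f ( x0, y0, 0 ) ; glVertex3f ( x1, y0, 0 ) ;
        glVertex3f ( x0, y1, 0 ) ;
        glVertex3f ( x1, y0, 0 ) ; glVertex3f ( x1, y1, 0 ) ;
        glVertex3f ( x0, y1, 0 ) ;
      }
      glEnd () ;
  #endif
    }
  }

  static void display ( void )
  {
    glClear ( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ) ;
    draw_grid () ;
    glutSwapBuffers () ;
    frames++ ;
    glutPostRedisplay () ;   /* redraw as fast as the card will let us   */
  }

  static void report_fps ( int unused )
  {
    printf ( "%d frames in the last second\n", frames ) ;
    frames = 0 ;
    glutTimerFunc ( 1000, report_fps, 0 ) ;
  }

  int main ( int argc, char **argv )
  {
    glutInit ( &argc, argv ) ;
    glutInitDisplayMode ( GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH ) ;
    glutCreateWindow ( "strips vs triangles" ) ;
    glEnable ( GL_DEPTH_TEST ) ;
    glEnable ( GL_LIGHTING ) ;  /* turn off to remove per-vertex lighting */
    glEnable ( GL_LIGHT0 ) ;
    glutDisplayFunc ( display ) ;
    glutTimerFunc ( 1000, report_fps, 0 ) ;
    glutMainLoop () ;
    return 0 ;
  }

(Remember that vsync will cap the numbers at your monitor's refresh rate
unless you turn it off in the driver.)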
...all of which makes life hell for a benchmark writer. For example, if you
compare an nVidia 5950 with an nVidia 6800, you find that in most of the tests
the 6800 comes out almost exactly twice as fast as the 5950. However, if you
write a test that's limited by fragment shader program execution speed, the
6800 comes out six times faster!
I work in flight simulation - we do a LOT of 3D work. We have tried using
off-the-shelf benchmarks - and we've tried writing our own. We did some
tests in which we compared the performance numbers we got from each of the
benchmarks against the actual measured performance of one of our applications.
We did this for six different graphics cards from different vendors. The
not-surprising thing was that the benchmarks totally failed to predict the
performance of our actual application. The VERY surprising thing was that
if you ranked the cards in order of the performance predicted by each of
the benchmarks, not one benchmark produced the same ranking as we obtained
from running our actual application.
Put more simply: if we had relied on the benchmarks to tell us which board
would run our application best, then at some point over the past few years
they would have told us to pick a board that was not the best one for our
application.
So - we measure the performance of our real applications and largely
ignore the benchmark figures.
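For what it's worth, the measurement itself doesn't need to be anything
fancy. This is roughly the sort of thing I mean (a sketch only -
draw_frame() and NUM_FRAMES are stand-ins for your own application's render
loop, not part of any real API):

  #include <stdio.h>
  #include <sys/time.h>

  extern void draw_frame ( void ) ;  /* your application's own renderer,
                                        including the buffer swap         */
  #define NUM_FRAMES 1000

  static double now_seconds ( void )
  {
    struct timeval tv ;
    gettimeofday ( &tv, NULL ) ;
    return tv.tv_sec + tv.tv_usec / 1000000.0 ;
  }

  void time_real_application ( void )
  {
    int    i ;
    double t0 = now_seconds () ;

    for ( i = 0 ; i < NUM_FRAMES ; i++ )
      draw_frame () ;

    printf ( "%f frames/sec\n", NUM_FRAMES / ( now_seconds () - t0 ) ) ;
  }

Run that on each candidate card with your real scene database and you'll
learn more than any synthetic number will tell you.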
Of course, we can only do that because we are a sufficiently large company
that we can buy every graphics card that comes out and test them.
As an individual, you can't do that.
> I want to compare a Geforce 5200 -vs- Geforce 5700 card on my system. I plan
> on testing with one card in my box, then replace it with a different
> card so I can get a 'everything else is the same' comparison.
From memory:
The only significant practical difference between cards with those two
chipsets is the speed of the memory interface. The 5200 will have a
significantly worse pixel fill rate (my recollection is that it's
less than half that of the 5700). In applications that are very
sensitive to fill rate - the 5200 does badly. In applications where
fill rate isn't crucial, it holds up well.
So in the end, it depends on your application.
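If you want to see whether your application lives in the fill-rate-sensitive
camp before you buy, a crude probe is to pile up full-screen overdraw and
watch the frame rate. Something like this (a sketch only - it assumes depth
testing is enabled and the depth buffer is cleared each frame, as in the
earlier example, and OVERDRAW is an arbitrary number):

  #include <GL/gl.h>

  #define OVERDRAW 50   /* layers of full-screen overdraw per frame */

  /* Call this from your display function instead of the normal scene. */
  static void draw_fill_rate_test ( void )
  {
    int i ;

    glMatrixMode ( GL_PROJECTION ) ;
    glLoadIdentity () ;
    glOrtho ( 0, 1, 0, 1, -1, 1 ) ;
    glMatrixMode ( GL_MODELVIEW ) ;
    glLoadIdentity () ;

    for ( i = 0 ; i < OVERDRAW ; i++ )
    {
      /* Each layer is a little nearer than the last (back to front),
         so every pixel on the screen gets written on every layer.     */
      float z = -0.9f + 1.8f * i / (float) OVERDRAW ;

      glColor3f ( i / (float) OVERDRAW, 0.2f, 0.5f ) ;
      glBegin ( GL_QUADS ) ;
        glVertex3f ( 0, 0, z ) ;
        glVertex3f ( 1, 0, z ) ;
        glVertex3f ( 1, 1, z ) ;
        glVertex3f ( 0, 1, z ) ;
      glEnd () ;
    }
  }

If the 5200 and the 5700 post very different numbers on that but much the
same numbers on your real scenes, fill rate isn't your problem and the
cheaper card will do.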
I haven't done 2D benchmarking - but any benchmark is probably going
to be limited by fill rate. However, in most 'real' 2D applications,
the graphics card is easily able to out-perform the CPU that's feeding
it - so you probably wouldn't be able to tell which graphics card you
have in any REAL situation. I wouldn't even bother testing them for
performance. I'd be more concerned with the quality of the image,
which depends more on which company manufactured the graphics card
than on which nVidia chipset they used to do it. The quality of
analog components that the card maker uses is outside of nVidia's
control.
---------------------------- Steve Baker -------------------------
HomeEmail: <sjbaker1 at airmail.net> WorkEmail: <sjbaker at link.com>
HomePage : http://www.sjbaker.org
Projects : http://plib.sf.net http://tuxaqfh.sf.net
http://tuxkart.sf.net http://prettypoly.sf.net
-----BEGIN GEEK CODE BLOCK-----
GCS d-- s:+ a+ C++++$ UL+++$ P--- L++++$ E--- W+++ N o+ K? w--- !O M-
V-- PS++ PE- Y-- PGP-- t+ 5 X R+++ tv b++ DI++ D G+ e++ h--(-) r+++ y++++
-----END GEEK CODE BLOCK-----