[NTLUG:Discuss] Video Card Recommendation? -- Long Redux

Bryan J. Smith b.j.smith at ieee.org
Sat May 8 00:03:18 CDT 2004


Thomas Cameron wrote:  
> Hi all -
> I need a decent video card for my development machine, a dual Xeon 1GHz Dell Precision
> Workstation 620.  I bought a Best Data GeForce4 MX 440-8X from Fry's but RHEL 2.1 doesn't
> like it.

Not surprising.  RHEL2.1 is based on RHL7, which pre-dates the NV17
(GF4MX) core.  You need to either update XFree86 (or X.org) to an "nv"
driver that supports the NV17, or install nVidia's binary "nvidia" driver.
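
For illustration, here is roughly what switching to nVidia's binary
driver looks like.  The installer filename/version is a placeholder and
the stanza is a sketch, not gospel -- check the README that ships with
the driver:

  # build and install the kernel module + GLX libraries (as root)
  sh NVIDIA-Linux-x86-<version>.run

  # then in /etc/X11/XF86Config-4:
  Section "Device"
      Identifier "GeForce4 MX"
      Driver     "nvidia"    # use "nv" for the open source 2D-only driver
  EndSection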

> I need something that will run 1280x1024 at 72Hz, no gaming or 3D anything,
> just development work.

No OpenGL 3D then?  In that case, just about anything recent will do.
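
For the record, 1280x1024 at 72Hz is just a Monitor/Screen stanza in
XFree86.  The sync ranges below are illustrative -- pull the real ones
from your monitor's manual (1280x1024@72 needs roughly a 77 kHz
horizontal sweep):

  Section "Monitor"
      Identifier  "Monitor0"
      HorizSync   30-96      # kHz
      VertRefresh 50-120     # Hz
  EndSection

  Section "Screen"
      Identifier   "Screen0"
      Device       "GeForce4 MX"
      Monitor      "Monitor0"
      DefaultDepth 24
      SubSection "Display"
          Depth  24
          Modes  "1280x1024"
      EndSubSection
  EndSection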

> I haven't bought a video card in years so I am totally out of touch with
> what works well.

- ATI, nVidia and proprietary extensions v. 3DLabs

I have numerous colleagues from previous semiconductor industry jobs who
now work at ATI and nVidia.  Both ATI and nVidia are very cutting edge,
and their performance shows it.  At the same time, both are very
proprietary.  They extend both DirectX and OpenGL in all sorts of
non-standard ways.  Their drivers reflect this.

About the _only_ company out there dedicated to open standards is 3DLabs.
They are trying to get ATI and nVidia to return to standard OpenGL with
version 2.0.  This does not look like it's happening anytime soon, though.
And 3DLabs doesn't sell a commodity solution anymore.

- nVidia tries to go open source, gets its butt threatened by 3rd parties

In the case of nVidia, they decided to release a universal driver for
their TNT (NV0x) and GeForce (NV1x/2x/3x and, now, NV4x) cards.  Despite
popular attitudes, nVidia _does_ release technical specifications on the
NV (TNT, GeForce) series, up to the point where they can (i.e.,
original/non-3rd-party stuff).  The only time nVidia did _not_ was for
the now 10+ year old Riva series (I hate it when I get quoted stuff
about the 10-year-old Riva when I'm talking about GeForce!).  When they
first released the TNT/GeForce drivers for X with 3D acceleration,
nVidia _did_ release them for XFree86 3.3.x _with_ source code.

But no matter how much they "stripped out" of the drivers, Intel, Microsoft
and many other copyright and patent holders threatened to sue their butt off.

So once XFree86 4.x was released, with its binary-only driver model,
nVidia just decided to develop a "universal codebase" for
MacOSX-Linux-Windows, including full GLX (OpenGL over X-Windows)
support.  That way nVidia could leave in _all_ of the acceleration,
giving Linux performance as fast and feature-rich as Windows'.

Now this includes a corresponding kernel module: part GPL (which is in
the stock kernel), and part non-GPL (an add-on) which "taints" the
kernel.  The reason for the non-GPL/closed-source part is not totally
nVidia's fault, but Intel's as well.  You see, AGP is a non-standard,
bastardized PCI bus that Intel considers a 'trade secret' -- long story.
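
You can actually watch the taint happen when the module loads.  A
minimal sketch, assuming a 2.4-era kernel with the nVidia module
already installed:

  modprobe nvidia
  cat /proc/sys/kernel/tainted   # non-zero once a non-GPL module loads
  dmesg | tail                   # often logs a "taints kernel" warning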

- AGP no longer a 'trade secret'

But the "Good News" is that, now that _standardized_ PCI-Express is
coming out, AGP is no longer considered a "trade secret" by Intel, and
nVidia is GPL'ing a lot of their kernel module source.  At the same
time, PCI-Express is a full, open standard, so there will _not_ be a
repeat of this, either.

[ BTW, don't confuse the TNT/GeForce _video_ drivers with the nForce
_chipset_ drivers.  nVidia releases _all_ nForce drivers as _GPL_ in the
_stock_ i810 driver, _except_ the NIC, which is a design licensed from a
3rd party.  Even the AGPGART has gone GPL because of the Intel change
on the "trade secret" stuff. ]

- ATI is changing its tune

ATI has tried to be community-focused, releasing _all_ specs and working
with X, UtahGLX, DRI and others in creating open source 3D support.  But
that support has often been late, slow and totally _lacking_ in features
compared to their Windows drivers.  Even on the 2D end, they just could
not keep up with nVidia.

ATI is now releasing binary drivers for XFree86 4.x just like nVidia.
In fact, starting with the R300+ (Radeon 9500+ series), ATI has
_stopped_ releasing the specs so open source 3D drivers can be written.

ATI has been just as proprietary as nVidia when it comes to non-standard
extensions of DirectX/OpenGL, so it really sickens me when people get
all anti-nVidia and pro-ATI.  If you want to support a vendor that
supports Linux and standards, support 3DLabs.


Steve Baker wrote:  
> Yeah - I've been using those in (literally) hundreds of
> Linux machines over the past year or more.
> There is nothing inherently wrong with it - except that
> you'll want to install the nVidia drivers (which are closed-source).

So are ATI's latest.  Again, see my discussion above.


Kevin Hulse wrote:  
> Is there any chance that RHEL is "behind" on 
> driver support?

Yes.  RHEL2.1 is based on RHL7, quite old now.

> I have run a GeForce4 just fine under recent versions of Debian
> Test & Mandrake. The drivers that come with Xfree aren't great,
> but they should be suitable for low intensity work.

They work.  From my understanding, nVidia actually puts _more_ people on
the open source "nv" XFree86 2D driver team than ATI does on theirs.
But you need a recent XFree86 version to "keep up" with the NV series
changes.
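
Checking whether your tree is new enough takes two seconds:

  XFree86 -version    # the server prints its release, e.g. 4.3.0
  rpm -q XFree86      # on RPM-based distros like RHEL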

> Are GeForce4 variants really that different from one another?

Yes!  Marketing at work ...

NV17 is the GeForce4 MX, and based on the _older_ GeForce2 (NV1x).
NV25 is the GeForce4 Ti, and based on the _newer_ GeForce3 (NV2x).
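
If you are not sure which variant a box actually has, lspci will name
the core (the exact wording depends on how current your PCI ID database
is):

  lspci | grep -i nvidia
  # e.g. "VGA compatible controller: nVidia Corporation NV17 [GeForce4 MX 440]"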

The "low-end" GeForce4 MX420 also uses a measly 128-bit SDR or 64-bit
DDR memory interconnect, whereas the MX440+ uses a 128-bit DDR.  That
lack of DTR really hurts the MX420.
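
The arithmetic makes the gap obvious.  Assuming illustrative ~200MHz
memory clocks (actual clocks varied from board to board):

  128-bit DDR: (128/8) bytes x 200MHz x 2 = 6.4 GB/s  (MX440)
   64-bit DDR: ( 64/8) bytes x 200MHz x 2 = 3.2 GB/s  (MX420)
  128-bit SDR: (128/8) bytes x 166MHz x 1 = 2.7 GB/s  (MX420)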

All GeForce4 Ti parts are 128-bit DDR, along with the massively improved
GeForce3 (NV2x) core.


Kevin Hulse wrote:  
> I was thinking the same thing actually. I usually stay away from
> the fastest version of AGP for this very reason.

AGP is a PCI bastard that tries to act like a CPU, but over the I/O
bus.  This is called Direct Memory Execute (DiME).  DiME basically
lets the AGP card act like a second CPU, directly using memory,
but without the CPU coherency you'd normally get out of Intel SMP or
Athlon-Alpha EV6 MP.  That's dangerous!

As such, I recommend people _disable_ the AGPGART in their kernel, or
set the corresponding option for the nVidia drivers (see the sketch
below).  Even nVidia recommends this.  If the AGPGART is disabled, the
AGP bus simply acts like little more than a very fast PCI bus -- pure
I/O (and only direct memory access, DMA, for copies to/from memory,
not DiME).
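
A sketch of what that looks like for the nVidia binary driver -- the
option values below are from memory of its README, so verify them
against the one that ships with your driver version:

  # /etc/X11/XF86Config-4, Section "Device":
  Option "NvAGP" "0"    # 0 = disable AGP entirely
                        # 1 = use nVidia's own internal AGP support
                        # 2 = use the kernel's AGPGART

  # or simply keep the agpgart module out of the kernel altogether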

Unfortunately, Windows users are SOL because many chipset-video card
combinations will NOT work if the AGPGART is disabled.  This is
Intel-Microsoft clusterfscking at its finest (long story).

Now, Non-Uniform Memory Architecture (NUMA) implementations like the
Athlon64/Opteron help solve this issue by putting the memory local to
each CPU.  That means the AGP card _must_ go _through_ the CPU _first_
to reach memory (via the HyperTransport interconnect), so coherency can
be maintained -- but the performance hit is significant, as AMD has
found out.

I have _long_ argued that the GPU (graphics processing unit) _should_
be on the CPU interconnect, since it acts more like a CPU than an
I/O controller.  Some specialty (i.e. very expensive) Opteron-based
systems have this, a direct HyperTransport GPU, but they will not
become commonplace anytime soon.  It doesn't buy you much outside of
mega-high-end simulation/specialty systems.

The main reason is that Intel has _finally_ "caved in" on AGP being an
utter stability issue.  PCI-Express re-introduces regular I/O back into
the loop for graphics, at minimal additional cost.  Newer mainboards
will have a PCI-Express x16 slot for newer graphics cards.

> These sorts of problems even crop up independent of OS.

Of course!  You've got an I/O board acting like a CPU!

> Although, sometimes you just manage to find one of those "magic
> combinations".

Yeah, like the fact that nVidia chipsets and nVidia video cards
have their own, proprietary signaling and command set.  It's not
nVidia's fault, but Intel's, because AGP is a bastard.

Heck, nVidia's support of HyperTransport and insistence on supporting
PCI-Express _independently_ of Intel via the PCI Working Group is why
Intel _finally_ came around.


-- 
Bryan J. Smith, E.I. -- Engineer, Technologist, School Teacher
b.j.smith at ieee.org




