It started with the GeForce 256, when nVidia released some nice demos promising high-polygon-count 3D graphics. But when were we to expect such games? A year later? Two years? Today? Let's look at a game that was "made for" the GF2: Giants. It has an "ok" polygon count and pretty nice graphics with Dot3 bump mapping. But there was this framerate problem, and I believe that's because the GF2 didn't have a fast enough T&L engine.
If you have a GF2 (even the MX version) or a GF3 (even the Ti 500 version), please
load up Giants, start a new game (level 1), and right at the place you start, look
down towards the mini-Reapers (the little forest with bushes). The framerate should
drop to around 30, and you can see the lag when you move your mouse.
As the GF3's (all versions) T&L performance is the same as the GF2's (even the
MX's), you will get about the same results with all GF2s and GF3s.
And remember that Giants was ported to the PlayStation 2? In the conversion process
they added much more detail, which supports my argument that the GeForce 2
(which I recall someone said Giants was "optimized" for) has much lower polygon
performance than the PlayStation 2. If we take the specs of the GF2 and the PS2:
these are theoretical numbers. My GF2 scores about 20MT/s in 3DMark2001's High Polygon
Count test, so I would guess the PS2 can do 50MT/s. Remember, the PS2 is a
console with everything inside completely optimized.
So the PS2 is faster than the GF2, and as the GF3 is about as fast, one
year later we are still far behind the PS2. Who said the PC would catch up with
consoles six months later?
link to non-correct spec-chart
Here you can see the GF4 listed at 136M/s under "Quoted Triangle Rate." To the right you can see the official "quote" of polygon performance, only there it is presented in vertices/sec. The difference is that one polygon/triangle (the same thing) consists of 3 vertices. One might say presenting polygon performance in vertices is more correct, as a 3D object's polygons share vertices. The real worst-case number should be 136/3 ≈ 45. I have seen GF4 owners reporting a 3DMark2001 High Polygon Count score of 48, which makes me believe the theoretical polygon performance of the GF4 is 60MTriangles/s. This is when triangles share vertices.
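The vertices-vs-triangles arithmetic above can be sketched out. This is a rough illustration, not a benchmark: the cube mesh is a made-up example of vertex sharing, and only the 136M vertices/sec figure comes from the quoted spec.

```python
# Rough sketch of the vertices-vs-triangles arithmetic.
# Only the 136M vertices/sec figure is from the quoted spec;
# the cube mesh is an illustrative example of vertex sharing.

QUOTED_VERTEX_RATE = 136e6  # GF4 quoted rate, in vertices/sec

# Worst case: an unindexed triangle list, 3 unique vertices per triangle.
worst_case_tris = QUOTED_VERTEX_RATE / 3
print(f"{worst_case_tris / 1e6:.1f}M triangles/sec")  # 45.3M

# Best case with shared vertices: an indexed mesh reuses vertices.
# A cube has 12 triangles but only 8 unique vertices, i.e. about
# 0.67 vertex transforms per triangle instead of 3.
cube_unique_vertices = 8
cube_triangles = 12
verts_per_tri = cube_unique_vertices / cube_triangles
best_case_tris = QUOTED_VERTEX_RATE / verts_per_tri
print(f"{best_case_tris / 1e6:.1f}M triangles/sec")  # 204.0M
```

Real game meshes land somewhere between the two extremes, which is consistent with the reported ~60MTriangles/s when triangles share vertices.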
So isn't 60MTriangles/s good enough? No, it is not. On the XBox it is, since the (good) games will use the XGPU's HOS, or RT-Patches, which frees up a lot of polygon calculation for other stuff while still giving round surfaces. It's based on something like Bézier curves, I think: it does a "quick" Bézier calculation and subdivides a triangle in a smoothing way, efficiently making these heads and these fingers round and smooth. Much like this cute face and these fingers.
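The idea behind Bézier-based HOS can be sketched in a few lines: a coarse edge plus a control point is tessellated at runtime into many small segments, giving a round silhouette without storing the extra polygons. The control points below are made up for illustration; this is the general de Casteljau evaluation, not the XGPU's actual implementation.

```python
# Minimal sketch of the smoothing idea behind Bézier-based HOS.
# Control points here are invented for illustration.

def decasteljau(p0, p1, p2, t):
    """Evaluate a quadratic Bézier curve via repeated linear interpolation."""
    lerp = lambda a, b, t: tuple(a[i] + (b[i] - a[i]) * t for i in range(2))
    return lerp(lerp(p0, p1, t), lerp(p1, p2, t), t)

# One coarse "silhouette" edge: two endpoints and a control point
# pulling the curve outward to round it off.
p0, p1, p2 = (0.0, 0.0), (0.5, 1.0), (1.0, 0.0)

# Tessellate into 8 segments; hardware HOS would do this per frame,
# so only the 3 control points need to be stored and transformed.
points = [decasteljau(p0, p1, p2, i / 8) for i in range(9)]
print(points[0], points[4], points[8])  # (0.0, 0.0) (0.5, 0.5) (1.0, 0.0)
```

The payoff is that 3 stored points expand into 9 (or 90) rendered ones, which is why HOS frees up polygon budget for other things.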
Check the last image again, see the fingers? She has nails. Then look at the background, pretty complex, and here's a better example. Here are some more examples: Glasshouse and Forest. And let me finish by combining the cute and smooth characters with the pretty darn high-poly arenas: Castle fight and Forest Showdown.
Back to the topic: as you saw, the heads and the fingers in those Doom III shots didn't look so good after all. They weren't "CG graphical" at all. Primarily I believe it was because they did not show Doom III using the highest-quality settings, and secondly because the Doom III engine cannot use HOS (RT-Patches on the GF4). As far as I understood, this is because of the way the Doom III engine works: it uses self-shadowing on the characters/3D objects, and the GF4's RT-Patches and the Radeon's TruForm (N-Patches) cannot self-shadow.
Coming to an end here, I will try to answer when we are to see nice round cute girls and high-polycount maps. First, take a look at a Deus Ex 2 shot. See the nice round face (and boobs)? I think it is some form of HOS, as round objects like that require lots of polygons. Ok, so it doesn't look so bad after all? Wrong. At least that's what I think: the budget graphics cards lack any good polygon performance, and the even-lower-budget cards lack a vertex shader completely (as if 1 vertex shader was enough).
Check link to non-correct spec-chart again. The Radeon 9000 Pro (that "Pro" makes it sound better than the 9700) is quoted at only 35MTriangles/sec. When will PC games look realistic if developers have to hold back on the polycount? And yes, 35MT/s is low; remember Giants got a facelift when it was ported to the PS2? That proves the PS2 is much faster than a GF2, and a GF2 was quoted at 20MT/sec. Here we are then, almost 2 years later, with a "medium"-priced graphics card only sporting half the performance. I use Warcraft 3 and Max Payne as examples.
3 years after the GF1 was released with its "revolutionary" hardware T&L, we still suffer extremely low polycounts.
My friend's GF2MX T&L score:
My GF2 GTS T&L score:
8 lights = 3.0
My GF3 T&L score:
8 lights = 6.0
1. Giants was "made for" the GF2 GTS.
2. Giants got a polygon facelift when ported to the PS2.
3. The GF3's polygon performance is the same as the GF2's, but it can be programmed.
4. Budget cards have poor polygon performance.
5. Still, I haven't heard of any games utilizing the GF4's RT-Patches feature (there may be some TruForm-supporting games out there, but they only get round surfaces, not more detailed weapons/characters/environments).