@ronvalencia:
The Xbox's graphical prowess comes from its NV2A GPU, not the PIII. It's the NV2A's hardware T&L that does all the geometry processing, and the GPU handles everything from geometry to rendering; the PIII doesn't need to do any of that work. The PIII had very little to do with the Xbox's graphical capabilities. The main reason Microsoft selected the PIII as the Xbox's CPU was its compatibility with the PC. As we've already discussed, the SH-4 performs faster geometry calculations than a PIII 800, but that's unnecessary for the Xbox when its NV2A GPU already has its own more powerful hardware T&L for geometry processing.
SuperH had a more efficient instruction set than x86 at the time. A SuperH instruction never exceeds 2 bytes, whereas an x86 instruction can take up to 15 bytes. There was also a benchmark that found the "SH3 architecture generates smaller code than the x86". SuperH's compact instruction set was licensed by ARM to create the Thumb instruction set, now widely used in mobile devices. The SH-4 also introduced 4D vector instructions (FIPR for the 4D inner product and FTRV for the 4x4 matrix-vector transform), which x86 lacked at the time, allowing the SH-4 to perform faster geometry calculations (as we've already discussed).
Like I said previously, the Dreamcast doesn't need as much external memory bandwidth, thanks to its tile-based rendering architecture and deferred-rendering capabilities. Its fast on-chip cache (with up to 15 GB/s of bandwidth) renders into a tile buffer and handles Z-sorting on the chip, so there's no need to keep a Z-buffer (or even a framebuffer for in-progress tiles) in RAM. And its deferred rendering means it only needs to shade a fraction of the polygons and textures in a scene, since it only renders the surfaces actually visible on screen, with no overdraw. So the only polygons and textures it needs to fetch from RAM are the ones visible on screen. The Dreamcast's tile-based architecture has far more efficient bandwidth usage, equivalent to a non-tiled GPU with up to 6 GB/s of memory bandwidth (and that's before even taking texture compression into account).
The GeForce 3 was released in 2001, so it's irrelevant here. The 4x AGP transfer bus was a real limitation in the year we're actually talking about: 1999.
Far Cry's minimum system requirements include a PIII or Athlon running at 1 GHz, along with a GeForce2 or Radeon 8500 (or a 64 MB graphics card). That goes well beyond what a 1999 PC could handle.
As for that 3DMark2000 benchmark, its polygon count for the GF256 (with a PIII 800E) is only about 1.7 million polygons/sec. In comparison, DOA2 was pushing over 4 million textured polygons/sec on the DC. The 3DMark2000 benchmark also only reaches up to 3.41 MB of texture data per frame. In comparison, Shenmue was pushing 5 MB of texture data per frame on the DC (equivalent to 25-30 MB of uncompressed textures), and that's only for the textures actually visible on screen; a non-deferred GPU touching the full scene would need roughly 10-20 MB per frame, or 50-100 MB uncompressed. In other words, that GeForce 256 benchmark had already been outperformed by actual Dreamcast games in both polygon counts and texture data.