Which consoles have been called 'PC in a box'?

#1  Edited By henrythefifth
Member since 2016 • 2502 Posts

PC in a box... Many consoles have been called that, especially prior to their launch, when the hype was going strong and everyone was going, 'whoa, that's like a PC in a box, and no two ways about it!!!'.

But exactly which consoles have been called 'PC in a box'?

Can you remember? Can you say?

I think the PC Engine might have been the first to be called a PC in a box. Now there was a great console! Well, apart from the horrid flicker, which made it look like a poor man's Master System clone...

#2  Edited By BenjaminBanklin
Member since 2004 • 11082 Posts

The first DirectX console itself, the Xbox. It was essentially a P3 machine in a console, with hard drive and all. Explains why, when it didn't sell so great, the losses were massive. Got some great games when it was all said and done though.

#3 deactivated-5ebea105efb64
Member since 2013 • 7262 Posts

The new consoles are shit PCs, soooo.

#4 sealionact
Member since 2014 • 9816 Posts

@Gamerno6666: ...and shit PCs are shit PCs. So?

#5 Archangel3371
Member since 2004 • 44139 Posts

Wait a second. Aren’t PCs in a “box” as well?

#6 That_Old_Guy
Member since 2018 • 1233 Posts

@Archangel3371: no, they’re in cases. THERE’S A DIFFERENCE!!!

#7 poptart
Member since 2003 • 7298 Posts

@BenjaminBanklin said:

The first DirectX console itself, the Xbox. It was essentially a P3 machine in a console, with hard drive and all. Explains why, when it didn't sell so great, the losses were massive. Got some great games when it was all said and done though.

Wasn't the DC also? I just remember a sticker on the console saying as much (I could be mistaken)...

#8 SuperfluousReal
Member since 2019 • 361 Posts

@poptart said:
@BenjaminBanklin said:

The first DirectX console itself, the Xbox. It was essentially a P3 machine in a console, with hard drive and all. Explains why, when it didn't sell so great, the losses were massive. Got some great games when it was all said and done though.

Wasn't the DC also? I just remember a sticker on the console saying as much (I could be mistaken)...

The DC ran on Hitachi & PowerVR hardware, nothing close to a PC. Windows CE was just what they used to port some games, such as Resident Evil 2/3.

They did have a web browser disc, though, as the first games console to use the internet.

#9 DaVillain  Moderator
Member since 2014 • 56075 Posts
@poptart said:
@BenjaminBanklin said:

The first DirectX console itself, the Xbox. It was essentially a P3 machine in a console, with hard drive and all. Explains why, when it didn't sell so great, the losses were massive. Got some great games when it was all said and done though.

Wasn't the DC also? I just remember a sticker on the console saying as much (I could be mistaken)...

For the Dreamcast, I'm pretty sure the higher-end video cards of the day were more powerful than what the Dreamcast could do. The DC was weaker than most PCs at the time. Again, when the DC came out there were CPUs with 30%+ more power, and GPUs like the Voodoo 2 had already been out for a year (with MORE VRAM, in fact).

The Voodoo 2 could even be paired with another Voodoo card and run games like Unreal at 1024x768 with high-resolution textures, AA and real-time reflections, something way ahead of the DC. As for the Xbox 360, I think that was the only time a console was ahead of the PC, and only because PC gaming last gen was in a rough spot until Crysis came into the fray.

#10 tormentos
Member since 2003 • 33784 Posts

@davillain- said:

For the Dreamcast, I'm pretty sure the higher-end video cards of the day were more powerful than what the Dreamcast could do. The DC was weaker than most PCs at the time. Again, when the DC came out there were CPUs with 30%+ more power, and GPUs like the Voodoo 2 had already been out for a year (with MORE VRAM, in fact).

The Voodoo 2 could even be paired with another Voodoo card and run games like Unreal at 1024x768 with high-resolution textures, AA and real-time reflections, something way ahead of the DC. As for the Xbox 360, I think that was the only time a console was ahead of the PC, and only because PC gaming last gen was in a rough spot until Crysis came into the fray.

Most PCs in 1998 could not hold a candle to the Dreamcast.

Sega was basically the powerhouse for graphics before the PS2 arrived.

The Model 3 board was so ahead of its time, and VF3 so ahead of anything on PC in 1996, it wasn't even funny.

Most PCs in 1998, even more than today, were greatly behind. Hell, today a hell of a lot of PCs are at or above Xbox One X specs; in 1998 even top-of-the-line PCs were merely level with the DC, let alone the majority, which were behind.

Crysis was a mess of a game, basically just a bunch of high-end effects thrown together with very little optimization and shitty cartoon animations, which is why Killzone 2's animations and ragdoll physics looked so impressive. Visually Crysis was great to look at; performance-wise it was a mess, and animation-wise a joke.

#11 Pedro
Member since 2002 • 69425 Posts

PCs are in boxes, consoles are in boxes. UH? What's the point of this thread again?

#12 poptart
Member since 2003 • 7298 Posts

@superfluousreal said:
@poptart said:
@BenjaminBanklin said:

The first DirectX console itself, the Xbox. It was essentially a P3 machine in a console, with hard drive and all. Explains why, when it didn't sell so great, the losses were massive. Got some great games when it was all said and done though.

Wasn't the DC also? I just remember a sticker on the console saying as much (I could be mistaken)...

The DC ran on Hitachi & PowerVR hardware, nothing close to a PC. Windows CE was just what they used to port some games, such as Resident Evil 2/3.

They did have a web browser disc, though, as the first games console to use the internet.

Ah righty. It was the Windows CE sticker I remember, not DirectX. It's been a while...

#13  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@davillain- said:

For the Dreamcast, I'm pretty sure the higher-end video cards of the day were more powerful than what the Dreamcast could do. The DC was weaker than most PCs at the time. Again, when the DC came out there were CPUs with 30%+ more power, and GPUs like the Voodoo 2 had already been out for a year (with MORE VRAM, in fact).

The Voodoo 2 could even be paired with another Voodoo card and run games like Unreal at 1024x768 with high-resolution textures, AA and real-time reflections, something way ahead of the DC. As for the Xbox 360, I think that was the only time a console was ahead of the PC, and only because PC gaming last gen was in a rough spot until Crysis came into the fray.

Most PCs in 1998 could not hold a candle to the Dreamcast.

Sega was basically the powerhouse for graphics before the PS2 arrived.

The Model 3 board was so ahead of its time, and VF3 so ahead of anything on PC in 1996, it wasn't even funny.

Most PCs in 1998, even more than today, were greatly behind. Hell, today a hell of a lot of PCs are at or above Xbox One X specs; in 1998 even top-of-the-line PCs were merely level with the DC, let alone the majority, which were behind.

Crysis was a mess of a game, basically just a bunch of high-end effects thrown together with very little optimization and shitty cartoon animations, which is why Killzone 2's animations and ragdoll physics looked so impressive. Visually Crysis was great to look at; performance-wise it was a mess, and animation-wise a joke.

Basic timeline:

The Dreamcast was released on November 27, 1998 in Japan and September 9, 1999 in North America. The Dreamcast is equipped with the PowerVR Series 2.

The NVIDIA GeForce 256 was announced on August 31, 1999 and released on October 11, 1999 worldwide. The GeForce was more successful, since it enabled NVIDIA to fund further development at a higher pace compared to Imagination Technologies (PowerVR).

American economic strength backing NVIDIA outpaced the UK's Imagination Technologies. Qualcomm has since displaced Imagination Technologies as the premier handheld GPU vendor. Qualcomm is protected by the US government from hostile takeover, e.g. it blocked Broadcom's bid on national security grounds.

The GeForce 256 murdered the PowerVR Series 2, which was sold as the Neon 250 (Sep 29, 1999) on the PC market and was no better than a Voodoo 3 or RIVA TNT2.

The GeForce 2 Ultra was released on September 7, 2000.

The Radeon 256 (R100) was released in April 2000.

NVIDIA later ripped apart 3dfx and SGI in legal battles.

Intel (supported by Compaq) destroyed DEC in legal battles, which in the process destroyed the Alpha EV6 RISC CPU line.

A significant number of ex-DEC hardware employees joined AMD, while ex-DEC software employees joined Microsoft (Windows NT). AMD's K7 Athlon, with the EV6 bus, engaged in a GHz race war against Intel which nuked bystander alternative CPU instruction sets such as MIPS and Alpha (South Korea's Samsung attempted to continue Alpha's evolution) from the desktop market.

AMD's K8 Athlon 64 (x86-64) single-handedly removed both the IBM PowerPC 970 (64-bit desktop PowerPC) and Intel Itanium (IA-64) from the desktop market.

The PowerPC 970 was IBM's last attempt to control the PC market during the 64-bit desktop migration phase.

NVIDIA defended the PC's graphics/vector math leadership by releasing the GeForce 8800 GTX a few weeks ahead of the PS3's launch window. NVIDIA engaged in a marketing war against IBM's CELL.

The UK government didn't defend Imagination Technologies, and it was ultimately dismantled: its GPU IP was sold to a Chinese state-owned fund while its MIPS IP was sold to a US company. The US government intervened, blocked China and forced the MIPS IP to be sold back to a US company.

The UK government didn't defend ARM Ltd from foreign takeover either.

A resurgent AMD restarted the GHz race (e.g. 5 GHz) and core wars against Intel.

Since GPUs have become national-security-related parts, AMD and NVIDIA are protected by the US government. NVIDIA's GPUs are part of the F-35 Block 4 avionics upgrade, which is moving away from FPGA-based processing.

#14 SecretPolice
Member since 2007 • 44049 Posts

OG Xbox is the right answer.

#15 Calvincfb
Member since 2018 • 0 Posts

All consoles are PCs in a box.

#16 mrbojangles25
Member since 2005 • 58299 Posts

I mean, when you get right down to it, any console with an operating system could technically be called a "PC in a box", but that's sort of not true, because a) "in a box" is sort of redundant, and b) there's a lot more to a PC gaming experience than just some parts in a box and an OS.

@Pedro said:

PCs are in boxes, consoles are in boxes. UH? What's the point of this thread again?

To do what console fanboys usually do, and try to take an elevated experience they don't have and belittle it down to their petty level. Why? Because they just can't accept that PC offers a better experience than their console of choice.

Instead of saying "my experience is different and I am OK with that" they have to somehow say how the other experience is as bad as theirs.

@that_old_guy said:

@Archangel3371: no, they’re in cases. THERE’S A DIFFERENCE!!!

Unless they are in an actual box

#17  Edited By DaVillain  Moderator
Member since 2014 • 56075 Posts

@tormentos: I won't deny Crysis was indeed a mess. I did, however, get a head start after launch, and it was then that I built my first PC just to play it. I gotta admit, the DC was my all-time favorite Sega console; that console had great games.

@mrbojangles25 said:

@Pedro said:

PCs are in boxes, consoles are in boxes. UH? What's the point of this thread again?

To do what console fanboys usually do, and try to take an elevated experience they don't have and belittle it down to their petty level. Why? Because they just can't accept that PC offers a better experience than their console of choice.

Instead of saying "my experience is different and I am OK with that" they have to somehow say how the other experience is as bad as theirs.

@that_old_guy said:

@Archangel3371: no, they’re in cases. THERE’S A DIFFERENCE!!!

Unless they are in an actual box

Like my dad used to tell me: if it works, it ain't stupid ;)

#18 Ant_17
Member since 2005 • 13634 Posts

I think either the Dreamcast or the OG Xbox was the first to be called that.

#19  Edited By Pedro
Member since 2002 • 69425 Posts

@mrbojangles25 said:

@Pedro said:

PCs are in boxes, consoles are in boxes. UH? What's the point of this thread again?

To do what console fanboys usually do, and try to take an elevated experience they don't have and belittle it down to their petty level. Why? Because they just can't accept that PC offers a better experience than their console of choice.

Instead of saying "my experience is different and I am OK with that" they have to somehow say how the other experience is as bad as theirs.

Please! PC fanboys treat PC gaming as if it's a standard gaming experience when it's factually not. You are actually pushing the same false narrative. PC gaming is NOT a standard gaming experience and should not be compared in such a manner, because of this indisputable fact. PC gaming can be better or worse than console gaming. It has the potential to be both and should never be argued to the contrary. Just because console fanboys say dumb shit doesn't mean PC gamers have to resort to the same level of stupidity and then in the same breath use the Master Race claim (irony at its best).

So, let's not pretend that the stupidity isn't bidirectional.

#20 mrbojangles25
Member since 2005 • 58299 Posts

@Pedro said:
@mrbojangles25 said:
@Pedro said:

PCs are in boxes, consoles are in boxes. UH? What's the point of this thread again?

To do what console fanboys usually do, and try to take an elevated experience they don't have and belittle it down to their petty level. Why? Because they just can't accept that PC offers a better experience than their console of choice.

Instead of saying "my experience is different and I am OK with that" they have to somehow say how the other experience is as bad as theirs.

Please! PC fanboys treat PC gaming as if it's a standard gaming experience when it's factually not. You are actually pushing the same false narrative. PC gaming is NOT a standard gaming experience and should not be compared in such a manner, because of this indisputable fact. PC gaming can be better or worse than console gaming. It has the potential to be both and should never be argued to the contrary. Just because console fanboys say dumb shit doesn't mean PC gamers have to resort to the same level of stupidity and then in the same breath use the Master Race claim (irony at its best).

So, let's not pretend that the stupidity isn't bidirectional.

You're absolutely right, PC gaming is not the average ("standard") experience. PC gaming is the above-average experience.

I never said that console gaming is bad, I just said it is inferior, or "standard" as you said it.

BTW, the whole "master race" thing was always said in jest, as far as I know. You (the royal you, not just you specifically) need to stop citing that as a serious thing. I don't think anyone actually believes being a PC gamer makes them a better human. If they do, well, on behalf of the Master Race I want to apologize for their behavior; we superior beings need to act more diplomatic and appreciative of the plebs.

Back on track, to use the whole car analogy again: there is nothing wrong with driving and enjoying a Toyota Corolla, but the second you start thinking it's a better experience than driving a Chevy Corvette (or any other car that is better than a Toyota), you are a dipshit. That's a fact. The second you start arguing it's a better vehicle because it gets better mileage and costs less, and somehow those two factors are all that matter, you've lost the debate.

That's ultimately what it comes down to with PC vs console: cost and exclusives, two bullshit points of debate. I'd be perfectly happy to say "Hey man, I'm happy you enjoy your PS4. Yeah I agree, God of War is a great game! It's pretty awesome you got all that for 400 bucks", but it's never that civil. It's always "HAHAHAHA PC losers don't have exclusives" or "At least I didn't spend 10,000 dollars on something that plays 2D indie games". It's never anything else. Never any arguments of substance. Can't remember the last time I had to stop on these forums and think "hmmm, that's a good point" when a console supporter was arguing.

#21 Pedro
Member since 2002 • 69425 Posts

@mrbojangles25 said:

You're absolutely right, PC gaming is not the average ("standard") experience. PC gaming is the above-average experience.

I never said that console gaming is bad, I just said it is inferior, or "standard" as you said it.

BTW, the whole "master race" thing was always said in jest, as far as I know. You (the royal you, not just you specifically) need to stop citing that as a serious thing. I don't think anyone actually believes being a PC gamer makes them a better human. If they do, well, on behalf of the Master Race I want to apologize for their behavior; we superior beings need to act more diplomatic and appreciative of the plebs.

Back on track, to use the whole car analogy again: there is nothing wrong with driving and enjoying a Toyota Corolla, but the second you start thinking it's a better experience than driving a Chevy Corvette (or any other car that is better than a Toyota), you are a dipshit. That's a fact. The second you start arguing it's a better vehicle because it gets better mileage and costs less, and somehow those two factors are all that matter, you've lost the debate.

That's ultimately what it comes down to with PC vs console: cost and exclusives, two bullshit points of debate. I'd be perfectly happy to say "Hey man, I'm happy you enjoy your PS4. Yeah I agree, God of War is a great game! It's pretty awesome you got all that for 400 bucks", but it's never that civil. It's always "HAHAHAHA PC losers don't have exclusives" or "At least I didn't spend 10,000 dollars on something that plays 2D indie games". It's never anything else. Never any arguments of substance. Can't remember the last time I had to stop on these forums and think "hmmm, that's a good point" when a console supporter was arguing.

The problem I am seeing with your response is the absence of the same stupidity coming from PC fanboys. In the same scenario there would be PC fanboys complaining about the resolution, the framerate and/or the lack of mods etc. on consoles. My point about PC gaming not being standard is that the specifications of the systems, and thus the experience, are not the same for each PC gamer. Some experiences will be worse than console gamers' and some will be better, and that is all based on individual expectations independent of specs.

I am all for people enjoying whatever hardware they prefer, so the foolishness that most indulge in on this forum is just that: foolishness.

Avatar image for that_old_guy
That_Old_Guy

1233

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 5

#22 That_Old_Guy
Member since 2018 • 1233 Posts

@mrbojangles25: lol, dude, that’s such a fire hazard.

Avatar image for rzxv04
rzxv04

2578

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 10

#23 rzxv04
Member since 2018 • 2578 Posts

@mrbojangles25 said:

I mean, when you get right down to it, any console with an operating system could technically be called a "PC in a box", but that's sort of not true, because a) "in a box" is sort of redundant, and b) there's a lot more to a PC gaming experience than just some parts in a box and an OS.

@Pedro said:

PCs are in boxes, consoles are in boxes. UH? What's the point of this thread again?

To do what console fanboys usually do, and try to take an elevated experience they don't have and belittle it down to their petty level. Why? Because they just can't accept that PC offers a better experience than their console of choice.

Instead of saying "my experience is different and I am OK with that" they have to somehow say how the other experience is as bad as theirs.

@that_old_guy said:

@Archangel3371: no, they’re in cases. THERE’S A DIFFERENCE!!!

Unless they are in an actual box

Hahaha! Looks adorable. Better than when building a new one on top of mobo boxes for early boot tests.

Avatar image for mrbojangles25
mrbojangles25

58299

Forum Posts

0

Wiki Points

0

Followers

Reviews: 11

User Lists: 0

#24 mrbojangles25
Member since 2005 • 58299 Posts

@that_old_guy said:

@mrbojangles25: lol, dude, that’s such a fire hazard.

Nah man inside that cheap box is a 10,000 dollar liquid nitrogen cooling system :P

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#26  Edited By ronvalencia
Member since 2008 • 29612 Posts

Debunking https://segaretro.org/Sega_Dreamcast/Hardware_comparison claims

https://www.imgtec.com/blog/powervr-25-the-developer-technology-team-cupboards-laid-bare/

The famous Dreamcast console (8MB of memory) used the CLX2. This is the same graphics chip as in the Neon 250 and the NAOMI arcade boards

Avatar image for henrythefifth
henrythefifth

2502

Forum Posts

0

Wiki Points

0

Followers

Reviews: 9

User Lists: 0

#27 henrythefifth
Member since 2016 • 2502 Posts

The Dreamcast was indeed hailed as a PC in a box in various articles. A Windows-based OS, DirectX support and VGA out for your PC monitor. And a built-in modem! PC in a box!

The PS2 was also thought of as a PC in a box, so much so that the USA wanted to restrict its sales to various countries. The US government thought the powerful PC hardware inside the PS2 could be used for nefarious purposes in those countries, you see, and so wanted to prevent these undesirable elements from getting the super console into their hands! No, true story!

Avatar image for Gaming-Planet
Gaming-Planet

21064

Forum Posts

0

Wiki Points

0

Followers

Reviews: 14

User Lists: 0

#28 Gaming-Planet
Member since 2008 • 21064 Posts

I remember using the phrase "arcade in a box" but never "PC in a box" until the 8th gen.

Avatar image for Jag85
Jag85

19543

Forum Posts

0

Wiki Points

0

Followers

Reviews: 219

User Lists: 0

#29  Edited By Jag85
Member since 2005 • 19543 Posts

@ronvalencia said:

Debunking https://segaretro.org/Sega_Dreamcast/Hardware_comparison claims

https://www.imgtec.com/blog/powervr-25-the-developer-technology-team-cupboards-laid-bare/

The famous Dreamcast console (8MB of memory) used the CLX2. This is the same graphics chip as in the Neon 250 and the NAOMI arcade boards

That blog article is wrong. The Dreamcast chip is the CLX2. The Neon 250 chip is the PMX1. They're two different chips, but are part of the same PowerVR2 series, as explained in another article from the same site:

https://www.imgtec.com/blog/powervr-at-25-the-story-of-a-graphics-revolution/

The Neon 250 was a downgrade compared to the CLX2, and it was poorly optimized for the PC. It does not at all represent what the Dreamcast was capable of. The Dreamcast's CLX2 was superior, and optimized for the Dreamcast. The Dreamcast's Hitachi SH-4 CPU also had powerful geometry processing capabilities for its time. Remember that the Dreamcast was an "arcade in a box", with its specifications almost approaching the Sega Naomi arcade system which cost thousands.

The Dreamcast was more powerful than any PC up until the GeForce 256 in late 1999. And even then, it's debatable which was more powerful between the Dreamcast and the GeForce 256. As far as the actual games are concerned, there weren't any PC games from 1999 pushing as many polygons as what top-tier Dreamcast games were pushing at the time.

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#30  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Jag85 said:
@ronvalencia said:

Debunking https://segaretro.org/Sega_Dreamcast/Hardware_comparison claims

https://www.imgtec.com/blog/powervr-25-the-developer-technology-team-cupboards-laid-bare/

The famous Dreamcast console (8MB of memory) used the CLX2. This is the same graphics chip as in the Neon 250 and the NAOMI arcade boards

That blog article is wrong. The Dreamcast chip is the CLX2. The Neon 250 chip is the PMX1. They're two different chips, but are part of the same PowerVR2 series, as explained in another article from the same site:

https://www.imgtec.com/blog/powervr-25-the-developer-technology-team-cupboards-laid-bare/

The Neon 250 was a downgrade compared to the CLX2, and it was poorly optimized for the PC. It does not at all represent what the Dreamcast was capable of. The Dreamcast's CLX2 was superior, and optimized for the Dreamcast. The Dreamcast's Hitachi SH-4 CPU also had powerful geometry processing capabilities for its time. Remember that the Dreamcast was an "arcade in a box", with its specifications almost approaching the Sega Naomi arcade system which cost thousands.

The Dreamcast was more powerful than any PC up until the GeForce 256 in late 1999. And even then, it's debatable which was more powerful between the Dreamcast and the GeForce 256. As far as the actual games are concerned, there weren't any PC games from 1999 pushing as many polygons as what top-tier Dreamcast games were pushing at the time.

Who are you? I directly quoted imgtec's claim

The famous Dreamcast console (8MB of memory) used the CLX2. This is the same graphics chip as in the Neon 250 and the NAOMI arcade boards

https://web.archive.org/web/20000823204755/http://computer.org/micro/articles/dreamcast_2.htm

Dreamcast has PowerVR2 (DC)

The geometry comparison is flawed since it sets theoretical figures against benchmarks.

In theoretical terms, the GeForce 256 has "more than 10 million polygons per second" (NVIDIA's claim; what it delivers when actually programmed is another matter).

A Pentium III Coppermine at 767 MHz has 984 MFLOPS with dot product benchmarked (not theoretical). http://browser.geekbench.com/geekbench2/843962

segaretro.org claimed a Pentium III at 800 MHz has 720 MFLOPS, which is wrong.

https://www.eurogamer.net/articles/neon250

Videologic have quoted a fill rate of between 200 and 500 MPixels/sec, which is not strictly true. The Neon 250 has a base fill rate of just 125MPixels/sec, as it can render 1 pixel per clock and with a clock speed of 125MHz this becomes the base fill rate.

But as soon as you introduce overdraw in a scene, the "effective" fill rate increases. So with an average overdraw of 2 the fill rate is effectively 250MPixels/sec. If overdraw is an average of 4 then fill rate effectively reaches 500MPixels/sec.

This may seem like a cheat, but it isn't really bearing in mind that any standard chip like a TNT2 will have to render a lot of information that is never seen. In order to keep the framerate up, cards like the TNT2 must increase their fill rate.

The Neon 250 on the other hand will only draw what is visible, and with most games containing significant overdraw, it should have no problem matching other chips on the market.

segaretro.org's numbers for the Neon 250 are not raw hardware performance.
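
To put the quoted overdraw arithmetic in concrete terms, here's a minimal sketch (my own illustration in Python, using only the figures from the Eurogamer quote):

    # Effective fill rate of the tile-based Neon 250, per the quoted explanation:
    # the chip only shades visible pixels, so its base rate is "worth"
    # base * overdraw compared to an immediate-mode card like the TNT2.

    BASE_FILL_MPIX = 125  # 1 pixel per clock at 125 MHz

    def effective_fill_mpix(overdraw: float) -> float:
        return BASE_FILL_MPIX * overdraw

    print(effective_fill_mpix(2))  # 250.0 MPixels/sec equivalent
    print(effective_fill_mpix(4))  # 500.0 MPixels/sec equivalent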

Dreamcast did NOT deliver superior Quake 3 results over the PC.

#31 Jag85
Member since 2005 • 19543 Posts
@henrythefifth said:

The Dreamcast was indeed hailed as a PC in a box in various articles. A Windows-based OS, DirectX support and VGA out for your PC monitor. And a built-in modem! PC in a box!

The PS2 was also thought of as a PC in a box, so much so that the USA wanted to restrict its sales to various countries. The US government thought the powerful PC hardware inside the PS2 could be used for nefarious purposes in those countries, you see, and so wanted to prevent these undesirable elements from getting the super console into their hands! No, true story!

The Dreamcast actually had two OSes, a native Sega OS and Windows CE. Windows CE was poorly optimized for the Dreamcast, so most games just used the native Sega OS. But otherwise, the built-in modem and VGA output (along with keyboard and mouse support) definitely made it like a "PC in a box". I would consider the Dreamcast to be more of an "arcade in a box", though, because of how similar its specifications were to the Sega Naomi arcade system, which cost thousands, yet Sega was able to squeeze it down to a $200 box... but with some cutbacks, and selling the hardware at a loss.

I don't remember anyone referring to the PS2 as some kind of "PC in a box" back then. People were calling the PS2 a "supercomputer" at the time, not "PC" hardware. Supercomputers and PCs are two entirely different things. But of course, as we now know, the "supercomputer" comparisons were overblown. That was just Sony's hype machine greatly exaggerating the PS2's capabilities, which in turn led to panic from politicians.

Anyway, the right answer was in your OP. The PC Engine (TurboGrafx-16) was the first "PC in a box" to be released. It was intended to be a console adaptation of NEC's PC-88 and PC-98. But it was definitely not a "poor man's Master System" as it was significantly more powerful than a Master System. But what really made the PCE a "PC in a box" was how it went through a number of hardware upgrades during its lifespan, including several CD-ROM and RAM expansions. By the end of its lifespan, the PCE Arcade CD-ROM was a 2D powerhouse, with some near-arcade-quality ports of Neo Geo games.

#32  Edited By Jag85
Member since 2005 • 19543 Posts

@ronvalencia said:

I directly quoted imgtec's claim

Again

The famous Dreamcast console (8MB of memory) used the CLX2. This is the same graphics chip as in the Neon 250 and the NAOMI arcade boards

The geometry comparison is flawed since it sets theoretical figures against benchmarks.

In theoretical terms, the GeForce 256 has "more than 10 million polygons per second" (NVIDIA's claim; what it delivers when actually programmed is another matter).

A Pentium III Coppermine at 800 MHz has 984 MFLOPS with dot product benchmarked. http://browser.geekbench.com/geekbench2/843962

segaretro.org claimed a Pentium III at 800 MHz has 750 MFLOPS, which is wrong.

I posted the wrong link earlier. Here's the correct link:

https://www.imgtec.com/blog/powervr-at-25-the-story-of-a-graphics-revolution/

PowerVR Series2, also developed with NEC, was integrated into Sega’s Dreamcast console, which was released in Japan in November 1998, as well as in Sega’s Naomi arcade system. Naomi games found in arcades at the time included House of the Dead 2 from Sega and Power Stone from Capcom. By 1999, NEC had shipped over one million PowerVR 2DC chips to Sega for use in the Dreamcast and Naomi systems. This was a major coup for PowerVR as it had been in competition with 3Dfx for the Sega slot. Sega went on to ship over 10 million Dreamcasts.

There were also PowerVR Series2 products for the PC (the NEC Neon 250 graphics accelerator) and other arcade and gambling systems.

The PMX1 – a prototype of what later became the Neon 250

As you can see here, imgtec states that the Neon 250 was using the PMX1 chip, not the Dreamcast's CLX2 chip. They are two different chips, but both are part of the PowerVR2 series.

According to IEEE, the Dreamcast's Hitachi SH-4 could calculate the geometry and lighting of more than 10 million polygons/sec. But on the rendering side, the CLX2 chip was limited to 7 million polygons/sec.

Even if an 800 MHz Pentium III could process 984 MFLOPS, that's still a lot less than the Dreamcast's Hitachi SH-4, which could process 1.4 GFLOPS.

As for the GeForce 256, its built-in geometry processor was apparently no more powerful than an 800 MHz Pentium III. Which would mean the Dreamcast's SH-4 was a more powerful geometry processor than both the 800 MHz Pentium III and the GeForce 256.
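
For what it's worth, both headline figures are consistent with a simple peak = FLOPs-per-cycle x clock derivation. This is an assumption about how the numbers were produced, not something either source spells out:

    # Peak FLOPS = FLOPs issued per cycle * clock rate.
    def peak_gflops(flops_per_cycle: float, clock_hz: float) -> float:
        return flops_per_cycle * clock_hz / 1e9

    # SH-4's FIPR instruction is a 4D dot product: 4 multiplies + 3 adds = 7 FLOPs.
    # One per cycle at 200 MHz gives the quoted 1.4 GFLOPS.
    print(peak_gflops(7, 200e6))  # 1.4

    # Intel's 1.6 GFLOPS export figure for the PIII 800 works out to 2 FLOPs/cycle.
    print(peak_gflops(2, 800e6))  # 1.6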

#33  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Jag85 said:
@ronvalencia said:

I directly quoted imgtec's claim

Again

The famous Dreamcast console (8MB of memory) used the CLX2. This is the same graphics chip as in the Neon 250 and the NAOMI arcade boards

The geometry comparison is flawed since it sets theoretical figures against benchmarks.

In theoretical terms, the GeForce 256 has "more than 10 million polygons per second" (NVIDIA's claim; what it delivers when actually programmed is another matter).

A Pentium III Coppermine at 800 MHz has 984 MFLOPS with dot product benchmarked. http://browser.geekbench.com/geekbench2/843962

segaretro.org claimed a Pentium III at 800 MHz has 750 MFLOPS, which is wrong.

I posted the wrong link earlier. Here's the correct link:

https://www.imgtec.com/blog/powervr-at-25-the-story-of-a-graphics-revolution/

PowerVR Series2, also developed with NEC, was integrated into Sega’s Dreamcast console, which was released in Japan in November 1998, as well as in Sega’s Naomi arcade system. Naomi games found in arcades at the time included House of the Dead 2 from Sega and Power Stone from Capcom. By 1999, NEC had shipped over one million PowerVR 2DC chips to Sega for use in the Dreamcast and Naomi systems. This was a major coup for PowerVR as it had been in competition with 3Dfx for the Sega slot. Sega went on to ship over 10 million Dreamcasts.

There were also PowerVR Series2 products for the PC (the NEC Neon 250 graphics accelerator) and other arcade and gambling systems.

The PMX1 – a prototype of what later became the Neon 250

As you can see here, imgtec states that the Neon 250 was using the PMX1 chip, not the Dreamcast's CLX2 chip. They are two different chips, but both are part of the PowerVR2 series.

According to IEEE, the Dreamcast's Hitachi SH-4 could calculate the geometry and lighting of more than 10 million polygons/sec. But on the rendering side, the CLX2 chip was limited to 7 million polygons/sec.

Even if an 800 MHz Pentium III could process 984 MFLOPS, that's still a lot less than the Dreamcast's Hitachi SH-4, which could process 1.4 GFLOPS.

As for the GeForce 256, its built-in geometry processor was apparently no more powerful than an 800 MHz Pentium III. Which would mean the Dreamcast's SH-4 was a more powerful geometry processor than both the 800 MHz Pentium III and the GeForce 256.

That's comparing theoretical figures against a practical benchmark, e.g. a Pentium III at 767 MHz scoring 984 MFLOPS.

The PS4's 8-core Jaguar at 1.6 GHz, with 110 GFLOPS of effectiveness, almost matched CELL's 200 GFLOPS of effectiveness. Hint: effective IPC with the SPU is 0.5.

Jaguar can deliver an IPC greater than 1 most of the time when compared to CELL's SPU.

IBM is bullshitting with their theoretical GFLOPS.
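
Reading that hint as arithmetic (my interpretation of the claim, not anyone's published math):

    # If an SPU's effective IPC is ~0.5, CELL's usable throughput is roughly
    # half its theoretical peak, landing near Jaguar's quoted effective figure.
    CELL_THEORETICAL_GFLOPS = 200
    SPU_EFFECTIVE_IPC = 0.5
    print(CELL_THEORETICAL_GFLOPS * SPU_EFFECTIVE_IPC)  # 100.0, vs Jaguar's ~110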

The Dreamcast didn't deliver superior Quake 3 results over the PC.

For the PC, CPU software early-Z culling can mitigate GPU overdraw, i.e. don't feed unnecessary triangles to the GPU.

https://www.anandtech.com/show/536/3

Radeon 256 can process 30 million triangles per second.

https://www.alternatewars.com/BBOW/Computing/Computing_Power.htm

Intel Pentium III 500 (February 1999)

Transistor Count: 28 million

Clock Speed: 500 MHz

Process Scale: 180 nm

Thermal Design Power: 16 Watts

CTP Benchmark: 1,166 MTOPS

Benchmarks: 1 GFLOP

https://forum.beyond3d.com/threads/115-2-gflops-xbox360-triple-core-cpu.18284/

Xbox1's Pentium III (with 128 KB L2 cache) at 733 MHz has ~3 GFLOPS theoretical.

On theoretical vs theoretical, a Pentium III at 800 MHz beats the Hitachi SH-4.

https://www.intel.com/content/www/us/en/support/articles/000007250/processors.html#2

Export Compliance Metrics for Intel® Itanium® and Pentium® Processors

Intel Pentium® III processor at 800 Mhz has 1.60 GFLOPS

#34  Edited By Jag85
Member since 2005 • 19543 Posts

@ronvalencia:

Sorry, but a forum is not a reliable source. But I'll acknowledge your other source (Intel) claiming 1.6 GFLOPS. However, that figure just refers to simple multiplies and adds, not matrix transformations. Whereas the SH-4's 1.4 GFLOPS figure is referring to matrix transformations.

  • Pentium III takes 31 cycles to perform a 4x4 matrix transformation. That's equivalent to 25.8 million vertices/sec (8.6 million triangles/sec) at 800 MHz.
  • Hitachi SH-4 takes just 4 cycles to perform a 4x4 matrix transformation. That's equivalent to 50 million vertices/sec (16 million triangles/sec) at 200 MHz.

So the point still stands that the Dreamcast's SH-4 is a more powerful geometry processor, as it can perform faster matrix transformations. After all, the SH-4 was mainly designed to be a geometry processor, whereas the Pentium III was mainly designed to be a general-purpose CPU.
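
A quick check of those cycle counts (a sketch assuming ideal throughput with no memory stalls):

    # vertices/sec = clock rate / cycles per 4x4 matrix transform
    def verts_per_sec(clock_hz: float, cycles_per_vertex: float) -> float:
        return clock_hz / cycles_per_vertex

    print(verts_per_sec(800e6, 31) / 1e6)  # ~25.8M verts/sec (PIII at 800 MHz)
    print(verts_per_sec(200e6, 4) / 1e6)   # 50.0M verts/sec (SH-4 at 200 MHz)
    # At 3 vertices per triangle, that's ~8.6M vs ~16.7M triangles/sec.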

Quake III is a port, so it's not a good representation of what the Dreamcast is capable of. A better representation would be games originally made for the Dreamcast. There were Dreamcast exclusives pushing far more polygons than Quake III.

The Dreamcast's tiled rendering is much more effective at Z-culling and avoiding overdraw. It only renders the triangles visible on screen, without wasting fillrate or bandwidth on triangles not visible on screen.

As for the Radeon 256, that released in 2000. We're talking about 1998-1999 here.

#35  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Jag85 said:

@ronvalencia:

Sorry, but a forum is not a reliable source. But I'll acknowledge your other source (Intel) claiming 1.6 GFLOPS. However, that figure just refers to simple multiplies and adds, not matrix transformations. Whereas the SH-4's 1.4 GFLOPS figure is referring to matrix transformations.

  • Pentium III takes 31 cycles to perform a 4x4 matrix transformation. That's equivalent to 25.8 million vertices/sec (8.6 million triangles/sec) at 800 MHz.
  • Hitachi SH-4 takes just 4 cycles to perform a 4x4 matrix transformation. That's equivalent to 50 million vertices/sec (16 million triangles/sec) at 200 MHz.

So the point still stands that the Dreamcast's SH-4 is a more powerful geometry processor, as it can perform faster matrix transformations. After all, the SH-4 was mainly designed to be a geometry processor, whereas the Pentium III was mainly designed to be a general-purpose CPU.

Quake III is a port, so it's not a good representation of what the Dreamcast is capable of. A better representation would be games originally made for the Dreamcast. There were Dreamcast exclusives pushing far more polygons than Quake III.

The Dreamcast's tiled rendering is much more effective at Z-culling and avoiding overdraw. It only renders the triangles visible on screen, without wasting fillrate or bandwidth on triangles not visible on screen.

As for the Radeon 256, that released in 2000. We're talking about 1998-1999 here.

FLOPS refers to floating point operations per second.

Matrix math is made up of sequences of floating point operations.

The SH-4 is a 32-bit RISC CPU

SH-4 features include:

  • FPU with four floating point multipliers, supporting 32-bit single precision and 64-bit double precision floats
  • 4D floating point dot-product operation
  • 128-bit floating point bus allowing 3.2 GB/sec transfer rate from the data cache
  • 64-bit external data bus with 32-bit memory addressing, allowing a maximum of 4 GB addressable memory with a transfer rate of 800 MB/sec
  • Built-in interrupt, DMA, and power management controllers

There are bottlenecks built into the SH-4: the 3.2 GB/s figure is 200 MHz x 128 bits, while the Pentium III's 128-bit data cache bus at 800 MHz yields 12.8 GB/s.

With a 128-bit data cache bus at 800 MHz, the Pentium III transfers data 4x quicker than the SH-4's 128-bit bus at 200 MHz.

A 32-bit 4x4 matrix needs four clock cycles to feed, and a Pentium III at 800 MHz gets in four 128-bit transfers for each one of the SH-4's.
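
The bus comparison in numbers (theoretical peaks only; both buses are 128-bit and the clocks differ 4x):

    # Peak bus bandwidth = width in bytes * clock rate.
    def bus_gb_per_sec(width_bits: int, clock_hz: float) -> float:
        return width_bits / 8 * clock_hz / 1e9

    print(bus_gb_per_sec(128, 200e6))  # 3.2  GB/sec (SH-4 data cache bus)
    print(bus_gb_per_sec(128, 800e6))  # 12.8 GB/sec (Pentium III at 800 MHz)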

"Pentium III takes 31 cycles to perform a 4x4 matrix transformation" argument is not correct since matrix transformation algorithm is dependant programmer's coding ability.

http://www.cortstratton.org/articles/OptimizingForSSE.php

Pentium III SSE optimizations matrix transformation equivalent reaches down to 17 cycles/vec. Pentium III at 800 Mhz, it's nearly equivalent to 4 cycles at 200 Mhz. Higher clock speed and programming tricks win the day for X86 CPU.

Year 1999, K7 Athlon 3DNow version is down to 16 cycles.

https://en.wikipedia.org/wiki/4D_vector Intel SSE4 and PowerPC VMX-128 gained 4D vector instructions.

#36  Edited By Jag85
Member since 2005 • 19543 Posts

@ronvalencia:

It's not really a bottleneck. A triangle is 40 bytes. With an 800 MB/s external bandwidth, you could transfer up to 20M triangles/sec. And it's worth noting that the PIII 800 also has an external FSB bandwidth of 800 MB/s (64-bit, 100 MHz), so there's no difference between the SH-4 and PIII 800 in that regard.
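
Spelling that bandwidth arithmetic out (a sketch using only the figures above):

    # Triangles/sec an 800 MB/s bus can feed at 40 bytes per triangle.
    TRIANGLE_BYTES = 40
    BUS_BYTES_PER_SEC = 800e6
    print(BUS_BYTES_PER_SEC / TRIANGLE_BYTES / 1e6)  # 20.0M triangles/sec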

That shortcut algorithm still makes the PIII 800 slightly slower at matrix transformations than the SH-4. The PIII's 17 cycles at 800 MHz is equivalent to 47M vertices/sec. The SH-4's four cycles at 200 MHz is equivalent to 50M vertices/sec.

As for the Athlon K7, its highest clock rate was 700 MHz. So the K7's 16 cycles at 700 MHz is equivalent to 43M vertices/sec. That's lower than both the PIII 800 and the SH-4.

When it comes to the complete transformation, projection and lighting of polygons, the SH-4 beats them by a wide margin. The SH-4 can transform+project+light a vector in 14 cycles, equivalent to 14M triangles/sec (with benchmarks doing more than 10M triangles/sec). In contrast, the PIII 750 does just 6.7M triangles/sec (equivalent to 7M triangles/sec at 800 MHz), the Athlon 700 does only 3.6M triangles/sec, and even the GeForce 256's T&L unit only does 4.4M triangles/sec.

The point still stands that the Dreamcast's Hitachi SH-4 CPU is a more powerful geometry processor than any CPU or GPU T&L unit on PC back in 1999, beating the PIII 800, the Athlon K7, and the GF256 T&L unit.

P.S. PowerPC VMX128 debuted in 2005 and Intel SSE4 debuted in 2006, so they're well beyond the scope of this discussion. The first CPU to have 4D vector instructions was the Dreamcast's Hitachi SH-4. The 4D vector instructions helped make it highly efficient for geometry calculations.

#37 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

Basic timeline:

The Dreamcast was released on November 27, 1998 in Japan and September 9, 1999 in North America. The Dreamcast is equipped with the PowerVR Series 2.

The NVIDIA GeForce 256 was announced on August 31, 1999 and released on October 11, 1999 worldwide. The GeForce was more successful, since it enabled NVIDIA to fund further development at a higher pace compared to Imagination Technologies (PowerVR).

American economic strength backing NVIDIA outpaced the UK's Imagination Technologies. Qualcomm has since displaced Imagination Technologies as the premier handheld GPU vendor. Qualcomm is protected by the US government from hostile takeover, e.g. it blocked Broadcom's bid on national security grounds.

The GeForce 256 murdered the PowerVR Series 2, which was sold as the Neon 250 (Sep 29, 1999) on the PC market and was no better than a Voodoo 3 or RIVA TNT2.

The GeForce 2 Ultra was released on September 7, 2000.

The Radeon 256 (R100) was released in April 2000.

NVIDIA later ripped apart 3dfx and SGI in legal battles.

Intel (supported by Compaq) destroyed DEC in legal battles, which in the process destroyed the Alpha EV6 RISC CPU line.

A significant number of ex-DEC hardware employees joined AMD, while ex-DEC software employees joined Microsoft (Windows NT). AMD's K7 Athlon, with the EV6 bus, engaged in a GHz race war against Intel which nuked bystander alternative CPU instruction sets such as MIPS and Alpha (South Korea's Samsung attempted to continue Alpha's evolution) from the desktop market.

AMD's K8 Athlon 64 (x86-64) single-handedly removed both the IBM PowerPC 970 (64-bit desktop PowerPC) and Intel Itanium (IA-64) from the desktop market.

The PowerPC 970 was IBM's last attempt to control the PC market during the 64-bit desktop migration phase.

NVIDIA defended the PC's graphics/vector math leadership by releasing the GeForce 8800 GTX a few weeks ahead of the PS3's launch window. NVIDIA engaged in a marketing war against IBM's CELL.

The UK government didn't defend Imagination Technologies, and it was ultimately dismantled: its GPU IP was sold to a Chinese state-owned fund while its MIPS IP was sold to a US company. The US government intervened, blocked China and forced the MIPS IP to be sold back to a US company.

The UK government didn't defend ARM Ltd from foreign takeover either.

A resurgent AMD restarted the GHz race (e.g. 5 GHz) and core wars against Intel.

Since GPUs have become national-security-related parts, AMD and NVIDIA are protected by the US government. NVIDIA's GPUs are part of the F-35 Block 4 avionics upgrade, which is moving away from FPGA-based processing.

So I was right: PCs didn't hold a candle to the Dreamcast in 1998. The GeForce 256 came a year later and was more expensive than the complete Dreamcast, at a time when a damn Celeron tower was more than $700 standalone.

And most people didn't have one; in fact, almost nobody on PC had a GeForce 256 in 1999 or 2000.

So yeah, he was wrong and I was right.

The argument isn't about what destroyed the DC.

@ronvalencia said:

That's comparing theoretical figures against a practical benchmark, e.g. a Pentium III at 767 MHz scoring 984 MFLOPS.

The PS4's 8-core Jaguar at 1.6 GHz, with 110 GFLOPS of effectiveness, almost matched CELL's 200 GFLOPS of effectiveness. Hint: effective IPC with the SPU is 0.5.

Jaguar can deliver an IPC greater than 1 most of the time when compared to CELL's SPU.

IBM is bullshitting with their theoretical GFLOPS.

The Dreamcast didn't deliver superior Quake 3 results over the PC.

For the PC, CPU software early-Z culling can mitigate GPU overdraw, i.e. don't feed unnecessary triangles to the GPU.

https://www.anandtech.com/show/536/3

The Radeon 256 can process 30 million triangles per second.

1-

Hahahaha NO... Look at Cell, a 2001 CPU design, beating Jaguar with just 5 SPEs.

But what does the PS4 have to do with anything here?

Great, one ****ed up PC port means all Dreamcast games were like that. Funny, because Dreamcast games before the PS2 arrived basically set the bar for graphics; this isn't even debatable, it was like that.

And again, pretty much NO ONE owned a damn GeForce 256 in 1999, so again most PCs were not even close to the Dreamcast, by a landslide.

By the way, 30 million triangles equaling 10 million polygons is the same shit, as 1 polygon requires 3 triangles, so if your point was to make the GeForce 256 seem stronger, it's a lost cause.

@ronvalencia said:

FLOPS refers to floating point operations per second.

The SH-4 is a 32-bit RISC CPU.

SH-4 features include:

  • FPU with four floating point multipliers, supporting 32-bit single precision and 64-bit double precision floats
  • 4D floating point dot-product operation
  • 128-bit floating point bus allowing 3.2 GB/sec transfer rate from the data cache
  • 64-bit external data bus with 32-bit memory addressing, allowing a maximum of 4 GB addressable memory with a transfer rate of 800 MB/sec
  • Built-in interrupt, DMA, and power management controllers

There are bottlenecks built into the SH-4.

The "Pentium III takes 31 cycles to perform a 4x4 matrix transformation" argument is not correct, since the matrix transformation algorithm depends on the programmer's coding ability.

http://www.cortstratton.org/articles/OptimizingForSSE.php

With SSE optimizations, the Pentium III's matrix transformation reaches down to 17 cycles/vec. For a Pentium III at 800 MHz, that's nearly equivalent to 4 cycles at 200 MHz. Higher clock speed and programming tricks win the day for the x86 CPU.

Stop it, man, I paid $800 for a damn HP Celeron 266 MHz in 1998; a damn Pentium 3 was out of this world.

I see that you are doing a lot of mixing here.

First, the Pentium 3 500 MHz wasn't introduced at the same time as the 800 MHz version; Coppermine came at the end of 1999 and was super expensive, man.

In fact, by August 1999 Intel had cut Katmai prices by 40%; before the cut, a Pentium 3 550 MHz was a whopping $658 in quantities of 1,000, in other words the price for retailers, not consumers. So you can imagine how expensive the 800 MHz Coppermine was when it arrived later, between December and May 2000; hell, the PS2 was already out when all the Coppermine CPUs finally came out.

But let's be real here: an 800 MHz Pentium 3 has freaking 4 times the clock speed of an SH-4, so for the task at hand the SH-4 beat the crap out of Intel clock for clock without even trying.

It's nice to see you hide behind a CPU that came out almost in 2000; the 800 MHz Coppermine came in December 1999, when the DC was a year old already. Oh, and it was like $700 in quantities of 1,000, which means over $800 in stores for the average consumer. That's the price of 4 damn Dreamcasts, and you're still missing a complete PC, especially the other costly part, the GPU.

The GeForce 256 was like $300 in 1999, so yeah, we are talking over $1,000 in just CPU and GPU to beat a $199 console, and you're still missing the HDD, power supply, case, mouse, keyboard, speakers and memory.

You are comparing apples to oranges, as always.

#38  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
So I was right: PCs didn't hold a candle to the Dreamcast in 1998. The GeForce 256 came a year later and was more expensive than the complete Dreamcast, at a time when a damn Celeron tower was more than $700 standalone.

And most people didn't have one; in fact, almost nobody on PC had a GeForce 256 in 1999 or 2000.

So yeah, he was wrong and I was right.

The argument isn't about what destroyed the DC.

1-

Hahahaha NO... Look at Cell, a 2001 CPU design, beating Jaguar with just 5 SPEs.

But what does the PS4 have to do with anything here?

Great, one ****ed up PC port means all Dreamcast games were like that. Funny, because Dreamcast games before the PS2 arrived basically set the bar for graphics; this isn't even debatable, it was like that.

And again, pretty much NO ONE owned a damn GeForce 256 in 1999, so again most PCs were not even close to the Dreamcast, by a landslide.

By the way, 30 million triangles equaling 10 million polygons is the same shit, as 1 polygon requires 3 triangles, so if your point was to make the GeForce 256 seem stronger, it's a lost cause.

@ronvalencia said:

FLOPS refers to floating point operation per second.

The SH-4 is a 32-bit RISC CPU

SH-4 features include:

  • FPU with four floating point multipliers, supporting 32-bit single precision and 64-bit double precision floats
  • 4D floating point dot-product operation
  • 128-bit floating point bus allowing 3.2 GB/sec transfer rate from the data cache
  • 64-bit external data bus with 32-bit memory addressing, allowing a maximum of 4 GB addressable memory with a transfer rate of 800 MB/sec
  • Built-in interrupt, DMA, and power management controllers

Bottlenecks built into SH-4

"Pentium III takes 31 cycles to perform a 4x4 matrix transformation" argument is not correct since matrix transformation algorithm is dependant programmer's coding ability.

http://www.cortstratton.org/articles/OptimizingForSSE.php

Pentium III SSE optimizations matrix transformation equivalent reaches down to 17 cycles/vec. Pentium III at 800 Mhz, it's nearly equivalent to 4 cycles at 200 Mhz. Higher clock speed and programming tricks win the day for X86 CPU.
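For illustration, a minimal sketch of the same broadcast-and-accumulate idea using standard SSE1 intrinsics (my own example, not the article's hand-tuned assembly; the function name, column-major layout and alignment are assumptions):

```c
#include <xmmintrin.h>  /* SSE1 intrinsics, available on the Pentium III */

/* Transform one 4D vector by a column-major 4x4 matrix.
   mat points to 16 floats (four columns); all pointers 16-byte aligned. */
void transform_vec4_sse(const float *mat, const float *vec, float *out)
{
    __m128 c0 = _mm_load_ps(mat + 0);
    __m128 c1 = _mm_load_ps(mat + 4);
    __m128 c2 = _mm_load_ps(mat + 8);
    __m128 c3 = _mm_load_ps(mat + 12);

    /* Broadcast each vector component, then accumulate column * component. */
    __m128 r = _mm_add_ps(
        _mm_add_ps(_mm_mul_ps(c0, _mm_set1_ps(vec[0])),
                   _mm_mul_ps(c1, _mm_set1_ps(vec[1]))),
        _mm_add_ps(_mm_mul_ps(c2, _mm_set1_ps(vec[2])),
                   _mm_mul_ps(c3, _mm_set1_ps(vec[3]))));

    _mm_store_ps(out, r);
}
```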

Stop it, man. I paid $800 for a damn HP Celeron 266 MHz in 1998; a damn Pentium 3 was out of this world.

I see that you are doing a lot of mixing here.

First, the Pentium 3 500 MHz wasn't introduced at the same time as the 800 MHz version; Coppermine came at the end of 1999, and it was super expensive, man.

In fact, by August 1999 Intel had reduced the price of Katmai by 40%. A Pentium 3 550 MHz before the price drop was freaking $658 in quantities of 1,000, in other words the price for retailers, not consumers. So you can imagine how expensive the Coppermine 800 MHz was when it arrived later, between December 1999 and May 2000. Hell, the PS2 was already out when all the Coppermine CPUs finally came out.

But let's be real here: an 800 MHz Pentium 3 has freaking 4 times the clock speed of an SH4, so basically, for the task at hand, the SH4 beat the crap out of Intel clock for clock without even trying.

It's nice to see you hide behind a CPU that came almost in 2000; the Coppermine 800 MHz came in December 1999, when the DC was a year old already. Oh, and it was like $700 in quantities of 1,000, which means it was $800 or more in stores for the average consumer. That is the price of 4 damn Dreamcasts, and you're still missing a complete PC, especially the other costly part, the GPU.

The GeForce 256 was like $300 in 1999, so yeah, we are talking over $1,000 in just CPU and GPU to beat a $199 console, and you're still missing the HDD, power supply, case, mouse, keyboard, speakers, and memory.

You are comparing apples to oranges like always.

You are forgetting that "business PC" expenses can be claimed against income tax. It's effectively a government subsidy for the PC industry.

NVIDIA's GeForce 256 leadership fuelled GPU R&D at a faster pace than PowerVR's.

Jaguar is effectively a warmed-over K8 Athlon 64 with K6's two-instructions-per-cycle decode rate. Jaguar has a 128-bit SIMD SSE4 dot product instruction, but 3DNow's 64-bit SIMD already supported a 64-bit dot product instruction.

Jaguar (K16) is *not* new. The programming optimization guide for Jaguar is similar to K8's.

K8 has 128bit FADD and 64bit FMUL SIMD units. Three instructions per cycle decode rate.

K10 has 128 bit FADD and 128 bit FMUL SIMD units. Three instructions per cycle decode rate.

K16 (Jaguar) has 128 bit FADD and 128 bit FMUL SIMD units. Two instructions per cycle decode rate. <-------- K6 instructions per cycle decode rate.

K17 (Zen v1) has two 128 bit FADD and two 128 bit FMUL SIMD units. Four instructions per cycle decode rate + four decoded instructions per cycle from decode cache. Without marketing, Ryzen would be K17

Jaguar was a cut-down AMD64 CPU designed for low cost mobile and embedded markets.

AMD K7 Duron was the original CPU for original Xbox https://www.ign.com/articles/2015/07/02/podcast-unlocked-201-xbox-bosses-past-and-present-share-stories-secrets (27:45 mark).

Bill Gates intervened to save Intel's market dominance. Removing Bill Gates from Microsoft = removes Intel bias.

The original XBOX was designed with AMD (not just AMD cpu, but with AMD engineers) and then 1-2 days before the unveiling, Andy Grove (intel) talked with Bill Gates and the decision was made to use an Intel cpu.

K7 Duron includes 3DNow Pro instruction set with dot product instruction (for 3D).

Pentium III "Coppermine" S370 at 700 Mhz released on Oct 25, 1999 <---- single chip version http://www.cpu-world.com/CPUs/Pentium-III/Intel-Pentium%20III%20700%20-%20RB80526PY700256%20(BX80526F700256%20-%20BX80526F700256E).html

Pentium III Slot 1 at 733 Mhz released on Oct 25, 1999

Pentium III 800EB Slot 1 at 800 Mhz released on Dec 20, 1999

The bottleneck for Dreamcast is the external memory bandwidth for the GPU i.e. 800 MB/s memory bandwidth

GeForce 256 supports S3TC texture compression (http://www.vgamuseum.info/index.php/technologies/item/870-s3tc-dxtc); segaretro.org omitted this feature. GeForce 256 has dedicated 2600 MB/s video memory bandwidth.

S3TC can compress at up to a 6:1 ratio, hence GeForce 256's effective texture bandwidth could reach 15,600 MB/s compressed. https://en.wikipedia.org/wiki/S3_Texture_Compression

segaretro.org claimed a compressed 6000 MB/s for the Dreamcast.
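To make the ratios concrete, here is a small worked example (my own, based on the public DXT1 block format: 8 bytes per 4x4 texel block):

```c
#include <stdio.h>

/* DXT1/S3TC stores each 4x4 texel block in 8 bytes, regardless of source depth. */
unsigned long dxt1_bytes(unsigned w, unsigned h)
{
    return (unsigned long)((w + 3) / 4) * ((h + 3) / 4) * 8;
}

int main(void)
{
    unsigned long raw24 = 1024UL * 1024 * 3;      /* 1024x1024, 24-bit RGB: 3 MB  */
    unsigned long dxt1  = dxt1_bytes(1024, 1024); /* same texture in DXT1: 512 KB */
    printf("ratio: %.1f:1\n", (double)raw24 / dxt1); /* prints 6.0:1 */
    return 0;
}
```

A 16-bit source gives 4:1 by the same arithmetic, which is where the "up to 6:1" figure comes from.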

[embedded video]

https://www.techspot.com/article/657-history-of-the-gpu-part-3/

There was also LMA (Lightspeed Memory Architecture) support -- basically Nvidia's version of HyperZ -- for culling pixels that would end up hidden behind others on screen (Z occlusion culling) as well as compressing and decompressing data to optimize use of bandwidth (Z compression).

GeForce 3 has Z-buffer compression, which segaretro.org failed to mention for the original Xbox.

https://www.anandtech.com/show/767/2

Z-compression, Z-occlusion culling and Crossbar memory controller

By implementing 4:1 lossless data compression, Z-buffer bandwidth is reduced by a factor of four.

Original Xbox's 5.3 GB/s raw memory bandwidth turns into:

21.2 GB/s effective (5.3 × 4, via Z-compression) for the frame buffer and Z-buffer across the entire flat 64 MB external memory

31.8 GB/s effective (5.3 × 6, via S3TC) for textures and the Z-buffer across the entire flat 64 MB external memory

That's an apples-to-apples comparison.

https://www.anandtech.com/show/570/9

https://www.anandtech.com/show/422/4

Beating Dreamcast's Quake 3 in both frame rates and resolution results. 1024 x 768p 4:3 ratio entry level HD resolution!

Try again

Avatar image for tormentos
tormentos

33784

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#39  Edited By tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

You are forgetting that "business PC" expenses can be claimed against income tax. It's effectively a government subsidy for the PC industry.

NVIDIA's GeForce 256 leadership fuelled GPU R&D at a faster pace than PowerVR's.

Jaguar is effectively a warmed-over K8 Athlon 64 with K6's two-instructions-per-cycle decode rate. Jaguar has a 128-bit SIMD SSE4 dot product instruction, but 3DNow's 64-bit SIMD already supported a 64-bit dot product instruction.

Jaguar (K16) is *not* new. The programming optimization guide for Jaguar is similar to K8's.

K8 has 128bit FADD and 64bit FMUL SIMD units. Three instructions per cycle decode rate.

K10 has 128 bit FADD and 128 bit FMUL SIMD units. Three instructions per cycle decode rate.

K16 (Jaguar) has 128 bit FADD and 128 bit FMUL SIMD units. Two instructions per cycle decode rate. <-------- K6 instructions per cycle decode rate.

K17 (Zen v1) has two 128 bit FADD and two 128 bit FMUL SIMD units. Four instructions per cycle decode rate + four decoded instructions per cycle from decode cache. Without marketing, Ryzen would be K17

Jaguar was a cut-down AMD64 CPU designed for low cost mobile and embedded markets.

AMD K7 Duron was the original CPU for original Xbox https://www.ign.com/articles/2015/07/02/podcast-unlocked-201-xbox-bosses-past-and-present-share-stories-secrets (27:45 mark).

Bill Gates intervened to save Intel's market dominance. Removing Bill Gates from Microsoft = removes Intel bias.

The original XBOX was designed with AMD (not just AMD cpu, but with AMD engineers) and then 1-2 days before the unveiling, Andy Grove (intel) talked with Bill Gates and the decision was made to use an Intel cpu.

K7 Duron includes 3DNow Pro instruction set with dot product instruction (for 3D).

Pentium III "Coppermine" S370 at 700 Mhz released on Oct 25, 1999 <---- single chip version http://www.cpu-world.com/CPUs/Pentium-III/Intel-Pentium%20III%20700%20-%20RB80526PY700256%20(BX80526F700256%20-%20BX80526F700256E).html

Pentium III Slot 1 at 733 Mhz released on Oct 25, 1999

Pentium III 800EB Slot 1 at 800 Mhz released on Dec 20, 1999

The bottleneck for Dreamcast is the external memory bandwidth for the GPU i.e. 800 MB/s memory bandwidth

GeForce 256 supports S3TC texture compression (http://www.vgamuseum.info/index.php/technologies/item/870-s3tc-dxtc); segaretro.org omitted this feature. GeForce 256 has dedicated 2600 MB/s video memory bandwidth.

S3TC can compress at up to a 6:1 ratio, hence GeForce 256's effective texture bandwidth could reach 15,600 MB/s compressed. https://en.wikipedia.org/wiki/S3_Texture_Compression

segaretro.org claimed a compressed 6000 MB/s for the Dreamcast.

[embedded video]

https://www.techspot.com/article/657-history-of-the-gpu-part-3/

There was also LMA (Lightspeed Memory Architecture) support -- basically Nvidia's version of HyperZ -- for culling pixels that would end up hidden behind others on screen (Z occlusion culling) as well as compressing and decompressing data to optimize use of bandwidth (Z compression).

GeForce 3 has Z-buffer compression, which segaretro.org failed to mention for the original Xbox.

https://www.anandtech.com/show/767/2

Z-compression, Z-occlusion culling and Crossbar memory controller

By implementing 4:1 lossless data compression, Z-buffer bandwidth is reduced by a factor of four.

Original Xbox's 5.3 GB/s raw memory bandwidth turns into:

21.2 GB/s effective (5.3 × 4, via Z-compression) for the frame buffer and Z-buffer across the entire flat 64 MB external memory

31.8 GB/s effective (5.3 × 6, via S3TC) for textures and the Z-buffer across the entire flat 64 MB external memory

That's an apples-to-apples comparison.

https://www.anandtech.com/show/570/9

https://www.anandtech.com/show/422/4

Beating Dreamcast's Quake 3 in both frame rates and resolution results. 1024 x 768p 4:3 ratio entry level HD resolution!

Try again

I don't have to try again. The Intel Pentium 3 CPU at 800 MHz came a year after the Dreamcast was out and was completely prohibitive, probably close to $900. The 256, again, was $300, and dude, you're still missing a complete PC.

The SH4 was 200 MHz and the DC was $199.

That video you posted is one more of the tons of bullshit videos Nvidia has made with each line of GPUs: no AI, nothing going on, and actual in-game graphics never came even close to matching them.

I am not a damn kid, dude. I saw Nvidia rise from the ashes, I saw when they bought 3dfx, and I was here before that as well.

Avatar image for lundy86_4
lundy86_4

61478

Forum Posts

0

Wiki Points

0

Followers

Reviews: 2

User Lists: 0

#40 lundy86_4
Member since 2003 • 61478 Posts

Holy shit, everybody just dipset this thread now. All is lost.

Avatar image for zaryia
Zaryia

21607

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#41  Edited By Zaryia
Member since 2016 • 21607 Posts

Damn, ronvalencia kills threads.

Let me put a TL;DR: PC is the best.

Avatar image for Jag85
Jag85

19543

Forum Posts

0

Wiki Points

0

Followers

Reviews: 219

User Lists: 0

#42 Jag85
Member since 2005 • 19543 Posts

@ronvalencia said:

The bottleneck for Dreamcast is the external memory bandwidth for the GPU i.e. 800 MB/s memory bandwidth

GeForce 256 supports S3TC texture compression (http://www.vgamuseum.info/index.php/technologies/item/870-s3tc-dxtc); segaretro.org omitted this feature. GeForce 256 has dedicated 2600 MB/s video memory bandwidth.

S3TC can compress at up to a 6:1 ratio, hence GeForce 256's effective texture bandwidth could reach 15,600 MB/s compressed. https://en.wikipedia.org/wiki/S3_Texture_Compression

segaretro.org claimed a compressed 6000 MB/s for the Dreamcast.

Again, that's not a bottleneck. The Dreamcast's CLX2 GPU has an on-chip cache (tile buffer and Z-sorter) with a bandwidth of up to 15 GB/s. Which means the CLX2 doesn't need to render a Z-buffer or even a framebuffer in external memory, nor does it have to waste its external memory bandwidth on overdrawn polygons or textures that aren't visible on screen. Its 800 MB/s external GPU memory bandwidth can be dedicated entirely to polygons and textures that are actually visible on the screen.
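To illustrate the mechanism, here is a minimal C sketch of generic tile-based rendering (my own illustration, not Sega's code; the tile size, bin limits and names are assumptions):

```c
#include <float.h>
#include <string.h>

#define W 640
#define H 480
#define TILE 32                 /* the CLX2 worked on 32x32 tiles */
#define TX (W / TILE)
#define TY (H / TILE)
#define MAX_BIN 1024

typedef struct { float x[3], y[3], z[3]; } Tri;  /* screen-space triangle */

static const Tri *bin[TY][TX][MAX_BIN];
static int bin_n[TY][TX];

/* Pass 1: bin each triangle into every tile its bounding box overlaps. */
static void bin_triangle(const Tri *t)
{
    float minx = t->x[0], maxx = t->x[0], miny = t->y[0], maxy = t->y[0];
    for (int i = 1; i < 3; i++) {
        if (t->x[i] < minx) minx = t->x[i];
        if (t->x[i] > maxx) maxx = t->x[i];
        if (t->y[i] < miny) miny = t->y[i];
        if (t->y[i] > maxy) maxy = t->y[i];
    }
    for (int ty = (int)miny / TILE; ty <= (int)maxy / TILE; ty++)
        for (int tx = (int)minx / TILE; tx <= (int)maxx / TILE; tx++)
            if (ty >= 0 && ty < TY && tx >= 0 && tx < TX && bin_n[ty][tx] < MAX_BIN)
                bin[ty][tx][bin_n[ty][tx]++] = t;
}

static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* Pass 2: resolve one tile with a depth buffer that lives "on-chip";
   external memory is only touched once, for the final write-out. */
static void render_tile(int ty, int tx, float *out_depth /* W*H floats */)
{
    float zbuf[TILE * TILE];                  /* the on-chip buffer */
    for (int i = 0; i < TILE * TILE; i++) zbuf[i] = FLT_MAX;

    for (int n = 0; n < bin_n[ty][tx]; n++) {
        const Tri *t = bin[ty][tx][n];
        float area = edge(t->x[0], t->y[0], t->x[1], t->y[1], t->x[2], t->y[2]);
        if (area == 0.0f) continue;           /* degenerate triangle */
        for (int py = 0; py < TILE; py++)
            for (int px = 0; px < TILE; px++) {
                float sx = tx * TILE + px + 0.5f, sy = ty * TILE + py + 0.5f;
                float w0 = edge(t->x[1], t->y[1], t->x[2], t->y[2], sx, sy) / area;
                float w1 = edge(t->x[2], t->y[2], t->x[0], t->y[0], sx, sy) / area;
                float w2 = 1.0f - w0 - w1;
                if (w0 < 0 || w1 < 0 || w2 < 0) continue;  /* outside triangle */
                float z = w0 * t->z[0] + w1 * t->z[1] + w2 * t->z[2];
                if (z < zbuf[py * TILE + px]) zbuf[py * TILE + px] = z;
            }
    }
    for (int py = 0; py < TILE; py++)         /* single write-out per tile */
        memcpy(&out_depth[(ty * TILE + py) * W + tx * TILE],
               &zbuf[py * TILE], TILE * sizeof(float));
}
```

Because each tile's depth test lives and dies on-chip, external memory only ever sees the final, visible pixels.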

The Dreamcast's CLX2 has a separate bus with direct access to the GD-ROM drive, from where it can load the textures directly to the VRAM without bothering the CPU. The GeForce 256 has no direct access to the CD-ROM or hard drive, but it requires the CPU to load the textures from the hard drive to the CPU RAM, and then the CPU has to transfer the textures through the CPU-GPU transmission bus to the GF256. Since the CPU-GPU transmission bus between a PIII 800 and GF256 is limited to 800 MB/s (same as the Dreamcast), that means its effective texture bandwidth is actually less than the Dreamcast, since the Dreamcast has higher texture compression (8:1 ratio) and doesn't need to use its CPU-GPU transmission bus to transfer textures (thus the DC's CPU-GPU transmission bus can be dedicated to transferring more polygons).

Beating Dreamcast's Quake 3 in both frame rates and resolution results.

Like I already told you, Quake III is a poor representation of the Dreamcast's capabilities. While it was a solid port, it does not represent anything close to the extent of the Dreamcast's capabilities in any way, shape or form. Quake III barely taps into a fraction of the Dreamcast's power. It's the Dreamcast exclusives that push the hardware near its limits, with Dreamcast exclusives pushing far more polygons than Quake III.

Quake III renders up to just 10,000 polygons per scene. In contrast, DOA2 on the Dreamcast renders over 70,000 textured polygons per scene at 60 fps (over 4.2 million textured polygons/sec), including up to 52,000 polygons for the background and over 9,000 polygons per character... Yes, even a single DOA2 character uses almost as many polygons as an entire Quake III scene! DOA2 is pushing seven times as many textured polygons as Quake III. It's not even close. DOA2 wipes the floor with Quake III in terms of graphics.

Avatar image for Telekill
Telekill

12061

Forum Posts

0

Wiki Points

0

Followers

Reviews: 4

User Lists: 0

#43 Telekill
Member since 2003 • 12061 Posts

Steam boxes?

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#44  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Jag85 said:
@ronvalencia said:

The bottleneck for Dreamcast is the external memory bandwidth for the GPU i.e. 800 MB/s memory bandwidth

GeForce 256 supports S3TC texture compression (http://www.vgamuseum.info/index.php/technologies/item/870-s3tc-dxtc); segaretro.org omitted this feature. GeForce 256 has dedicated 2600 MB/s video memory bandwidth.

S3TC can compress at up to a 6:1 ratio, hence GeForce 256's effective texture bandwidth could reach 15,600 MB/s compressed. https://en.wikipedia.org/wiki/S3_Texture_Compression

segaretro.org claimed a compressed 6000 MB/s for the Dreamcast.

Again, that's not a bottleneck. The Dreamcast's CLX2 GPU has an on-chip cache (tile buffer and Z-sorter) with a bandwidth of up to 15 GB/s. Which means the CLX2 doesn't need to render a Z-buffer or even a framebuffer in external memory, nor does it have to waste its external memory bandwidth on overdrawn polygons or textures that aren't visible on screen. Its 800 MB/s external GPU memory bandwidth can be dedicated entirely to polygons and textures that are actually visible on the screen.

The Dreamcast's CLX2 has a separate bus with direct access to the GD-ROM drive, from where it can load the textures directly to the VRAM without bothering the CPU. The GeForce 256 has no direct access to the CD-ROM or hard drive, but it requires the CPU to load the textures from the hard drive to the CPU RAM, and then the CPU has to transfer the textures through the CPU-GPU transmission bus to the GF256. Since the CPU-GPU transmission bus between a PIII 800 and GF256 is limited to 800 MB/s (same as the Dreamcast), that means its effective texture bandwidth is actually less than the Dreamcast, since the Dreamcast has higher texture compression (8:1 ratio) and doesn't need to use its CPU-GPU transmission bus to transfer textures (thus the DC's CPU-GPU transmission bus can be dedicated to transferring more polygons).

Beating Dreamcast's Quake 3 in both frame rates and resolution results.

Like I already told you, Quake III is a poor representation of the Dreamcast's capabilities. While it was a solid port, it does not represent anything close to the extent of the Dreamcast's capabilities in any way, shape or form. Quake III barely taps into a fraction of the Dreamcast's power. It's the Dreamcast exclusives that push the hardware near its limits, with Dreamcast exclusives pushing far more polygons than Quake III.

Quake III renders up to just 10,000 polygons per scene. In contrast, DOA2 on the Dreamcast renders over 70,000 textured polygons per scene at 60 fps (over 4.2 million textured polygons/sec), including up to 52,000 polygons for the background and over 9,000 polygons per character... Yes, even a single DOA2 character uses almost as many polygons as an entire Quake III scene! DOA2 is pushing seven times as many textured polygons as Quake III. It's not even close. DOA2 wipes the floor with Quake III in terms of graphics.

Dreamcast's geometry power is only a single aspect of raster rendering, and it didn't benefit games like Quake III (a large texture pusher with high-resolution rendering, which kills tiling).

An on-chip tile cache is nearly useless without robust drivers; e.g. both Vega and Pascal have on-chip micro-tile caches, but AMD's version is not as good as NVIDIA's.

The X1X GPU has a hyper-fast 2 MB render cache, but its results are not consistent compared to the GTX 1070's programmer-transparent immediate-mode tile-cache rendering with its hyper-fast 2 MB L2 cache. The X1X comes with higher memory bandwidth than the PS4 Pro as a fallback.

NVIDIA GeForce 256's texture bandwidth could reach a compressed-effective 15,600 MB/s across the entire VRAM, not just a small tile cache.

The main reason for the PS4's large, flat and fast memory design comes from Sony's surveys of game programmers, who don't like manual tile-cache rendering; on the PC, Nvidia has "smart drivers" that can break traditional immediate-mode rendering into tile-cache rendering for the programmer.

PC IDE controllers in the mid-to-late 90s already ran in DMA (direct memory access) mode, NOT PIO mode. PC IDE with DMA gains SCSI's memory read/write smarts.

Textures stored in S3TC format are transferred to the GPU still compressed, since S3TC was assimilated into the DirectX 7 standard. AGP 4X's 1.06 GB/s bandwidth is effectively up to 6.36 GB/s with S3TC textures, hence your 800 MB/s argument is debunked.

GeForce 256 includes support for AGP 4X With Fast Writes.

Fast Writes improves all writes from the CPU to the graphics chip, including:

  • All 2D operations
  • Operations involving writing to the frame buffer or sending any data to the graphics chip
  • Loading textures in Direct3D® into local memory
  • Writing push buffers to graphics local memory – this is where most of the performance boost is generated

GeForce 256 also added support for Dot Product (Dot3) bump mapping, a trick that amplifies apparent geometry via texture data.
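A minimal sketch of the Dot3 idea (my own illustration; the texel layout and names are assumptions): per-texel normals from a texture are dotted with the light vector, so lighting detail scales with texture resolution instead of triangle count.

```c
typedef struct { unsigned char r, g, b; } Texel;  /* one normal-map texel */

/* Map a colour channel [0,255] to a normal component [-1,+1]. */
static float decode(unsigned char c) { return (c / 127.5f) - 1.0f; }

/* Returns the diffuse intensity in [0,1] for one texel.
   lx, ly, lz: unit light vector in the same (tangent) space as the map. */
float dot3_intensity(Texel t, float lx, float ly, float lz)
{
    float nx = decode(t.r), ny = decode(t.g), nz = decode(t.b);
    float d = nx * lx + ny * ly + nz * lz;   /* the "Dot3" in the name */
    return d > 0.0f ? d : 0.0f;              /* clamp backfacing texels to 0 */
}
```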

https://www.anandtech.com/show/399/4

October 25, 1999, Pentium III E "Coppermine" Slot 1 arrives with 133 Mhz FSB, hence 1.06 GB/s bandwidth, higher than 800 MB/s. Coppermine's i820 chipset also comes with Rambus DIMMs.

Pentium III EB at 800Mhz has 133 Mhz FSB, hence 1.06 GB/s I/O transfer rates.

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#45  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:

I don't have to try again. The Intel Pentium 3 CPU at 800 MHz came a year after the Dreamcast was out and was completely prohibitive, probably close to $900. The 256, again, was $300, and dude, you're still missing a complete PC.

The SH4 was 200 MHz and the DC was $199.

That video you posted is one more of the tons of bullshit videos Nvidia has made with each line of GPUs: no AI, nothing going on, and actual in-game graphics never came even close to matching them.

I am not a damn kid, dude. I saw Nvidia rise from the ashes, I saw when they bought 3dfx, and I was here before that as well.

Both the PC (income tax credit) and the Dreamcast (game sales) have subsidy methods.

NVIDIA's GPUs are being used for AI and deep learning, e.g. from Tesla vehicles to the F-35 Block 4 avionics upgrade.

[embedded video]

with Unreal Engine 4.

[embedded video]

https://www.eetimes.com/document.asp?doc_id=1331727#

Nvidia revealed Wednesday at its GPU Technology Conference that Toyota will use Nvidia’s Drive PX AI automotive platform to power advanced autonomous driving systems planned for market introduction.

Tesla went with in-house AI chip design; Tesla disliked their secret sauce being leaked to other car manufacturers.

Microsoft and Sony teamed up for an AI-related business venture besides the PlayStation-related cloud solution.

Avatar image for uninspiredcup
uninspiredcup

58900

Forum Posts

0

Wiki Points

0

Followers

Reviews: 86

User Lists: 2

#46 uninspiredcup
Member since 2013 • 58900 Posts

Probably the Dreamcast, I think. It made a conscious decision to be more developer-friendly. It also featured online play and had games like Unreal Tournament and Half-Life (cancelled, but easily found online) developed for it.

Avatar image for Jag85
Jag85

19543

Forum Posts

0

Wiki Points

0

Followers

Reviews: 219

User Lists: 0

#47 Jag85
Member since 2005 • 19543 Posts

@ronvalencia said:
@Jag85 said:
@ronvalencia said:

The bottleneck for Dreamcast is the external memory bandwidth for the GPU i.e. 800 MB/s memory bandwidth

GeForce 256 supports S3TC texture compression (http://www.vgamuseum.info/index.php/technologies/item/870-s3tc-dxtc); segaretro.org omitted this feature. GeForce 256 has dedicated 2600 MB/s video memory bandwidth.

S3TC can compress at up to a 6:1 ratio, hence GeForce 256's effective texture bandwidth could reach 15,600 MB/s compressed. https://en.wikipedia.org/wiki/S3_Texture_Compression

segaretro.org claimed a compressed 6000 MB/s for the Dreamcast.

Again, that's not a bottleneck. The Dreamcast's CLX2 GPU has an on-chip cache (tile buffer and Z-sorter) with a bandwidth of up to 15 GB/s. Which means the CLX2 doesn't need to render a Z-buffer or even a framebuffer in external memory, nor does it have to waste its external memory bandwidth on overdrawn polygons or textures that aren't visible on screen. Its 800 MB/s external GPU memory bandwidth can be dedicated entirely to polygons and textures that are actually visible on the screen.

The Dreamcast's CLX2 has a separate bus with direct access to the GD-ROM drive, from where it can load the textures directly to the VRAM without bothering the CPU. The GeForce 256 has no direct access to the CD-ROM or hard drive, but it requires the CPU to load the textures from the hard drive to the CPU RAM, and then the CPU has to transfer the textures through the CPU-GPU transmission bus to the GF256. Since the CPU-GPU transmission bus between a PIII 800 and GF256 is limited to 800 MB/s (same as the Dreamcast), that means its effective texture bandwidth is actually less than the Dreamcast, since the Dreamcast has higher texture compression (8:1 ratio) and doesn't need to use its CPU-GPU transmission bus to transfer textures (thus the DC's CPU-GPU transmission bus can be dedicated to transferring more polygons).

Beating Dreamcast's Quake 3 in both frame rates and resolution results.

Like I already told you, Quake III is a poor representation of the Dreamcast's capabilities. While it was a solid port, it does not represent anything close to the extent of the Dreamcast's capabilities in any way, shape or form. Quake III barely taps into a fraction of the Dreamcast's power. It's the Dreamcast exclusives that push the hardware near its limits, with Dreamcast exclusives pushing far more polygons than Quake III.

Quake III renders up to just 10,000 polygons per scene. In contrast, DOA2 on the Dreamcast renders over 70,000 textured polygons per scene at 60 fps (over 4.2 million textured polygons/sec), including up to 52,000 polygons for the background and over 9,000 polygons per character... Yes, even a single DOA2 character uses almost as many polygons as an entire Quake III scene! DOA2 is pushing seven times as many textured polygons as Quake III. It's not even close. DOA2 wipes the floor with Quake III in terms of graphics.

Dreamcast's geometry power is only a single aspect of raster rendering, and it didn't benefit games like Quake III (a large texture pusher with high-resolution rendering, which kills tiling).

An on-chip tile cache is nearly useless without robust drivers; e.g. both Vega and Pascal have on-chip micro-tile caches, but AMD's version is not as good as NVIDIA's.

Again, Quake III is a port. Ports don't mean anything. According to your logic, inferior PC ports of Dreamcast games must mean that PC was inferior to Dreamcast. And for the record, Shenmue was pushing way more detailed textures than Quake III.

Dreamcast exclusives were optimised for tiled rendering. That's how Dreamcast exclusives were able to push far more textured polygons than PC games back in '99.

NVIDIA GeForce 256's texture bandwidth could reach a compressed-effective 15,600 MB/s across the entire VRAM, not just a small tile cache.

No, it can't. Its effective texture bandwidth is limited by the CPU-GPU bus speed, as the CPU needs to transfer the textures to the GPU in the first place. As you've kind of acknowledged below...

Textures stored in S3TC format are transferred to the GPU still compressed, since S3TC was assimilated into the DirectX 7 standard. AGP 4X's 1.06 GB/s bandwidth is effectively up to 6.36 GB/s with S3TC textures, hence your 800 MB/s argument is debunked.

You haven't debunked my argument, but only proved my point. The CPU needs to transfer the textures to the GPU through a transmission bus, as I said above. And even if it could transfer the equivalent of up to 6.36 GB/s textures with compression, the Dreamcast's higher 8:1 compression ratio means it could transfer the equivalent of up to 6.4 GB/s, which still puts the Dreamcast slightly ahead in this regard.

GeForce 256 also added support for Dot Product (Dot3) bump mapping, a trick that amplifies apparent geometry via texture data.

The Dreamcast's CLX2 chip already supported Dot3 bump mapping, before the GeForce 256.

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#48  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Jag85 said:
@ronvalencia said:
@Jag85 said:
@ronvalencia said:

The bottleneck for Dreamcast is the external memory bandwidth for the GPU i.e. 800 MB/s memory bandwidth

GeForce 256 supports S3TC texture compression (http://www.vgamuseum.info/index.php/technologies/item/870-s3tc-dxtc); segaretro.org omitted this feature. GeForce 256 has dedicated 2600 MB/s video memory bandwidth.

S3TC can compress at up to a 6:1 ratio, hence GeForce 256's effective texture bandwidth could reach 15,600 MB/s compressed. https://en.wikipedia.org/wiki/S3_Texture_Compression

segaretro.org claimed a compressed 6000 MB/s for the Dreamcast.

Again, that's not a bottleneck. The Dreamcast's CLX2 GPU has an on-chip cache (tile buffer and Z-sorter) with a bandwidth of up to 15 GB/s. Which means the CLX2 doesn't need to render a Z-buffer or even a framebuffer in external memory, nor does it have to waste its external memory bandwidth on overdrawn polygons or textures that aren't visible on screen. Its 800 MB/s external GPU memory bandwidth can be dedicated entirely to polygons and textures that are actually visible on the screen.

The Dreamcast's CLX2 has a separate bus with direct access to the GD-ROM drive, from where it can load the textures directly to the VRAM without bothering the CPU. The GeForce 256 has no direct access to the CD-ROM or hard drive, but it requires the CPU to load the textures from the hard drive to the CPU RAM, and then the CPU has to transfer the textures through the CPU-GPU transmission bus to the GF256. Since the CPU-GPU transmission bus between a PIII 800 and GF256 is limited to 800 MB/s (same as the Dreamcast), that means its effective texture bandwidth is actually less than the Dreamcast, since the Dreamcast has higher texture compression (8:1 ratio) and doesn't need to use its CPU-GPU transmission bus to transfer textures (thus the DC's CPU-GPU transmission bus can be dedicated to transferring more polygons).

Beating Dreamcast's Quake 3 in both frame rates and resolution results.

Like I already told you, Quake III is a poor representation of the Dreamcast's capabilities. While it was a solid port, it does not represent anything close to the extent of the Dreamcast's capabilities in any way, shape or form. Quake III barely taps into a fraction of the Dreamcast's power. It's the Dreamcast exclusives that push the hardware near its limits, with Dreamcast exclusives pushing far more polygons than Quake III.

Quake III renders up to just 10,000 polygons per scene. In contrast, DOA2 on the Dreamcast renders over 70,000 textured polygons per scene at 60 fps (over 4.2 million textured polygons/sec), including up to 52,000 polygons for the background and over 9,000 polygons per character... Yes, even a single DOA2 character uses almost as many polygons as an entire Quake III scene! DOA2 is pushing seven times as many textured polygons as Quake III. It's not even close. DOA2 wipes the floor with Quake III in terms of graphics.

Dreamcast's geometry power is only a single aspect of raster rendering, and it didn't benefit games like Quake III (a large texture pusher with high-resolution rendering, which kills tiling).

An on-chip tile cache is nearly useless without robust drivers; e.g. both Vega and Pascal have on-chip micro-tile caches, but AMD's version is not as good as NVIDIA's.

Again, Quake III is a port. Ports don't mean anything. According to your logic, inferior PC ports of Dreamcast games must mean that PC was inferior to Dreamcast. And for the record, Shenmue was pushing way more detailed textures than Quake III.

Dreamcast exclusives were optimised for tiled rendering. That's how Dreamcast exclusives were able to push far more textured polygons than PC games back in '99.

NVIDIA GeForce 256's texture bandwidth could reach a compressed-effective 15,600 MB/s across the entire VRAM, not just a small tile cache.

No, it can't. Its effective texture bandwidth is limited by the CPU-GPU bus speed, as the CPU needs to transfer the textures to the GPU in the first place. As you've kind of acknowledged below...

Textures stored in S3TC format are transferred to the GPU still compressed, since S3TC was assimilated into the DirectX 7 standard. AGP 4X's 1.06 GB/s bandwidth is effectively up to 6.36 GB/s with S3TC textures, hence your 800 MB/s argument is debunked.

You haven't debunked my argument, but only proved my point. The CPU needs to transfer the textures to the GPU through a transmission bus, as I said above. And even if it could transfer the equivalent of up to 6.36 GB/s textures with compression, the Dreamcast's higher 8:1 compression ratio means it could transfer the equivalent of up to 6.4 GB/s, which still puts the Dreamcast slightly ahead in this regard.

GeForce 256 also added support for Dot Product (Dot3) bump mapping, a trick that amplifies apparent geometry via texture data.

The Dreamcast's CLX2 chip already supported Dot3 bump mapping, before the GeForce 256.

[embedded video]

For Dead or Alive 2, the original Xbox beat the Dreamcast version in polygon count, despite the original Xbox's cut-down 128 KB L2 cache Intel Pentium III "Coppermine" at 733 MHz with a 133 MHz FSB.

The original Xbox delivers more polygons and hair than the Dreamcast version.

Remember, the Xbox's cut-down Pentium III Coppermine CPU is still driving the polygons' control points as per the game logic simulation.

The original Xbox's Intel Pentium III "Coppermine" at 733 MHz still has a 133 MHz FSB, just like year-1999 gaming PCs with the full Intel Pentium III "Coppermine" 733 MHz with 256 KB L2 cache.

The Xbox NV2A's dual vertex shaders are programmable T&L hardware units, hence about twice the throughput of the original GeForce 256.

GeForce 256's fixed-function quad pixel pipelines evolved into programmable quad pixel-shader pipelines in the GeForce NV2A.

S3TC remained similar from the GeForce 256 (DirectX 7) to the Xbox's GeForce NV2A (DirectX 8).

The original Xbox has a 6.4 GB/s memory architecture shared between the CPU (with a 1.06 GB/s FSB link) and the GPU. No tiling complexity, just a flat memory storage design.

A year-1999 gaming PC (with Rambus DRAM-400) has aggregate memory bandwidth similar to the original Xbox:

2.656 GB/s VRAM + 1.6 GB/s system RAM = 4.26 GB/s

https://www.anandtech.com/show/399/6 Rambus DRAM-800 along with Pentium III Coppermine in October 1999

2.656 GB/s VRAM + 3.2 GB/s system RAM = 5.86 GB/s

Year-1999 gaming PC's theoretical texture bandwidth:

1.06 GB/s AGP texture stream from system RAM × 6 via S3TC = 6.36 GB/s

+

2.656 GB/s from 32 MB VRAM × 6 via S3TC = 15.936 GB/s

Total aggregate texture bandwidth: 22.3 GB/s, with no tile-cache difficulty.

The 32 MB VRAM on the GeForce 256 reduces the need for texture streaming over the AGP 4X bus; textures can stay resident in the 32 MB VRAM.

The PC's AGP 4X bus is not a bottleneck, since the GeForce 3 Ti and 4 Ti can still smack down the Dreamcast.

Hint: x86 instruction length goes down to 8 bits (1 byte), compared to SuperH-4's fixed 16-bit (2-byte) instruction length.

x86 has superior code density compared to the SuperH instruction set.

An x86 CPU can store more instructions in its cache than the SuperH-4.

An x86 CPU can transfer more instructions into its cache than the SuperH-4.

x86's variable-length (compressed) instructions (a CISC advantage) are converted into RISC-like fixed-length instructions after the decoder stage.
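A concrete illustration (byte counts from the public Intel opcode map for 32-bit mode; SH-4's format is fixed at 16 bits; array names are my own):

```c
/* x86 instructions can be as short as one byte; SH-4 is always two. */
static const unsigned char x86_inc_eax[]   = { 0x40 };                          /* INC EAX: 1 byte  */
static const unsigned char x86_mov_eax_1[] = { 0xB8, 0x01, 0x00, 0x00, 0x00 };  /* MOV EAX,1: 5 bytes */
/* SH-4: every instruction, e.g. ADD R1,R2, occupies exactly 2 bytes. */
```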

The Pentium III has superior out-of-order processing and branch prediction compared to the in-order SuperH-4 CPU. Efficient FLOPS needs good branch-logic performance.

All HPC servers, from IBM Power 9 and AMD Zen to Intel Xeon Skylake-X CPUs, have powerful out-of-order processing and branch prediction to control matrix/vector math processing.

IBM dumped its in-order Power 6 adventure in HPC markets and returned to uber-fat out-of-order CPU designs with Power 7. That is effectively IBM's verdict on the in-order CELL design, refuted by the out-of-order Power 7.

On data transfers, one shouldn't compare x86's compressed instruction length against the larger fixed-length instructions of a pure RISC CPU. There's a reason for the CISC-RISC hybrid x86 CPU's superiority over pure RISC CPUs.

-----------

As for textures

[embedded video]

Where's the texture superiority?

The PC has Far Cry 1 (the forest-simulation killer app), which the GeForce 256 can run.

[embedded video]

Avatar image for tormentos
tormentos

33784

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#49 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

Both the PC (income tax credit) and the Dreamcast (game sales) have subsidy methods.

NVIDIA's GPUs are being used for AI and deep learning, e.g. from Tesla vehicles to the F-35 Block 4 avionics upgrade.

with Unreal Engine 4.

https://www.eetimes.com/document.asp?doc_id=1331727#

Tesla went with in-house AI chip design; Tesla disliked their secret sauce being leaked to other car manufacturers.

Microsoft and Sony teamed up for an AI-related business venture besides the PlayStation-related cloud solution.

NO, you don't have any subsidy when you go to the damn store to buy a Pentium 3 in 1999. The damn thing was $700+ in quantities of 1,000, which is a retailer price.

If you wanted a damn 800 MHz Pentium 3 in December 1999, you would have paid probably $800 or $900 for just the damn CPU.

Nvidia always made demos which supposedly showed what the hardware was capable of; in reality they were demos with no AI, nothing going on, and actual graphics in games never matched them.

The SH4 was 200 MHz and you needed an 800 MHz Pentium 3 to beat it? That is not even comparable; for the price of those in 1999 you could buy 4 Dreamcast systems.

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#50  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

Both the PC (income tax credit) and the Dreamcast (game sales) have subsidy methods.

NVIDIA's GPUs are being used for AI and deep learning, e.g. from Tesla vehicles to the F-35 Block 4 avionics upgrade.

with Unreal Engine 4.

https://www.eetimes.com/document.asp?doc_id=1331727#

Tesla went with in-house AI chip design; Tesla disliked their secret sauce being leaked to other car manufacturers.

Microsoft and Sony teamed up for an AI-related business venture besides the PlayStation-related cloud solution.

NO, you don't have any subsidy when you go to the damn store to buy a Pentium 3 in 1999. The damn thing was $700+ in quantities of 1,000, which is a retailer price.

If you wanted a damn 800 MHz Pentium 3 in December 1999, you would have paid probably $800 or $900 for just the damn CPU.

Nvidia always made demos which supposedly showed what the hardware was capable of; in reality they were demos with no AI, nothing going on, and actual graphics in games never matched them.

The SH4 was 200 MHz and you needed an 800 MHz Pentium 3 to beat it? That is not even comparable; for the price of those in 1999 you could buy 4 Dreamcast systems.

A business, contractor, or job-related expense can offset income tax payments to the government. A PC happens to qualify as a job-related expense.

----

The Xbox's Pentium III E at 733 MHz with a 133 MHz front-side bus beat the Dreamcast on Dead or Alive 2's polygon count. Higher MHz speeds up other operations like branch logic, e.g. intersect or compare tests.

For a 4D dot-product instruction, PowerPC and x86 CPUs waited for VMX-128 and SSE4 respectively. The late 1990s were the middle of the GHz race, hence Intel and AMD focused on clock speed.
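For reference, a minimal example of the SSE4.1 4D dot-product instruction mentioned above (standard intrinsics; the function name is my own):

```c
#include <smmintrin.h>  /* SSE4.1 intrinsics, including DPPS */

float dot4(const float a[4], const float b[4])
{
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    /* 0xF1: multiply all four lanes, place the summed result in lane 0. */
    __m128 d = _mm_dp_ps(va, vb, 0xF1);
    return _mm_cvtss_f32(d);
}
```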

Btw, the XBO has slightly higher polygon power than the PS4, but the XBO was defeated by the PS4 in most games' resolution and fps. The PS4 follows the original Xbox concept.

The Dreamcast was dead by 2001, along with the SuperH-4 CPU's 4D-instruction "superiority". Microsoft didn't pick a SuperH CPU for the Xbox, and MS knew the Dreamcast hardware.

https://lwn.net/Articles/647636/

The SuperH architecture is so dense that a 2009 research paper [PDF] plotted it ahead of every architecture other than x86, x86_64, and CRIS v32.

AMD has reached high-clock-speed mastery and added SSE4 to Jaguar, while SuperH is long dead in desktop CPUs.