Greatest disparity in console hardware per gen?

#101  Edited By achilles614
Member since 2005 • 5310 Posts

@The_Last_Ride said:
@Heirren said:

@The_Last_Ride:

Absolutely not. The best visuals on n64 are far better than anything on psx.

Cartridges had less information than CDs, dude

How is that relevant to pixel processing? Having more RAM/ROM doesn't give you a better core.

Of course a CD can hold more audio/visual assets, but that point wasn't entirely clear from this quote chain.

@tormentos What are you on about? FLOPS as a unit of measurement was practically created solely for comparing DIFFERENT architectures. It's basically asking "how many times can this chip multiply/add/do some operation per second?" On the damn Wikipedia page for FLOPS there is even a table organized by FLOPS for different supercomputers (with different architectures).
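A rough sketch of how that headline number comes about, with made-up specs rather than any real console's (the helper name peak_flops is purely illustrative):

# Theoretical peak FLOPS = cores * clock (Hz) * FLOPs issued per core per cycle.
def peak_flops(cores, clock_hz, flops_per_cycle):
    # Assumes every unit issues its maximum floating-point ops every cycle.
    return cores * clock_hz * flops_per_cycle

chip_a = peak_flops(cores=4, clock_hz=1.6e9, flops_per_cycle=8)  # hypothetical chip
chip_b = peak_flops(cores=2, clock_hz=3.2e9, flops_per_cycle=4)  # hypothetical chip
print(f"Chip A: {chip_a / 1e9:.1f} GFLOPS, Chip B: {chip_b / 1e9:.1f} GFLOPS")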

#102 SecretPolice
Member since 2007 • 44167 Posts

Last gen, the Wii was the weakest of weak sauce compared to the other two consoles.

#103 tormentos
Member since 2003 • 33784 Posts

@HalcyonScarlet said:

Yes, I fully imagine the PS3's RSX is better than the NV25. Don't know if you meant the Xbox's NV2A. Especially given that the RSX came out 5 years later.

The Xbox performed to a better standard more consistently. It probably wasn't just down to the GPU. It had more RAM, bigger storage, and the benefit of an HDD over the GC. And I fully acknowledge that the Xbox probably had the worst CPU out of the three.

In my experience games just looked better on the Xbox. Also, the GC and Wii GPU seems incapable of performing any AA.

Even if the Xbox wasn't vastly more powerful, it showed better performance in more games.

But as for the PS2, no, it was well known for being significantly weaker than the other two.

Exactly, and even the RSX could not replicate that, let alone the Xbox. Some things were better on the PS2, most were not, but again, the Xbox didn't land on the same day or month the PS2 did; it came almost two years later, so it was a given it had to carry stronger hardware.

It sure did, but at that time most multiplatform games were quick cash grabs; in fact, this site slammed several for being nothing but PS2 ports with better AA. The Xbox's power wasn't used much by multiplatform games.

Yes, it had more RAM, double in fact, but at the same time it didn't have embedded RAM on its GPU like the PS2 did, which is quite something because the Xbox line didn't get that until the 360, when MS chose that route with EDRAM (and later ESRAM). Sony was already there, and the bus connecting that memory to the GPU was a huge 2560-bit bus, which was unheard of, giving the PS2 48GB/s for its GPU, something not even the PS3 had.
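A back-of-the-envelope check of that 48GB/s figure, assuming the commonly cited 2560-bit combined eDRAM bus and a roughly 150 MHz clock for the PS2's Graphics Synthesizer (a sketch, not a spec sheet):

bus_width_bits = 2560          # combined read + write + texture paths
clock_hz = 150e6               # approximate GS clock, per public spec sheets
bytes_per_second = (bus_width_bits / 8) * clock_hz
print(f"{bytes_per_second / 1e9:.0f} GB/s")  # ~48 GB/s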

Yes, it showed, but never what MS claimed it would do. Did you see multiplatform games on Xbox running at double the frame rate of the PS2 versions? The Xbox didn't have the power for that; the NV2A was also a little overhyped by Nvidia and MS, just like Nvidia did with the RSX later. The Xbox never had 2 times the in-game performance of the PS2 when all was said and done.

It was a more capable machine, no denying that, and far easier to code for as well, but calling the PS2 far weaker, or 10 times weaker like a dude here said, is a total joke.

@roboed said:

@tormentos: All those games were screw-ups on the Xbox One, just like Bayonetta was on the PS3

I'm using your own reasoning

Also, Shadow of Mordor was patched to be 1080p on the Xbox

I can also list games that run better on the xbox one

How do you explain that?

http://www.anandtech.com/bench/product/1126?vs=1127

No, unless you want to ignore this: the Xbox One has a weaker GPU with no way to close the gap, and it also has a cumbersome memory structure that hurts it.

The PS3 was actually more powerful than the Xbox 360, unlike this case, but the Xbox 360 was far easier to code for; the PS3 was a nightmare.

Shadow of Mordor was never patched and is not 1080p; it is 900p and stayed that way. It was this site that claimed it was 1080p, but DF confirmed it was 900p, and that version has less foliage too.

Name them; most of them are not on par, are lower in resolution, and have been known to be buggy or crap jobs like Resident Evil. In reality, any game that performs worse on PS4 is a screwed-up game on Sony's platform, period. There is no other explanation, because the Xbox One is weaker and always will be. Look at ACU, which was held back resolution-wise on PS4, a game for which MS had a deal to include millions of copies with their machine, when the last one showed vast oceans with storms and multiple ships fighting and was 1080p on PS4.

Any game that runs at the same resolution on both machines will be faster on PS4, or else something was screwed up on PS4; there is no way around that.

@jereb31 said:

Uhhh, FLOPS is most definitely a measure of the performance of hardware, though only one of them.

You could say that hardware between different architectures works differently, which is totally true. But to get a measure of how each GPU would work in both architectures, you would probably start by comparing the FLOPS of each.

A good example is the one you used earlier, the Xenos chip vs the RSX. The RSX had more FLOPS than the Xenos, but the system (important part there) worked better with the Xenos.

If you could swap the two chips between the two systems without any issues, then the Xenos would likely perform worse in the PS3 and the RSX would perform better in the Xbox 360.

No dude, comparing FLOPS between the PS4 and Xbox One is totally accurate because both machines have the same CPU and a GPU from the same line, so under those conditions it is an indication. On different hardware that is not the case, because regardless of one having more than the other, they work in different ways to achieve a goal. Say GPU 1 does 10 FLOPS while GPU 2 does 3 FLOPS, but it takes 5 FLOPS worth of work to run program B on the first one, while on GPU 2 it takes 1 FLOP to run that same program B; in the end you can do more with those 3 FLOPS than with the 10.
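A minimal sketch of that arithmetic, reusing the same made-up numbers (the helper runs_per_second is purely illustrative):

def runs_per_second(peak_flops, flops_needed_per_run):
    # How many times per second the program can complete at peak throughput.
    return peak_flops / flops_needed_per_run

gpu1 = runs_per_second(peak_flops=10, flops_needed_per_run=5)  # 2 runs per second
gpu2 = runs_per_second(peak_flops=3, flops_needed_per_run=1)   # 3 runs per second
print(gpu1, gpu2)  # the chip that looks weaker on paper finishes more work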

Actually, if you used a Xenos in the PS3 the gap would be even bigger, because the Xenos is capable of running the processes that were offloaded to Cell because the RSX was too weak. If you ran those on Xenos, you would still have a very capable CPU, but now free to invest in even more things, and Cell was very capable graphics-wise, unlike the Xenon.

#104 tormentos
Member since 2003 • 33784 Posts

@achilles614 said:
@The_Last_Ride said:
@Heirren said:

@The_Last_Ride:

Absolutely not. The best visuals on n64 are far better than anything on psx.

Cartridges had less information than CDs, dude

How is that relevant to pixel processing? Having more RAM/ROM doesn't give you a better core.

Of course a CD can hold more audio/visual assets, but that point wasn't entirely clear from this quote chain.

@tormentos What are you on about? FLOPS as a unit of measurement was practically created solely for comparing DIFFERENT architectures. It's basically asking "how many times can this chip multiply/add/do some operation per second?" On the damn Wikipedia page for FLOPS there is even a table organized by FLOPS for different supercomputers (with different architectures).

No it wasn't; it was created to measure the performance of each individual processor. Comparing FLOPS across different hardware is useless and pointless, and this has been proven multiple times over the years, just like core clock speed is pretty useless for comparing performance.

I already proved my point, and not with Wikipedia; I used Anandtech, which showed two GPUs, one with more FLOPS than the other, and the one with fewer FLOPS actually outperforming the one with more.

FLOPS mean crap. The Intel Pentium 3 CPU in the Xbox had more FLOPS than the Emotion Engine, but for 3D the EE walked all over that Pentium 3 despite having fewer FLOPS and less than half the clock speed.

There are cases and cases: for example, Cell has more FLOPS than the Xenon and Cell is more powerful, yet the RSX has more FLOPS than the Xenos and is much weaker.

#105 Shibua
Member since 2014 • 467 Posts

X1 >>> PS4 > Wii U

X1 is the only console with 1080p@60FPS exclusives

Forza 5 & 6

Halo 5

#106  Edited By roboed
Member since 2014 • 79 Posts

@tormentos: So what I get from your reply is the Xbox One has a weaker GPU and cumbersome memory, so it's worse

But the PS3 had a weaker GPU and cumbersome memory, yet it was better than the 360. It all makes sense

Also, if an Xbox One game performed better than the PS4 game, it's because Microsoft paid them to do it

Brilliant it all makes sense now. Thanks for clearing it up

I think what you're saying is you can't always judge the overall power of a system by raw specs alone. It's a balance of it all together.

I completely agree with that

#107 Jereb31
Member since 2015 • 2025 Posts

@tormentos said:
@achilles614 said:
@The_Last_Ride said:
@Heirren said:

@The_Last_Ride:

Absolutely not. The best visuals on n64 are far better than anything on psx.

Cartridges had less information than CDs, dude

How is that relevant to pixel processing? Having more RAM/ROM doesn't give you a better core.

Of course a CD can hold more audio/visual assets, but that point wasn't entirely clear from this quote chain.

@tormentos What are you on about? FLOPS as a unit of measurement was practically created solely for comparing DIFFERENT architectures. It's basically asking "how many times can this chip multiply/add/do some operation per second?" On the damn Wikipedia page for FLOPS there is even a table organized by FLOPS for different supercomputers (with different architectures).

No it wasn't; it was created to measure the performance of each individual processor. Comparing FLOPS across different hardware is useless and pointless, and this has been proven multiple times over the years, just like core clock speed is pretty useless for comparing performance.

I already proved my point, and not with Wikipedia; I used Anandtech, which showed two GPUs, one with more FLOPS than the other, and the one with fewer FLOPS actually outperforming the one with more.

FLOPS mean crap. The Intel Pentium 3 CPU in the Xbox had more FLOPS than the Emotion Engine, but for 3D the EE walked all over that Pentium 3 despite having fewer FLOPS and less than half the clock speed.

There are cases and cases: for example, Cell has more FLOPS than the Xenon and Cell is more powerful, yet the RSX has more FLOPS than the Xenos and is much weaker.

I feel like we are getting tangled up a bit on something else. Are we sure we are not making a comparison analogous to the AMD vs Intel CPUs, wherein the old AMD CPUs could perform more operations per clock cycle than the Intel ones (or vice versa, I can't remember)?

Because I'm quite certain FLOPS is exactly as he is describing it: a measure of how many floating-point arithmetic operations a processor can perform per second, nothing else. No software involved, nothing about the system it is installed in.

It's kind of like the performance value you gauge the hardware by before you put all the hardware together.

Need to acknowledge that FLOPS is not the be-all benchmark for how powerful a GPU/CPU is. It is purely a mathematically derived theoretical maximum of how many floating-point operations it could perform per second if it did that and only that.
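A small illustration of that "theoretical maximum" point, with hypothetical numbers (no real GPU is being measured here):

peak = 250e9        # hypothetical GPU: 250 GFLOPS theoretical maximum
sustained = 140e9   # hypothetical sustained rate on an actual workload
utilization = sustained / peak
print(f"Sustained {sustained / 1e9:.0f} of {peak / 1e9:.0f} GFLOPS "
      f"({utilization:.0%} of the theoretical maximum)")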

#108 achilles614
Member since 2005 • 5310 Posts

@jereb31 said:

I feel like we are getting tangled up a bit on something else. Are we sure we are not making a comparison analogous to the AMD vs Intel CPUs, wherein the old AMD CPUs could perform more operations per clock cycle than the Intel ones (or vice versa, I can't remember)?

Because I'm quite certain FLOPS is exactly as he is describing it: a measure of how many floating-point arithmetic operations a processor can perform per second, nothing else. No software involved, nothing about the system it is installed in.

It's kind of like the performance value you gauge the hardware by before you put all the hardware together.

Need to acknowledge that FLOPS is not the be-all benchmark for how powerful a GPU/CPU is. It is purely a mathematically derived theoretical maximum of how many floating-point operations it could perform per second if it did that and only that.

It should also be noted that how well a given piece of hardware performs depends on the benchmark workload, i.e. the specific operations that must be done (add, multiply, floating-point add or multiply). So it is entirely possible for hardware with higher FLOPS to lose to lower-FLOPS hardware on the right benchmark, and there are also other factors, such as memory system speed, that can change things. But it's still a good comparison metric that shouldn't be downplayed, especially in graphics/gaming hardware, which heavily uses floating point.
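A sketch of that workload point using a simple roofline-style estimate with hypothetical chips (attainable throughput is capped by either raw FLOPS or memory bandwidth times arithmetic intensity; none of these numbers describe real hardware):

def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    # Throughput is limited by whichever ceiling is hit first.
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

chip_a = {"peak_gflops": 300, "bandwidth_gbs": 25}  # more raw FLOPS
chip_b = {"peak_gflops": 200, "bandwidth_gbs": 50}  # more memory bandwidth
for intensity in (2, 20):  # FLOPs performed per byte fetched from memory
    a = attainable_gflops(chip_a["peak_gflops"], chip_a["bandwidth_gbs"], intensity)
    b = attainable_gflops(chip_b["peak_gflops"], chip_b["bandwidth_gbs"], intensity)
    print(f"intensity {intensity:2d} FLOPs/byte -> A: {a} GFLOPS, B: {b} GFLOPS")
# Memory-bound (low intensity): the lower-FLOPS chip B wins; compute-bound: A wins.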