World of Tanks, PS4 Pro vs Scorpio


#101 deactivated-5a30e101a977c
Member since 2006 • 5970 Posts

@tormentos said:

2- Oh, so the PS4 is 135 GB/s?

How would the unified system architecture and 8GB GDDR5 RAM help in making a better game? Gilray stated that, "It means we don't have to worry so much about stuff, the fact that the memory operates at around 172GB/s is amazing, so we can swap stuff in and out as fast as we can without it really causing us much grief."

http://gamingbolt.com/oddworld-inhabitants-dev-on-ps4s-8gb-gddr5-ram-fact-that-memory-operates-at-172gbs-is-amazing#pgkfTfljUo64IFpQ.99

Hahahahahaa Owned....

That's purely theoretical; it never actually achieves that. Which you know, because I know you've seen Sony's own presentation on this topic, with this slide.


#102 tormentos
Member since 2003 • 33784 Posts

@FastRobby said:
@tormentos said:

2- Oh, so the PS4 is 135 GB/s?

Hahahahahaa Owned....

That's purely theoretical; it never actually achieves that. Which you know, because I know you've seen Sony's own presentation on this topic, with this slide.

No, the theoretical figure is 176 GB/s, not 172... lol

That was Sony showing how the CPU can eat bandwidth in a disproportionate way. There is a part of that slide where the CPU uses 0 bandwidth, which is clearly just an example, since NO DEVICE WORKS WITH 0 BANDWIDTH.

I told Ronvalencia that ages ago, when he tried to use that chart against the PS4.

That is one of ronvalencia's famously misinterpreted charts, just like he was wrong about ESRAM, JIT compression, tiled resources and all the other crap he used to hype the Xbox One, which FAILED to close the gap.

And more recently the HYPOCRITE hyped FP16 and even dared to say Scorpio exceeded a GTX 1080 using FP16, only to downplay it now because he jumped the gun and Scorpio's GPU doesn't have that feature, and he was wrong about Vega and Ryzen too.


#103  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@FastRobby said:
@tormentos said:

2- Oh, so the PS4 is 135 GB/s?

Hahahahahaa Owned....

That's purely theoretical; it never actually achieves that. Which you know, because I know you've seen Sony's own presentation on this topic, with this slide.

No, the theoretical figure is 176 GB/s, not 172... lol

That was Sony showing how the CPU can eat bandwidth in a disproportionate way. There is a part of that slide where the CPU uses 0 bandwidth, which is clearly just an example, since NO DEVICE WORKS WITH 0 BANDWIDTH.

I told Ronvalencia that ages ago, when he tried to use that chart against the PS4.

That is one of ronvalencia's famously misinterpreted charts, just like he was wrong about ESRAM, JIT compression, tiled resources and all the other crap he used to hype the Xbox One, which FAILED to close the gap.

And more recently the HYPOCRITE hyped FP16 and even dared to say Scorpio exceeded a GTX 1080 using FP16, only to downplay it now because he jumped the gun and Scorpio's GPU doesn't have that feature, and he was wrong about Vega and Ryzen too.

You have cited a dev source who didn't design his own modern 3D engine.

For Oddworld: New 'n' Tasty (built on Unity3D), both the XBO and PS4 versions have the same resolution.

http://www.eurogamer.net/articles/digitalfoundry-2015-oddworld-new-n-tasty-face-off

As promised, native 1080p is deployed across both PS4 and Xbox One.

....

At its lowest point, Microsoft's console hands in 35fps metrics, compared with around 40fps on the PS4.

....

our budget gaming PC, featuring a Core i3 4130 and a GTX 750 Ti, again we find that the game runs very smoothly at 1080p with max settings

----------------

If the PS4 had 172 GB/s of effective bandwidth for actual game content, it should have rivaled the 7950 in multi-platform games.

The gap between 135 GB/s and 176 GB/s doesn't disappear, since low-level memory handling needs non-data bookkeeping operations, i.e. GDDR5 is not register-level SRAM storage with zero latency.

R9-290X: 320 GB/s physical ---> ~263 GB/s effective bandwidth

7950: 240 GB/s physical ---> ~182 GB/s effective bandwidth

172 GB/s effective bandwidth vs 176 GB/s physical bandwidth would be ~98 percent efficiency, which rivals L2 cache and is absurd for GDDR5, i.e. NVIDIA couldn't even manage that with their Pascal-era GDDR5.
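A quick sanity check on those efficiency claims (a minimal sketch; the physical and effective figures are the ones quoted in this thread, not independent measurements):

```python
# Memory-bandwidth efficiency implied by the figures quoted in this thread.
# All values are GB/s as claimed by the posters, not measured numbers.
claims = {
    "PS4 (Gilray's 172 figure)": (176.0, 172.0),
    "PS4 (Sony's 135 figure)":   (176.0, 135.0),
    "R9-290X":                   (320.0, 263.0),
    "HD 7950":                   (240.0, 182.0),
}

for name, (physical, effective) in claims.items():
    efficiency = effective / physical * 100.0
    print(f"{name:26s} {physical:5.0f} -> {effective:5.0f} GB/s ({efficiency:4.1f}% efficient)")

# The 172 GB/s claim implies ~97.7% efficiency, versus ~76.7% for Sony's
# 135 GB/s figure and ~82% / ~76% for the two PC GCN cards above.
```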

Sony >>>>>>>>>>> 3D Engine modder e.g. Stewart Gilray

Sony's statements on PS4's effective memory bandwidth still stand.

@tormentos said:

That is one of ronvalencia's famously misinterpreted charts, just like he was wrong about ESRAM, JIT compression, tiled resources and all the other crap he used to hype the Xbox One, which FAILED to close the gap.

Bullshit. My W5000 example shows that a 1.3 TFLOPS GCN part with 153.6 GB/s didn't match the 7850.

JIT compression = memory bandwidth related

Tiled resources = memory bandwidth related

ESRAM = memory bandwidth related

None of those address the ALU-bound issues.

Your assertion is wrong. My W5000 example shows I didn't support the other lems' POV that XBO and PS4 GPU performance would match. My point was that XBO could exceed the 7770's results but would not match the 7850, i.e. the W5000 falls between the 7770 and the 7850.

My ranking of the relevant low-end GPUs, from low to high:

7770/R7-250X

W5000

7850

R7-265

You can't handle the middle-ground POV. You have omitted my W5000 example viewpoint.

@tormentos said:

And more recently the HYPOCRITE hyped FP16 and even dared to say Scorpio exceeded a GTX 1080 using FP16, only to downplay it now because he jumped the gun and Scorpio's GPU doesn't have that feature, and he was wrong about Vega and Ryzen too.

Wrong. I placed Scorpio's GPU solution between the RX-480 OC/R9-390X and Vega 10/GTX 1080 Ti, and that was in a speculation thread.

Facts

1. Scorpio's "more than 320 GB/s" rivals the GTX 1080's 320 GB/s of physical memory bandwidth. The revealed Scorpio has 326 GB/s of physical memory bandwidth.

2. Scorpio's Forza 6 wet-track result was better than GTX 1070/Fury X-class results, i.e. no dips below 60 fps. DF still maintained the GTX 1070/Fury X-class assignment for Scorpio despite that result.

Later GTX 1070 cards have improved memory bandwidth, e.g. GDDR5-9000.

-----------------

Since you have turned this topic into a personality war...


#104 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

1. Since current games already use lesser datatypes like FP16 and integers, any additional math operation and data element processing will require additional memory bandwidth.

32-bit processing occurs on the chip. AMD GCN version 1.1 TMU and ROP units still have 16-bit datatype read and write modes.

Tegra X1's issues have nothing to do with AMD GCN version 1.1.

Sony 1st party dev >>>>>>>>>>>>>>>>> YOU. You have misapplied TX1's issues to AMD GCN version 1.1.

Scorpio vs R9-390X

The R9-390X has a 1 MB L2 cache, which is less than Scorpio's 2 MB L2 cache. Scorpio can hold more data on chip than the R9-390X.

GM200 and GP104 have 2 MB L2 caches. GP102 has a 3 MB L2 cache.

For games like Forza Horizon 3, this is important for tiled forward-plus rendering 3D engines.

The R9-290/R9-290X has 320 GB/s of physical memory bandwidth with ~263 GB/s of effective memory bandwidth, hence the R9-390X's 384 GB/s would be around 315.6 GB/s.

The R9-390X's 384 GB/s doesn't benefit from DCC, hence its effective memory bandwidth is about 315 GB/s, which is less than Scorpio's 326 GB/s of physical memory bandwidth and ~336 GB/s of effective memory bandwidth with compression.

The R9-390X doesn't have DCC to recover bandwidth lost to memory-subsystem inefficiencies.

Scorpio's Forza 6 wet-track results >>>>>>>>>>>>>>> R9-390X.

I ran my GTX 1080 Ti down to 6.4 TFLOPS while keeping memory bandwidth as is; the result is still a sustained 4K/60 fps. So, Fack you.

Show me a Hawaii XT GPU benchmark that delivers 1:1 physical-to-effective memory bandwidth! Prove it.

The RX-480 (5.83 TFLOPS) has 264 GB/s of effective bandwidth, hence results very close to the R9-290 (4.8 TFLOPS) and R9-290X (5.6 TFLOPS).

3. Shared-memory PS4 vs non-shared-memory R7-265

http://www.eurogamer.net/articles/digitalfoundry-2016-we-built-a-pc-with-playstation-neo-gpu-tech

For Witcher 3, SW Battlefront and SF5, there's minimal difference between the shared-memory PS4 and the non-shared-memory R7-265.

A normal PC doesn't have Fusion links.

@tormentos said:

You are a dishonest poster who hyped FP16 and claimed Scorpio exceeds a GTX 1080; now you downplay it because you were wrong and Scorpio doesn't have it. You are a JOKE LEMMING.

Polaris has DCC and packed sub-word FP16 math.

http://gpuopen.com/using-sub-dword-addressing-on-amd-gpus-with-rocm/

With up to 65536 values representable per 16-bit datatype, data throughput is doubled but the number of math operations stays the same.

DirectX12 Shader Model 5.1 hasn't enabled the above feature.

135 GB/s of effective memory bandwidth for the PS4 is about 76 percent efficient, which is in line with PC GCN GPUs.

Try again Sony ass kisser

This is all I will address from your shitty hypocrite post.

The PS4 uses unified RAM just like Scorpio, so if the PS4 has 76% memory efficiency, so does Scorpio. Downplaying the R9-390X's 384 GB/s using "effective memory" means that also applies to Scorpio, lemming, and this is why I call you a HYPOCRITE: you want to pretend 100% of Scorpio's bandwidth is for the GPU and that it is 100% effective as well. BOTH are totally wrong.

So the PS4 loses 24% of its bandwidth?

Let's apply that loss to Scorpio as well.

326 GB/s - 24% = ~247 GB/s <- this is Scorpio's bandwidth after applying the same loss as on the PS4.

But hey, let's pretend Scorpio has 85% effective memory, which I am sure it DOESN'T.

326 GB/s - 15% = ~277 GB/s

Even if we pretend Scorpio has 85% effective bandwidth (which it doesn't), it is still far lower than the 315 GB/s you claim (pulled from your ass) the R9-390X has.
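To make that arithmetic explicit (a small sketch; the 76% and 85% efficiency factors are the ones being argued in this thread, not measured values):

```python
# Apply the same assumed memory-efficiency factors to both consoles.
SCORPIO_PHYSICAL = 326.0   # GB/s, revealed Scorpio figure
PS4_PHYSICAL     = 176.0   # GB/s, PS4 GDDR5 peak

for efficiency in (0.76, 0.85):
    scorpio_eff = SCORPIO_PHYSICAL * efficiency
    ps4_eff = PS4_PHYSICAL * efficiency
    print(f"at {efficiency:.0%}: Scorpio ~{scorpio_eff:.0f} GB/s, PS4 ~{ps4_eff:.0f} GB/s")

# 76% -> Scorpio ~248 GB/s, PS4 ~134 GB/s
# 85% -> Scorpio ~277 GB/s, PS4 ~150 GB/s
```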

And I will end this post by showing how skewed and biased your opinions are in favor of MS, to the point where you invent pitfalls for AMD GPUs in order to downplay them vs Scorpio, and how you simply choose to conveniently ignore what you know is true.

The R9-390X's 384 GB/s doesn't benefit from DCC, hence its effective memory bandwidth is about 315 GB/s, which is less than Scorpio's 326 GB/s of physical memory bandwidth and ~336 GB/s of effective memory bandwidth with compression.

This ^^ is you, in the very post I am quoting. Let me highlight the wrong and stupid parts of it, which prove without a shadow of a doubt that you are an MS suck-up.

1- Here ^^ you claim the R9-390X has 315 GB/s because it doesn't have DCC. Not having DCC doesn't decrease your bandwidth; DCC helps compress data and saves bandwidth, which is different, but not having it will not magically reduce your bandwidth.

2- Look how you use effective memory on the R9-390X to claim it drops from 384 GB/s to 315 GB/s, but for Scorpio you claim 326 GB/s + DCC = 336 GB/s. There is a problem with this: Scorpio's effective bandwidth isn't 100% either; like the PS4, it has what, 76% effective memory?

3- YOU ARE ONCE AGAIN COMPARING THE R9-390X'S 315 GB/S, WHICH IS SOLELY GPU MEMORY, NOT SHARED, VS SCORPIO'S 320 GB/S, WHICH IS SHARED.

DO YOU KNOW WHAT THE WORD SHARED MEANS? SCORPIO DOESN'T FREAKING HAVE 320 GB/S FOR THE GPU. IT CAN'T, BECAUSE THE DAMN CPU AND SYSTEM NEED BANDWIDTH TO OPERATE. ON XBOX ONE, 30 GB/S IS RESERVED; FOLLOWING THAT SAME LINE, SCORPIO ONLY HAS 296 GB/S, AND THAT IS WITHOUT TAKING EFFECTIVE MEMORY BANDWIDTH LOSS INTO ACCOUNT.

YOU ARE A DISHONEST POSTER.

Scorpio will always be more powerful than the Pro, but your hyping and constant, deliberately dishonest arguments just to prop up Scorpio are a joke.


#105 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

You have cited a dev source who didn't design his own modern 3D engine.

For Oddworld: New 'n' Tasty (built on Unity3D), both the XBO and PS4 versions have the same resolution.

http://www.eurogamer.net/articles/digitalfoundry-2015-oddworld-new-n-tasty-face-off

----------------

If the PS4 had 172 GB/s of effective bandwidth for actual game content, it should have rivaled the 7950 in multi-platform games.

R9-290X: 320 GB/s physical ---> ~263 GB/s effective bandwidth

7950: 240 GB/s physical ---> ~182 GB/s effective bandwidth

Sony >>>>>>>>>>> 3D Engine modder e.g. Stewart Gilray

Sony's statements on PS4's effective memory bandwidth still stand.

According to your shitty argument, the Xbox One has more bandwidth than the PS4... lol

But by a miracle of GOD the PS4 has less bandwidth yet is able to push 40% more performance. A miracle, right... hahahahaa

The PS4 can't rival a 7950, you blind lemming, because it doesn't have the same number of stream processors and CUs. In fact the 7950 doesn't have 176 GB/s, it has 240 GB/s on a 384-bit bus.

Claiming an 18 CU, 256-bit, 176 GB/s PS4 should rival a 28 CU, 384-bit, 240 GB/s card is out of this world. It is downright moronic.

The PS4 could have 360 GB/s with its 18 CUs and it would never touch or get close to a 7950. Power on a GCN GPU comes from the stream processors inside the CUs; the more you have, the more performance you get.

Oh look, you are using Oddworld as some kind of proof that the PS4 doesn't use 172 GB/s?

On a technical level we see the PS4 and Xbox One versions of Oddworld featuring the same standard of graphical quality as the PC game running at maximum settings

That game is not demanding. From your own link, dishonest poster: that game's maximum settings on PC = console... lol

So much for using that game as some kind of benchmark. So yeah, the Oddworld developer >>>> YOU. Who are you?

Hahahahahahaaaaaaaaaaaaaaaaaa


#106 ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

You have cited a dev source who didn't design his own modern 3D engine.

For Oddworld: New 'n' Tasty (built on Unity3D), both the XBO and PS4 versions have the same resolution.

http://www.eurogamer.net/articles/digitalfoundry-2015-oddworld-new-n-tasty-face-off

----------------

If the PS4 had 172 GB/s of effective bandwidth for actual game content, it should have rivaled the 7950 in multi-platform games.

R9-290X: 320 GB/s physical ---> ~263 GB/s effective bandwidth

7950: 240 GB/s physical ---> ~182 GB/s effective bandwidth

Sony >>>>>>>>>>> 3D Engine modder e.g. Stewart Gilray

Sony's statements on PS4's effective memory bandwidth still stand.

1. According to your shitty argument, the Xbox One has more bandwidth than the PS4... lol

2. But by a miracle of GOD the PS4 has less bandwidth yet is able to push 40% more performance. A miracle, right... hahahahaa

3. The PS4 can't rival a 7950, you blind lemming, because it doesn't have the same number of stream processors and CUs. In fact the 7950 doesn't have 176 GB/s, it has 240 GB/s on a 384-bit bus.

4. Claiming an 18 CU, 256-bit, 176 GB/s PS4 should rival a 28 CU, 384-bit, 240 GB/s card is out of this world. It is downright moronic.

5. The PS4 could have 360 GB/s with its 18 CUs and it would never touch or get close to a 7950. Power on a GCN GPU comes from the stream processors inside the CUs; the more you have, the more performance you get.

6. Oh look, you are using Oddworld as some kind of proof that the PS4 doesn't use 172 GB/s?

On a technical level we see the PS4 and Xbox One versions of Oddworld featuring the same standard of graphical quality as the PC game running at maximum settings

7. That game is not demanding. From your own link, dishonest poster: that game's maximum settings on PC = console... lol

8. So much for using that game as some kind of benchmark. So yeah, the Oddworld developer >>>> YOU. Who are you?

Hahahahahahaaaaaaaaaaaaaaaaaa

1. XBO has higher memory write bandwidth than the PS4, which is useless for ALU-bound issues.

2. It's NOT magical. Your shit brain can't grasp graphics pipeline dependencies. Memory write performance is also dependent on math results being finished in a timely manner, i.e. you failed the simple graphics pipeline concept.

3. The 172 GB/s number is garbage. You are supporting the POV of a 3rd party 3D engine modder (using Unity3D) against 1st party Sony. LOL.

172 GB/s effective bandwidth vs 176 GB/s physical bandwidth would be ~98 percent efficiency, which rivals L2 cache and is absurd for GDDR5, i.e. NVIDIA couldn't even do that with their Pascal-era GDDR5, and NVIDIA needs its kick-ass DCC tech just to get close.

4.

The lesson from the RX-480: its superior 1.15X IPC improvement over GCN 1.1 and its 5.83 TFLOPS were still gimped by effective memory bandwidth.

GDDR5 is not register-level SRAM or L1/L2 SRAM cache.

5. The PS4 doesn't have 360 GB/s of external memory, so that's a useless argument.

6. I don't need to, since the PS4's Witcher 3, Battlefront and SF5 results match the PC R7-265's results.

7. Oddworld is a cross-generation game with PS3 considerations. Oddworld is not demanding, yet it dips to 35 fps (XBO/1080p) and 40 fps (PS4/1080p). It was smooth on the tile-rendering GTX 750 Ti.

8. My POV is based on 1st party Sony. 1st party Sony >>>>>>>>> 3rd party Unity3D modder.


#107  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

1. Since current games already use lesser datatypes like FP16 and integers, any additional math operation and data element processing will require additional memory bandwidth.

32-bit processing occurs on the chip. AMD GCN version 1.1 TMU and ROP units still have 16-bit datatype read and write modes.

Tegra X1's issues have nothing to do with AMD GCN version 1.1.

Sony 1st party dev >>>>>>>>>>>>>>>>> YOU. You have misapplied TX1's issues to AMD GCN version 1.1.

Scorpio vs R9-390X

The R9-390X has a 1 MB L2 cache, which is less than Scorpio's 2 MB L2 cache. Scorpio can hold more data on chip than the R9-390X.

GM200 and GP104 have 2 MB L2 caches. GP102 has a 3 MB L2 cache.

For games like Forza Horizon 3, this is important for tiled forward-plus rendering 3D engines.

The R9-290/R9-290X has 320 GB/s of physical memory bandwidth with ~263 GB/s of effective memory bandwidth, hence the R9-390X's 384 GB/s would be around 315.6 GB/s.

The R9-390X's 384 GB/s doesn't benefit from DCC, hence its effective memory bandwidth is about 315 GB/s, which is less than Scorpio's 326 GB/s of physical memory bandwidth and ~336 GB/s of effective memory bandwidth with compression.

The R9-390X doesn't have DCC to recover bandwidth lost to memory-subsystem inefficiencies.

Scorpio's Forza 6 wet-track results >>>>>>>>>>>>>>> R9-390X.

I ran my GTX 1080 Ti down to 6.4 TFLOPS while keeping memory bandwidth as is; the result is still a sustained 4K/60 fps. So, Fack you.

Show me a Hawaii XT GPU benchmark that delivers 1:1 physical-to-effective memory bandwidth! Prove it.

The RX-480 (5.83 TFLOPS) has 264 GB/s of effective bandwidth, hence results very close to the R9-290 (4.8 TFLOPS) and R9-290X (5.6 TFLOPS).

3. Shared-memory PS4 vs non-shared-memory R7-265

http://www.eurogamer.net/articles/digitalfoundry-2016-we-built-a-pc-with-playstation-neo-gpu-tech

For Witcher 3, SW Battlefront and SF5, there's minimal difference between the shared-memory PS4 and the non-shared-memory R7-265.

A normal PC doesn't have Fusion links.

@tormentos said:

You are a dishonest poster who hyped FP16 and claimed Scorpio exceeds a GTX 1080; now you downplay it because you were wrong and Scorpio doesn't have it. You are a JOKE LEMMING.

Polaris has DCC and packed sub-word FP16 math.

http://gpuopen.com/using-sub-dword-addressing-on-amd-gpus-with-rocm/

With up to 65536 values representable per 16-bit datatype, data throughput is doubled but the number of math operations stays the same.

DirectX12 Shader Model 5.1 hasn't enabled the above feature.

135 GB/s of effective memory bandwidth for the PS4 is about 76 percent efficient, which is in line with PC GCN GPUs.

Try again Sony ass kisser

1. This is all I will address from your shitty hypocrite post.

2. The PS4 uses unified RAM just like Scorpio, so if the PS4 has 76% memory efficiency, so does Scorpio. Downplaying the R9-390X's 384 GB/s using "effective memory" means that also applies to Scorpio, lemming, and this is why I call you a HYPOCRITE: you want to pretend 100% of Scorpio's bandwidth is for the GPU and that it is 100% effective as well. BOTH are totally wrong.

So the PS4 loses 24% of its bandwidth?

Let's apply that loss to Scorpio as well.

326 GB/s - 24% = ~247 GB/s <- this is Scorpio's bandwidth after applying the same loss as on the PS4.

But hey, let's pretend Scorpio has 85% effective memory, which I am sure it DOESN'T.

326 GB/s - 15% = ~277 GB/s

Even if we pretend Scorpio has 85% effective bandwidth (which it doesn't), it is still far lower than the 315 GB/s you claim (pulled from your ass) the R9-390X has.

And I will end this post by showing how skewed and biased your opinions are in favor of MS, to the point where you invent pitfalls for AMD GPUs in order to downplay them vs Scorpio, and how you simply choose to conveniently ignore what you know is true.

The R9-390X's 384 GB/s doesn't benefit from DCC, hence its effective memory bandwidth is about 315 GB/s, which is less than Scorpio's 326 GB/s of physical memory bandwidth and ~336 GB/s of effective memory bandwidth with compression.

This ^^ is you, in the very post I am quoting. Let me highlight the wrong and stupid parts of it, which prove without a shadow of a doubt that you are an MS suck-up.

1. This is all I will address from your shitty hypocrite post.

2. The R9-390X doesn't have Polaris DCC to recover memory bandwidth lost to inefficiencies.

I applied Polaris's memory bandwidth inefficiencies to Scorpio and then added the Polaris DCC boost. I treated both Scorpio and PS4 Pro as Polaris-type GPUs.

Where's your Polaris DCC boost? Neither PS4 Pro nor Scorpio is GCN 1.1.

The Polaris DCC boost enabled the RX-480's results to land on the R9-290's results.

R9-390X's 30 fps result vs RX-480's 25 fps result = 1.20X difference

315 GB/s vs 264 GB/s = 1.19X difference

315 GB/s was based on the R9-290X's efficiency, i.e. 320 GB/s --> ~263 GB/s. That's about 82 percent efficient.

Notice that the frame rate difference and the effective memory bandwidth difference are similar.

980 Ti's 34 fps result vs RX-480's 25 fps result = 1.36X difference

364 GB/s vs 264 GB/s = 1.38X difference

Notice again that the frame rate difference and the effective memory bandwidth difference are similar. Beyond3D's TMU benchmark is pretty accurate for AMD and NVIDIA GPU rankings.

When you scale the effective bandwidth difference between the RX-480 (264 GB/s) and Scorpio (336 GB/s), which yields a 1.27X factor, against the RX-580's 28 fps result, it lands in the GTX 1070/Fury X range, i.e. around 35 fps.
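A compact restatement of that scaling argument (a sketch only; the frame rates and effective-bandwidth figures are the ones quoted above, and the assumption that frame rate scales in proportion to effective bandwidth is the poster's own model, not a general rule):

```python
# Frame-rate ratios vs effective-bandwidth ratios, using the figures quoted above.
rx480_fps, rx480_bw = 25.0, 264.0   # fps, GB/s effective

for name, (fps, bw) in {"R9-390X": (30.0, 315.0), "GTX 980 Ti": (34.0, 364.0)}.items():
    print(f"{name:10s} fps ratio {fps / rx480_fps:.2f}x, bandwidth ratio {bw / rx480_bw:.2f}x")

# Scaling an RX-580-class 28 fps result by Scorpio's claimed 336 GB/s effective bandwidth:
rx580_fps = 28.0
scorpio_estimate = rx580_fps * (336.0 / rx480_bw)
print(f"Scorpio estimate: ~{scorpio_estimate:.1f} fps")   # ~35.6 fps, i.e. GTX 1070/Fury X range
```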

YOU ARE A DISHONEST POSTER.

@tormentos said:

Even if we pretend Scorpio has 85% effective bandwidth (which it doesn't), it is still far lower than the 315 GB/s you claim (pulled from your ass) the R9-390X has.

And I will end this post by showing how skewed and biased your opinions are in favor of MS, to the point where you invent pitfalls for AMD GPUs in order to downplay them vs Scorpio, and how you simply choose to conveniently ignore what you know is true.

Where's your Polaris DCC boost? Neither PS4 Pro nor Scorpio is GCN 1.1.

315 GB/s was based on the R9-290X's efficiency, i.e. 320 GB/s --> ~263 GB/s. That's about 82 percent efficient. Where did you get 85 percent efficiency?

The R9-390X is just an overclocked version of the R9-290X.

My old MSI R9-290X factory OC was firmware-modded into an R9-390X. LOL. The card manufacturer had already binned the best Hawaii XT chips for the factory OC. My point: the R9-390X is not a new chip design compared to the R9-290X.

@tormentos said:

1- Here ^^ you claim the R9-390X has 315 GB/s because it doesn't have DCC. Not having DCC doesn't decrease your bandwidth; DCC helps compress data and saves bandwidth, which is different, but not having it will not magically reduce your bandwidth.

Wrong. DCC recovers memory bandwidth lost to memory-subsystem inefficiencies.

Your "not having DCC doesn't decrease your bandwidth" assertion is shit and I didn't claim that bullshit. YOU ARE A DISHONEST POSTER.

@tormentos said:

2- Look how you use effective memory on the R9-390X to claim it drops from 384 GB/s to 315 GB/s, but for Scorpio you claim 326 GB/s + DCC = 336 GB/s. There is a problem with this: Scorpio's effective bandwidth isn't 100% either; like the PS4, it has what, 76% effective memory?

Scorpio's effective memory bandwidth

Step 1: 326 GB/s x Polaris's 76 percent memory efficiency = 247.76 GB/s <---------- effective memory bandwidth before Polaris DCC.

Step 2: 247.76 GB/s x Polaris DCC boost of 1.36X = 336.96 GB/s <--------- the Polaris DCC value added over GCN 1.1.

Polaris DCC has a boost factor of 1.36X.
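The same two-step estimate written out (a sketch; both the 76 percent efficiency and the 1.36X DCC boost are assumptions carried over from Polaris, not measured Scorpio figures):

```python
# Two-step effective-bandwidth estimate for Scorpio as argued above.
PHYSICAL_BW = 326.0       # GB/s, Scorpio physical memory bandwidth
GCN_EFFICIENCY = 0.76     # assumed raw memory-subsystem efficiency (Polaris-like)
DCC_BOOST = 1.36          # assumed delta-color-compression gain (Polaris-like)

before_dcc = PHYSICAL_BW * GCN_EFFICIENCY   # ~247.8 GB/s before DCC
after_dcc = before_dcc * DCC_BOOST          # ~337.0 GB/s with DCC applied
print(f"before DCC: {before_dcc:.2f} GB/s, after DCC: {after_dcc:.2f} GB/s")
```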

YOU ARE A DISHONEST POSTER.

@tormentos said:

3- YOU ARE ONCE AGAIN COMPARING THE R9-390X'S 315 GB/S, WHICH IS SOLELY GPU MEMORY, NOT SHARED, VS SCORPIO'S 320 GB/S, WHICH IS SHARED.

Witcher 3, Battlefront and SF5 show PS4 ~= R7-265 results.

For Witcher 3, Battlefront and SF5, there's very little difference between the shared-memory PS4 and the non-shared-memory R7-265.

A normal PC with a dGPU doesn't have Fusion links. The Fusion link was added for a purpose.

A normal PC with a dGPU doesn't have zero-copy.

@tormentos said:

DO YOU KNOW WHAT THE WORD SHARED MEANS? SCORPIO DOESN'T FREAKING HAVE 320 GB/S FOR THE GPU. IT CAN'T, BECAUSE THE DAMN CPU AND SYSTEM NEED BANDWIDTH TO OPERATE. ON XBOX ONE, 30 GB/S IS RESERVED; FOLLOWING THAT SAME LINE, SCORPIO ONLY HAS 296 GB/S, AND THAT IS WITHOUT TAKING EFFECTIVE MEMORY BANDWIDTH LOSS INTO ACCOUNT.

30 GB/s is not reserved CPU bandwidth, you stupid clown. Modern x86 PCs with IGPs have used dynamic memory bandwidth allocation for years.

A normal PC with a dGPU doesn't have Fusion links.

A normal PC with a dGPU doesn't have zero-copy.

30 GB/s worth of x86 CPU memory traffic would already outstrip my old Intel Core i5-2500K's entire 128-bit DDR3-1600 setup.
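For context, the peak bandwidth of that dual-channel DDR3-1600 setup works out like this (a sketch using standard DDR arithmetic; the 30 GB/s figure is the one being argued about, not a measurement):

```python
# Peak theoretical bandwidth of a 128-bit (dual-channel) DDR3-1600 setup,
# compared against the 30 GB/s figure under discussion.
transfers_per_sec = 1600e6      # DDR3-1600: 1600 MT/s
bus_width_bytes = 128 // 8      # 128-bit bus = 16 bytes per transfer

peak_gb_s = transfers_per_sec * bus_width_bytes / 1e9
print(f"DDR3-1600 dual-channel peak: {peak_gb_s:.1f} GB/s")   # 25.6 GB/s, below 30 GB/s
```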

When the CPU is calculating AI and physics, the GPU is usually idle, i.e. waiting for the GPU command list.

You fail to grasp the producer (CPU) and consumer (GPU) relationship. The GPU does not process symmetrically alongside the CPU.

You fail to grasp the basic graphics pipeline concept, i.e. it works like a car assembly line.

---

Stage 0. New frame divider.

Stage 1 (Producer). AI and Physics calculation on CPU. Provides object positioning. Can be multi-threaded. The math problem can be tiled within the CPU's cache to reduce external memory access rates.

Stage 2 (Producer). GPU command-list generation on the CPU. Provides the GPU with the rendering commands, i.e. what to draw, what to shade, etc. Can be multi-threaded. Can be transferred to the GPU via Fusion links.

Stage 3 (Consumer). The GPU renders the viewport based on the GPU command list. During Stages 1 and 2 the GPU is usually idle. This is where GPU memory bandwidth is most important. The math problem can be tiled within the GPU's cache.

Stage 4. Process user input, then loop back to Stage 0.

---

Stages 1 through 4 occur within the ideal target frame time of 16 ms or 33 ms. This is the typical synchronous compute/graphics rendering model.

As you can see, Stages 1 and 2 offer opportunities for async compute while the GPU would otherwise sit idle.
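A minimal sketch of that producer/consumer frame loop (all function and object names here are hypothetical placeholders for illustration; this is only the structure described above, not any console's actual API):

```python
# Skeleton of the synchronous frame loop described in the stages above.
TARGET_FRAME_TIME = 1.0 / 60.0   # 16.6 ms budget (use 1/30 for a 33 ms budget)

def render_frame(world, gpu):
    # Stage 1 (producer): CPU updates AI and physics, producing object positions.
    positions = world.update_ai_and_physics()

    # Stage 2 (producer): CPU builds the command list telling the GPU what to draw.
    command_list = world.build_command_list(positions)

    # During stages 1-2 the GPU is mostly idle; async compute jobs can fill that gap.
    gpu.kick_async_compute(world.pending_compute_jobs())

    # Stage 3 (consumer): GPU renders the viewport from the command list.
    # This is where GPU memory bandwidth matters most.
    gpu.submit(command_list)

    # Stage 4: process input, then the caller loops back to stage 0 for the next frame.
    world.process_input()
```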

A game console is not like a Windows PC with bloated non-game background tasks.

YOU ARE A DISHONEST POSTER. Your error is treating the GPU as a symmetric processor alongside the CPU, when the GPU is a slave processor to the CPU.

Disproportionate memory bandwidth wastage occurs when you have heavy async compute (i.e. concurrent CPU and GPU operations) and excessive CPU memory access, but

  • The Scorpio/PS4 Pro GPUs have a 2 MB L2 cache, 4X larger than the Radeon HD 7870's 512 KB L2 cache. A larger L2 cache keeps more data on the GPU.
  • The Scorpio/PS4 Pro APUs are equipped with a Fusion link, which reduces the CPU's external memory access rates.
  • XBO is not big on async compute, i.e. its main focus is the custom D3D12 command processor, which moves more CPU API logic into the GPU.

Async compute is about using idle GPU resources during the CPU producer stages. A tile-to-L2-cache programming model reduces external memory access rates. The PS4 is big on async compute.

Shared memory PS4 vs Non-shared memory R7-265

http://www.eurogamer.net/articles/digitalfoundry-2016-we-built-a-pc-with-playstation-neo-gpu-tech

We have a Sapphire R7 265 in hand, running at 925MHz - down-clock that to 900MHz and we have a lock with PS4's 1.84 teraflops of compute power.

In our Face-Offs, we like to get as close a lock as we can between PC quality presets and their console equivalents in order to find the quality sweet spots chosen by the developers. Initially using Star Wars Battlefront, The Witcher 3 and Street Fighter 5 as comparison points with as close to locked settings as we could muster, we were happy with the performance of our 'target PS4' system. The Witcher 3 sustains 1080p30, Battlefront hits 900p60, SF5 runs at 1080p60 with just a hint of slowdown on the replays - just like PS4. We have a ballpark match

...

at straight 1080p on the R7 265-powered PS4 surrogate. Medium settings is a direct match for the PS4 version here and not surprisingly, our base-level PS4 hardware runs it very closely to the console we're seeking to mimic

...

Take the Witcher 3, for example. Our Novigrad City test run hits a 33.3fps average on the R7 265 PS4 target hardware - pretty much in line with console performance.

For Witcher 3, SW Battlefront and SF5, there's very minimal difference between the shared-memory PS4 and the non-shared-memory R7-265.

Move more CPU rendering logic to the GPU. Reduce the CPU's external memory access rates.

The key concept: while the GPU's main sync compute/graphics render is active within the 16 ms/33 ms frame time, the CPU stays within its L2 cache and defers external memory access until the next frame starts.

YOU ARE A DISHONEST POSTER.

@tormentos said:

YOU ARE A DISHONEST POSTER.

YOU ARE A DISHONEST POSTER.

@tormentos said:

Scorpio will always be more powerful than the Pro, but your hyping and constant, deliberately dishonest arguments just to prop up Scorpio are a joke.

Until you show me an R9-390X delivering Forza 6's Nürburgring wet track at 4K/60 fps without frame rate drops, your argument is DISHONEST.


#108 deactivated-5a30e101a977c
Member since 2006 • 5970 Posts

@tormentos said:
@FastRobby said:
@tormentos said:

2- Oh, so the PS4 is 135 GB/s?

Hahahahahaa Owned....

That's purely theoretical; it never actually achieves that. Which you know, because I know you've seen Sony's own presentation on this topic, with this slide.

No, the theoretical figure is 176 GB/s, not 172... lol

That was Sony showing how the CPU can eat bandwidth in a disproportionate way. There is a part of that slide where the CPU uses 0 bandwidth, which is clearly just an example, since NO DEVICE WORKS WITH 0 BANDWIDTH.

Indeed, no device works with 0 bandwidth, so it really never achieves that theoretical bandwidth number you are spouting here. The only one owned here is you.


#109  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos:

http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects

"On the GPU we added some compressed render target formats like our 6e4 [6 bit mantissa and 4 bits exponent per component] and 7e3 HDR float formats [where the 6e4 formats] that were very, very popular on Xbox 360, which instead of doing a 16-bit float per component 64bpp render target, you can do the equivalent with us using 32 bits - so we did a lot of focus on really maximising efficiency and utilisation of that ESRAM."

XBO's GPU has additional features over other GCNs, e.g. 64-bit (FP16 per component) render targets ---> compressed into 32 bits.

XBO's GCN has 10-bit FP support under DX12! LOL.

When XBO's compressed data format customizations are combined with 6 TFLOPS of brute force, it would beat the PC RX-480 and R9-390X.

The GTX 1070 effectively runs into a memory bandwidth wall when Forza 6's wet track is used, and Scorpio has changed the rules with its customized compressed data formats.

Note that XBO's customized compressed data formats will not solve ALU/shader-bound issues, but Scorpio's 6 TFLOPS will address that.
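A rough illustration of why those packed render target formats matter for bandwidth (a sketch; the single full-screen write per frame is a simplifying assumption for illustration, not a figure from the interview):

```python
# Back-of-the-envelope render-target write bandwidth for one full-screen write per frame.
# 64 bpp = four FP16 components; 32 bpp = the packed 7e3/6e4-style HDR formats
# described in the Digital Foundry interview.
WIDTH, HEIGHT, FPS = 1920, 1080, 60

for name, bits_per_pixel in (("FP16x4 (64 bpp)", 64), ("packed HDR (32 bpp)", 32)):
    bytes_per_frame = WIDTH * HEIGHT * bits_per_pixel // 8
    gb_per_sec = bytes_per_frame * FPS / 1e9
    print(f"{name:20s} ~{gb_per_sec:.2f} GB/s just for this one render target")

# Halving the per-pixel footprint halves the write traffic for that target,
# which is the efficiency argument being made for the ESRAM-era formats.
```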


#110 DragonfireXZ95
Member since 2005 • 26649 Posts

All the resolution upgrades in the world couldn't make this game look good.


#111  Edited By tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

@tormentos:

http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects

"On the GPU we added some compressed render target formats like our 6e4 [6 bit mantissa and 4 bits exponent per component] and 7e3 HDR float formats [where the 6e4 formats] that were very, very popular on Xbox 360, which instead of doing a 16-bit float per component 64bpp render target, you can do the equivalent with us using 32 bits - so we did a lot of focus on really maximising efficiency and utilisation of that ESRAM."

XBO's GPU has additional features over other GCNs, e.g. 64-bit (FP16 per component) render targets ---> compressed into 32 bits.

XBO's GCN has 10-bit FP support under DX12! LOL.

When XBO's compressed data format customizations are combined with 6 TFLOPS of brute force, it would beat the PC RX-480 and R9-390X.

The GTX 1070 effectively runs into a memory bandwidth wall when Forza 6's wet track is used, and Scorpio has changed the rules with its customized compressed data formats.

Note that XBO's customized compressed data formats will not solve ALU/shader-bound issues, but Scorpio's 6 TFLOPS will address that.

On the GPU we added some compressed render target formats like our 6e4 [6 bit mantissa and 4 bits exponent per component] and 7e3 HDR float formats [where the 6e4 formats] that were very, very popular on Xbox 360,

In other words, OLD SHIT done on the Xbox 360, used again....

Much like the command processor modification, which they also did on the Xbox 360; Xbox One and Scorpio are the same shit.

So again, Scorpio doesn't have double-pumped FP16 and needs extra resources to do what the PS4 Pro can do with less. This is a FACT, not my opinion, as that feature is not the same; and considering it is on the Xbox One, which TOTALLY failed vs the PS4 power-wise, even more so. Trying to imply it would work like FP16 is a joke; if that were the case the XBO would match the PS4 using it.. hahhaaa

The Forza wet track means shit; even DriveClub does a wet track and it's on a damn 1.86 TF GPU. Forza is not a BENCHMARK for games; it is a shitty-looking, unimpressive game that was 1080p/60fps since day 1 on Xbox One, while other more demanding games could not even hit 900p.

@FastRobby said:

Indeed, no device works with 0 bandwidth, so it really never achieves that theoretical bandwidth number you are spouting here. The only one owned here is you.

If you don't know what you are arguing about, please stay away. Exactly: no device works with zero bandwidth, which means the more the PS4 uses the CPU, the less bandwidth it can use for the GPU. That applies to Scorpio too, as it uses the same unified memory structure from exactly the same vendor.. lol

So yeah, you are clueless, and so is he for trying to argue 135 GB/s.. lol

@ronvalencia said:

1. This is all I will address from your shitty hypocrite post.

2. The R9-390X doesn't have Polaris DCC to recover memory bandwidth lost to inefficiencies.

Scorpio's effective memory bandwidth

Step 1: 326 GB/s x Polaris's 76 percent memory efficiency = 247.76 GB/s <---------- effective memory bandwidth before Polaris DCC.

Step 2: 247.76 GB/s x Polaris DCC boost of 1.36X = 336.96 GB/s <--------- the Polaris DCC value added over GCN 1.1.

Polaris DCC has a boost factor of 1.36X.

YOU ARE A DISHONEST POSTER.

My god, you simply will not admit being wrong and will continue to spew the same bullshit over and over..

Come back to me when Scorpio can use 100% of its bandwidth for the GPU like the R9-390X can.


#112 deactivated-5a30e101a977c
Member since 2006 • 5970 Posts

@tormentos said:

@FastRobby said:

Indeed, no device works with 0 bandwidth, so it really never achieves that theoretical bandwidth number you are spouting here. The only one owned here is you.

If you don't know what you are arguing about, please stay away. Exactly: no device works with zero bandwidth, which means the more the PS4 uses the CPU, the less bandwidth it can use for the GPU. That applies to Scorpio too, as it uses the same unified memory structure from exactly the same vendor.. lol

You are completely correct in this post; thing is, you weren't saying it like this before. Scorpio > PS4 Pro by 3x the gap there was between the Xbox One and PS4. The difference is huge.


#113  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

@tormentos:

http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects

"On the GPU we added some compressed render target formats like our 6e4 [6 bit mantissa and 4 bits exponent per component] and 7e3 HDR float formats [where the 6e4 formats] that were very, very popular on Xbox 360, which instead of doing a 16-bit float per component 64bpp render target, you can do the equivalent with us using 32 bits - so we did a lot of focus on really maximising efficiency and utilisation of that ESRAM."

XBO's GPU has additional features over other GCNs, e.g. 64-bit (FP16 per component) render targets ---> compressed into 32 bits.

XBO's GCN has 10-bit FP support under DX12! LOL.

When XBO's compressed data format customizations are combined with 6 TFLOPS of brute force, it would beat the PC RX-480 and R9-390X.

The GTX 1070 effectively runs into a memory bandwidth wall when Forza 6's wet track is used, and Scorpio has changed the rules with its customized compressed data formats.

Note that XBO's customized compressed data formats will not solve ALU/shader-bound issues, but Scorpio's 6 TFLOPS will address that.

On the GPU we added some compressed render target formats like our 6e4 [6 bit mantissa and 4 bits exponent per component] and 7e3 HDR float formats [where the 6e4 formats] that were very, very popular on Xbox 360,

In other words, OLD SHIT done on the Xbox 360, used again....

Much like the command processor modification, which they also did on the Xbox 360; Xbox One and Scorpio are the same shit.

So again, Scorpio doesn't have double-pumped FP16 and needs extra resources to do what the PS4 Pro can do with less. This is a FACT, not my opinion, as that feature is not the same; and considering it is on the Xbox One, which TOTALLY failed vs the PS4 power-wise, even more so. Trying to imply it would work like FP16 is a joke; if that were the case the XBO would match the PS4 using it.. hahhaaa

The Forza wet track means shit; even DriveClub does a wet track and it's on a damn 1.86 TF GPU. Forza is not a BENCHMARK for games; it is a shitty-looking, unimpressive game that was 1080p/60fps since day 1 on Xbox One, while other more demanding games could not even hit 900p.

@FastRobby said:

Indeed, no device works with 0 bandwidth, so it really never achieves that theoretical bandwidth number you are spouting here. The only one owned here is you.

If you don't know what you are arguing about, please stay away. Exactly: no device works with zero bandwidth, which means the more the PS4 uses the CPU, the less bandwidth it can use for the GPU. That applies to Scorpio too, as it uses the same unified memory structure from exactly the same vendor.. lol

So yeah, you are clueless, and so is he for trying to argue 135 GB/s.. lol

@ronvalencia said:

1. This is all I will address from your shitty hypocrite post.

2. The R9-390X doesn't have Polaris DCC to recover memory bandwidth lost to inefficiencies.

Scorpio's effective memory bandwidth

Step 1: 326 GB/s x Polaris's 76 percent memory efficiency = 247.76 GB/s <---------- effective memory bandwidth before Polaris DCC.

Step 2: 247.76 GB/s x Polaris DCC boost of 1.36X = 336.96 GB/s <--------- the Polaris DCC value added over GCN 1.1.

Polaris DCC has a boost factor of 1.36X.

YOU ARE A DISHONEST POSTER.

My god, you simply will not admit being wrong and will continue to spew the same bullshit over and over..

Come back to me when Scorpio can use 100% of its bandwidth for the GPU like the R9-390X can.

Too bad for you, the RX-480's DCC + 256 GB/s of physical memory bandwidth was able to rival the R9-290/R9-290X's 320 GB/s of physical memory bandwidth and its results.

The physical memory bandwidth gap between the RX-480 and R9-290X = 64 GB/s.

The physical memory bandwidth gap between Scorpio and the R9-390X = 58 GB/s.

Witcher 3, Battlefront and SF5 have shown very little difference between the non-shared-memory R7-265 and the shared-memory PS4. Deal with it.

The most important memory bandwidth interval is after the CPU's AI and physics calculations and before the next frame starts. Even PC discrete GPUs don't fully use their VRAM memory bandwidth, AND PCI-E 3.0 x16 transfer bandwidth has to be factored in.

A game console has very few background processes, hence very little memory bandwidth context-switch thrashing.

The memory bandwidth available after the CPU's AI and physics calculations and before the next frame scales with the effective memory bandwidth increase.

My memory bandwidth arguments refer to the memory bandwidth in that window, after the CPU's AI and physics calculations and before the next frame.

Furthermore, a custom Direct3D12 command processor that reduces the CPU instructions needed per API call reduces CPU usage.

The GPU usually sits idle during the CPU's AI and physics calculations, since object locations have to be determined before any GPU viewport render.

You fail to grasp the raster graphics pipeline and the producer/consumer processing model. You are wrong.

Disproportionate memory bandwidth inefficiencies in CPU/GPU interactions are due to memory context-switch overheads.

Tricks to mitigate this issue are as follows:

1. Use hardware buffers, i.e. hold the new context's data transfers in a buffer until the memory context has switched. Both the PS4 Pro and Scorpio GPUs have a larger 2 MB L2 cache.

2. Move more CPU rendering logic to the GPU, e.g. Scorpio has a programmable custom Direct3D12 command processor that bakes in Direct3D12 function calls.

3. Use batched submissions.

4. Respect the CPU's L2 cache boundaries.

5. Use the Fusion link as much as possible.

@tormentos said:

1. In other words, OLD SHIT done on the Xbox 360, used again....

2. Much like the command processor modification, which they also did on the Xbox 360; Xbox One and Scorpio are the same shit.

3. So again, Scorpio doesn't have double-pumped FP16 and needs extra resources to do what the PS4 Pro can do with less. This is a FACT, not my opinion, as that feature is not the same; and considering it is on the Xbox One, which TOTALLY failed vs the PS4 power-wise, even more so.

4. Trying to imply it would work like FP16 is a joke; if that were the case the XBO would match the PS4 using it.. hahhaaa

5. The Forza wet track means shit; even DriveClub does a wet track and it's on a damn 1.86 TF GPU. Forza is not a BENCHMARK for games; it is a shitty-looking, unimpressive game that was 1080p/60fps since day 1 on Xbox One, while other more demanding games could not even hit 900p.

1. Old doesn't automatically equal bad. The x86 instruction set has disadvantages and advantages, and x86-64 filtered out most of the x86 disadvantages, i.e. it cleaned up the old instruction set. x86-64 kept its instruction compression/code density advantage.

2. They are not the same. The Xbox 360's version scales with its GPU power.

3. Your claim that "Scorpio doesn't have double-pumped FP16 and needs extra resources to do what the PS4 Pro can do with less" is not 100 percent correct:

  • Game console programmers are already using lesser datatypes such as FP16 as their input and output formats, e.g. Killzone Shadow Fall.
  • XBO has a custom compressed render target format that turns 64 bits (quad-packed FP16) into 32 bits. That's a 2:1 compression ratio. DCC is a small-block compression, which is different from word-size compression.
  • Polaris has double-rate data processing (sub-word addressing) without double-rate math operations. This is a platform-specific optimization, and Scorpio's Forza 6 demo didn't use platform-specific optimizations. Your argument on this issue is a red herring.

Your argument doesn't factor in that game console programmers are already using lesser datatypes such as FP16 as their input and output formats.

When lesser datatypes are already in use for transferring data to and from the GPU, the double-rate FP16 feature pays off when there's a matching memory bandwidth increase.
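To illustrate what packed FP16 storage means for data throughput (a sketch using NumPy; it shows only the storage-size effect of the smaller datatype, not how a GPU actually issues packed math):

```python
import numpy as np

# Two FP16 values fit in the space of one FP32 value, so the same buffer carries
# twice as many numbers, while the count of math operations per value is unchanged.
# (A 16-bit type can represent 2**16 = 65536 distinct values.)
values = np.random.rand(1024).astype(np.float32)

as_fp32 = values                      # 1024 values * 4 bytes = 4096 bytes
as_fp16 = values.astype(np.float16)   # 1024 values * 2 bytes = 2048 bytes

print(as_fp32.nbytes, as_fp16.nbytes)       # 4096 2048
print(as_fp16.astype(np.float32).dtype)     # promoted back to float32 for full-precision math
```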

Too bad for you, Vega 10 has a high-bandwidth cache that acts like a 3rd-level cache, while GP100 has 2X the physical memory bandwidth of GP102.

My GTX 980 Ti beating a PS4 Pro game that uses the double-rate FP16 feature was mostly due to DCC a generation ahead of Polaris and a 337 GB/s physical memory bandwidth advantage.

4. Your statement shows you don't understand the impact of XBO's custom compressed render target format, i.e. it doesn't solve ALU-bound issues, you stupid cow. LOL

You fail to grasp the basic graphics pipeline concept. A render target memory write requires the math operation to be completed in a timely manner! hahahahaha

5. PS4's DriveClub resolution is not 4K, you stupid cow. Scorpio is designed to take XBO's 1600x900 (and likewise PS4's and XBO's 1920x1080) up to 4K.


#114 SUD123456
Member since 2007 • 6950 Posts

@DragonfireXZ95 said:

All the resolution upgrades in the world couldn't make this game look good.

The best thing about this game is that none of that matters. Indeed, X360 players have a slight advantage over XOne players because they have less fake foliage and shorter, sparser grass, which makes targeting easier, especially when ridge fighting. This is the same reason some PC players turn off the extra visuals.

It's funny that the two muppets above, arguing about this in the context of this game, are completely clueless about what actually matters in it.


#115 ronvalencia
Member since 2008 • 29612 Posts

@SUD123456 said:
@DragonfireXZ95 said:

All the resolution upgrades in the world couldn't make this game look good.

The best thing about this game is that none of that matters. Indeed, X360 players have a slight advantage over XOne players because they have less fake foliage and shorter, sparser grass, which makes targeting easier, especially when ridge fighting. This is the same reason some PC players turn off the extra visuals.

It's funny that the two muppets above, arguing about this in the context of this game, are completely clueless about what actually matters in it.

My main point in this topic is the reversal of the resolution gate.