The Real Power of the X1

This topic is locked from further discussion.

#301 Posted by tormentos (17311 posts) -

The irrelevancy is your post. You have NOT addressed computation performance in relation to CU count and memory bandwidth. There is no limit to your ignorance.

Your simpleton mind is limited to some marketing codenames or model numbers.

X1 doesn't have the 7790's 96 GB/s memory bandwidth; X1's memory bandwidth is a theoretical 68 GB/s (four 64-bit controllers) plus 204 GB/s ESRAM (four 256-bit controllers with full duplex).

You have NOT addressed my points on why the 7790 has issues with faster memory.

I have shown you 7790 > 7850 on pure CU-bound workloads, hence it has the same potential as any other GCN, but within the limits of its CU count.

The prototype 7850 has 12 WORKING CUs and a theoretical 153.6 GB/s memory bandwidth. Disabled CUs play no part in computation performance.

X1's GCN is not gimped by the 7770's 1 billion triangles per second and 72 GB/s memory bandwidth bottlenecks.

Hahahaa...

1.18 TF vs 1.84 TF.

172 GB/s PS4.

140 GB/s to 150 GB/s Xbox One.

10% GPU reservation for Kinect and Snap; nothing like that on PS4.

Killzone SF is 1080p 60 FPS online; Ryse is 900p 30 FPS, and Killzone SF still looks better.

BF4 is said to be 900p on PS4 at mid-to-high settings.

BF4 on Xbox One is 720p at low-to-mid settings.

The full 7790 is 20% slower than the 7850, and that is the 1.79 TF one; imagine a 1.18 TF one.

MS stated 14 CUs, not 12. 14, period. Unless you have a link to back you up, you are just making redundant sh** out of damage control..hahah

A link, not your opinion. I want a link.
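The bandwidth figures traded above follow from simple bus arithmetic. A minimal sketch, under stated assumptions: DDR3-2133 on four 64-bit controllers, ESRAM at 853 MHz on four 256-bit lanes, and the 204 GB/s peak counting a full read stream plus writes landing on 7 of every 8 cycles (the commonly cited explanation for that figure; not an official spec):

```python
def bus_bandwidth_gbs(transfers_per_s, bus_bits):
    """Peak bandwidth in GB/s: transfer rate times bus width in bytes."""
    return transfers_per_s * (bus_bits / 8) / 1e9

ddr3 = bus_bandwidth_gbs(2133e6, 4 * 64)           # four 64-bit controllers -> ~68.3 GB/s
esram_one_way = bus_bandwidth_gbs(853e6, 4 * 256)  # four 256-bit lanes -> ~109.2 GB/s
esram_peak = esram_one_way * (1 + 7 / 8)           # read + 7/8-duty write -> ~204.7 GB/s

print(round(ddr3, 1), round(esram_one_way, 1), round(esram_peak, 1))  # 68.3 109.2 204.7
```

The 68 GB/s and 204 GB/s numbers argued over in this thread both drop out of this arithmetic.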

#302 Posted by Deevoshun (855 posts) -

Interesting stuff; the Xbox guys are actually supporting their claims via links and other good information. Good reading for me while here at work.

Sony fans, you should support your claims, or at least try, so that they seem somewhat logical (tormentos). It does seem like MS put forth a more architecturally advanced system in the XB1. Trying to compare PS4's off-the-shelf parts to XB1's is plain wrong; pulling out graphics cards and claiming this is what the system is because they have SOME similarities is faulty.

Also, as another poster pointed out, with all this hardware the only way to push it is with powerful software, an area MS has excelled at with their DirectX tech. OpenGL has fallen behind DirectX, and Mantle is NOT for the consoles. I ran into this issue with BF4: Battlefield 4 ran better on Windows 8.

http://www.gameranx.com/updates/id/16390/article/bf4-will-run-better-on-windows-8-than-windows-7/

http://www.bf4blog.com/battlefield-4-on-windows-8-will-have-better-cpu-optimization/

#303 Edited by ronvalencia (15109 posts) -

@tormentos:

You're the one making redundant sh** out of damage control. Your mind can't get away from simple marketing model numbers and code names.

As MS also mentioned, the 10 per cent reservation is only temporary, LOL.

http://www.edge-online.com/news/gaijin-games-on-why-war-thunder-isnt-coming-to-xbox-one/

How much more powerful?

AY: It depends what you're doing. GPU, like 40 per cent more powerful. DDR5 is basically 50 per cent more powerful than DDR3, but the memory write [performance] is bigger on Xbox One, so it depends on what you're doing.

http://gamingbolt.com/oddworld-inhabitants-dev-on-ps4s-8gb-gddr5-ram-fact-that-memory-operates-at-172gbs-is-amazing

Gilray stated: "It means we don't have to worry so much about stuff; the fact that the memory operates at around 172GB/s is amazing, so we can swap stuff in and out as fast as we can without it really causing us much grief."

The 7870 XT's memory operates at around 192 GB/s, LOL. So what...

---

X1 has 12 CUs at 853 MHz with a real 55 GB/s DDR3 + 150 GB/s ESRAM memory bandwidth.

The prototype 7850 has 12 CUs at 860 MHz with a theoretical 153.6 GB/s memory bandwidth.

---

The 7790 (96 GB/s theoretical) has less memory bandwidth than the 7850 (153.6 GB/s theoretical), LOL. Only a fool can't see that.
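The headline TFLOPS numbers thrown around in these posts all come from one formula: CUs x 64 lanes x 2 FLOPs (one fused multiply-add) per cycle x clock. A sketch using the CU counts and clocks quoted in the thread (the thread's 1.18 TF Xbox One figure corresponds to an earlier, lower clock estimate of roughly 768 MHz rather than the final 853 MHz):

```python
def gcn_tflops(cus, clock_ghz):
    # Each GCN CU: 64 shader lanes x 2 FLOPs (FMA) per cycle.
    return cus * 64 * 2 * clock_ghz / 1000

print(round(gcn_tflops(12, 0.853), 2))  # X1, 12 CUs @ 853 MHz -> 1.31
print(round(gcn_tflops(18, 0.800), 2))  # PS4, 18 CUs @ 800 MHz -> 1.84
print(round(gcn_tflops(14, 1.000), 2))  # 7790, 14 CUs @ 1 GHz  -> 1.79
print(round(gcn_tflops(16, 0.860), 2))  # retail 7850           -> 1.76
```

Note this counts only enabled CUs, which is why disabled CUs contribute nothing to the compute figure.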

#304 Edited by tormentos (17311 posts) -

@ronvalencia:

lol

From what I can see, you can't make anything for yourself; even my replies you copied and pasted...hahaha

No, MS said that later they could free up something. How much, who knows; fact is they could be bluffing for all we know. Those reservations were made so that the system did not suffer while multitasking, running Snap or Kinect.

As of now it is 10%, period.

The Xbox One doesn't have a 7870 XT, so that is totally irrelevant to the topic at hand. But this is not new: when you see yourself pinned in an argument without anything to reply, you run and hide behind PC, as if that changed anything. The Xbox One doesn't have a 7870 XT..haha

The Xbox One has 14 CUs, of which 2 are disabled because MS disabled them to get better redundancy; they even thought about using them. So yeah, it is a 7790 with 2 CUs off and a lower clock speed.

Yeah, that number is also cut back to 140 GB/s, and I am sure those are the very best-case scenarios; I know it could be lower.

The Xbox One's 1.18 TF is 600 GFLOPS less than a 7790. Only a trueeeeeeeeee fooooollll can't see that..hahahah

#305 Posted by FoxbatAlpha (6191 posts) -

@tormentos:

I know you don't need a link because everything is wrong.

#306 Edited by delta3074 (17920 posts) -

When most multiplatform games ran at lower resolutions on the PS3, it did not matter: "looks the same to me". Now, according to cows, resolution is the deciding factor when choosing which game has the best graphics. Cows must be dizzy from all those U-turns.

#307 Edited by ronvalencia (15109 posts) -

@tormentos:

1. "Copying and pasting" from SDK examples, source documentation, or other materials is part of the job, LOL. It would be a waste of time to hack a GPU driver without documentation.

2. No, MS haven't implemented the two concurrent render pipes for game and system services, hence they rely on the 10 per cent time-slicing reservation method.

http://www.eurogamer.net/articles/digitalfoundry-microsoft-to-unlock-more-gpu-power-for-xbox-one-developers

"Xbox One has a conservative 10 per cent time-sliced reservation on the GPU for system processing. This is used both for the GPGPU processing for Kinect and for the rendering of concurrent system content such as snap mode," Microsoft technical fellow Andrew Goossen told us.

"The current reservation provides strong isolation between the title and the system and simplifies game development - strong isolation means that the system workloads, which are variable, won't perturb the performance of the game rendering. In the future, we plan to open up more options to developers to access this GPU reservation time while maintaining full system functionality."

Once you get over the initial surprise that the background system takes up quite so much GPU time in the first place, the notion of being able to give developers access to this resource while not compromising functionality may sound rather like having your cake and eating it, but Microsoft points to particular aspects of the GPU hardware that make this scenario possible.

"In addition to asynchronous compute queues, the Xbox One hardware supports two concurrent render pipes," Goossen pointed out. "The two render pipes can allow the hardware to render title content at high priority while concurrently rendering system content at low priority. The GPU hardware scheduler is designed to maximise throughput and automatically fills 'holes' in the high-priority processing. This can allow the system rendering to make use of the ROPs for fill, for example, while the title is simultaneously doing synchronous compute operations on the compute units."

With dual rendering command pipes, X1's GPU should be able to give back the unused GPU power to devs when the system is not using Kinect and GUI snap features.

-------------

3. The context for the 7870 XT's 192 GB/s memory bandwidth has nothing to do with X1; it only deals with your "memory operates at X" interpretation.

4. http://semiaccurate.com/2013/08/30/a-deep-dive-in-to-microsofts-xbox-one-gpu-and-on-die-memory/

Microsoft sources SemiAccurate talked to say real-world code, the early stuff that is out there anyway, is in the 140-150GBps range, about what you would expect.

204 GB/s is ESRAM's theoretical bandwidth.

5. The 7790's PC benchmarks would NOT reflect X1's results since there are big differences in memory bandwidth, period.

Your 7790 selection doesn't come close to replicating 12 WORKING CUs at 853 MHz with real 55 GB/s DDR3 + 150 GB/s ESRAM bandwidth.

My view on the closeness between the two boxes is well supported by multiple statements from devs (with their names).
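Reading the SemiAccurate range above as measured ESRAM throughput (an assumption; the article is loose on exactly what the 140-150 GB/s covers), the gap between theoretical and delivered bandwidth works out as follows:

```python
# Peak vs quoted real-world figures, in GB/s. The DDR3 "real 55" and the
# ESRAM 140-150 range are the numbers cited in the posts above.
figures = {
    "DDR3": (68.3, 55.0),
    "ESRAM": (204.7, 145.0),  # midpoint of the 140-150 range
}
for name, (peak, real) in figures.items():
    print(name, round(100 * real / peak), "% of peak")  # DDR3 81, ESRAM 71
```

Roughly 70-80% of theoretical peak is in line with what sustained real-world code typically achieves on any memory system.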

#308 Posted by tormentos (17311 posts) -

@ronvalencia:

http://www.neogaf.com/forum/showpost.php?p=87073822&postcount=67

You can pack up all your crappy theories and move on. It is being said that COD Ghosts is 720p on Xbox One and 1080p on PS4. You lose.

I was right, you were wrong, and the Xbox One is weak sauce.

But but ESRAM, but but cache, but but 204 GB/s..

I told you so. It is 1.18 TF; those 660 GFLOPS will matter.

#309 Edited by ronvalencia (15109 posts) -

@tormentos:

LOL, that's not a confirmation.

There could be a ton of bugs in the OS and in the multi-player framework that causes a really bad user experience all around. They have one month remaining so we'll see.

The "There could be" statement indicates how far this poster is from the real CoD Ghosts source.

#310 Posted by stereointegrity (10701 posts) -

Ron find us more secret sauce!

#311 Edited by ronvalencia (15109 posts) -

@stereointegrity said:

Ron find us more secret sauce!

There's no secret sauce. I have already posted results from a GCN with 12 CUs @ 860 MHz with greater memory bandwidth than the 7790's.

The prototype 7850 (12 CUs at 860 MHz) is slower than the retail 7850 (16 CUs at 860 MHz), and both have the same memory bandwidth.

PS4's GCN is faster than the retail 7850 in both memory bandwidth and CU TFLOPS.

PS4's GCN is faster than the retail 7870 GE in memory bandwidth, but has less CU TFLOPS.
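The two ordering claims above can be checked against stock retail specs. A sketch under stated assumptions: the standard GCN FLOPS formula, reference 4.8 Gbps GDDR5 on a 256-bit bus for both the 7850 and 7870 GE, and the PS4's 176 GB/s official figure (the dev quoted earlier in the thread says "around 172"):

```python
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

def mem_gbs(gbps_per_pin, bus_bits):
    return gbps_per_pin * bus_bits / 8

ps4      = (tflops(18, 0.800), 176.0)               # 1.84 TF
hd7850   = (tflops(16, 0.860), mem_gbs(4.8, 256))   # 1.76 TF, 153.6 GB/s
hd7870ge = (tflops(20, 1.000), mem_gbs(4.8, 256))   # 2.56 TF, 153.6 GB/s

assert ps4[0] > hd7850[0] and ps4[1] > hd7850[1]     # faster than 7850 in both
assert ps4[0] < hd7870ge[0] and ps4[1] > hd7870ge[1] # less compute than 7870 GE, more bandwidth
print("both ordering claims hold")
```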

#312 Edited by xboxiphoneps3 (2304 posts) -

@Tighaman said:

@tormentos: I also said add-ons, not take-aways. Add SHAPE, add 15 co-processors, add over 50 MCUs and you have a different beast. You keep talking about GPU specs but not the balance of the whole system, and FYI you can pass anything through SHAPE. Like I said, you really need to read: the hardware scaler inside the X1 can emulate any GPU, and you won't see that on any fanboy site. Another thing: there are different CUs, they are not all measured the same, so stop comparing the 12 CUs in the X1 to the 18 in the PS4.

Different CUs, lol? Magical fairy CUs in the Xbox One, guys, you heard it here first. PlayStation 4's GPU on paper is superior in just about every way except for maybe one thing.

#313 Edited by xboxiphoneps3 (2304 posts) -

@kuu2 said:

@PinkiePirate said:

@kuu2 said:

This whole 50% thing is so hilarious and mimics last gens Cell fiasco that Sony went through.

Not at all. Both systems have a CPU and GPU based on a similar architecture from the same vendor, the PS4 just has a beefier GPU. Not like last gen at all.

Actually they are not based on similar architecture. One GPU is off the shelf and one is not. Very much like last gen.

Try again Sony Fan.

Thank you for pointing out that the Xbox One GPU is an off-the-shelf GPU and the other one (PS4) is not.

PS4's GPU is much more modified than the XB1's GPU. Try again; self-owned.

Volatile bit cache, improved compute performance, more ACEs.

#314 Posted by jimmypsn (4425 posts) -

MS got dat cloud powa!!

#315 Posted by MonsieurX (29727 posts) -

@jimmypsn said:

MS got dat cloud powa!!

People still believe in this?

#316 Edited by tormentos (17311 posts) -

@ronvalencia:

Hahahaa... Blaming the OS...

Yeah, what's the excuse for BF4 at 720p?

hahaha... You lose.

#317 Posted by tormentos (17311 posts) -

@ronvalencia:

Yeah, here's what you fail to say about your precious 12 CU 7850.

It has 32 ROPs; the Xbox One has 16. And it is Pitcairn; the Xbox One is Bonaire. While they are both GCN, they are different: Pitcairn is bigger.

#318 Posted by sukraj (22296 posts) -

The Xbox One is a beast.

#319 Edited by ronvalencia (15109 posts) -

@tormentos:

Did I degrade your precious PS4 toy?

All known CU designs are the same across GCN models, with the exception of Tahiti; Tahiti has proper support for 64-bit DP FP. Across the known GCNs, the CUs' bytes-per-cycle rates are the same. I haven't seen any AMD documentation showing different bytes-per-cycle rates for CUs.

Bitmining workloads are small enough that the 7790's superior TFLOPS (14 CUs at 1 GHz) let it beat the 7850.

Without memory bandwidth backing, 32 ROPs are almost pointless.

16 ROPs with read (4 byte) and write (8 byte) requirements can consume about 165 GB/s of raw bandwidth.

PS: 32-bit colour / 8 bits = 4 bytes.

This is why the 7970's 32 ROPs say hi to PS4's 32 ROPs, i.e. PS4's 32 ROPs don't have the memory bandwidth backing of the 7970's 264 GB/s (peak).

The main reason Pitcairn's core size is larger is mostly its 20 CU count and eight 32-bit memory controllers.

The prototype 7850 has 8 disabled CUs that don't contribute any compute performance.

You're not looking down at bytes-per-cycle rates, and you're limited to marketing code names.

PS: AMD's GCN documentation includes instruction cycle latencies.
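The "16 ROPs can consume about 165 GB/s" figure above is per-pixel traffic multiplied by clock. A sketch, assuming the 4-byte read plus 8-byte write per ROP per clock that the post describes:

```python
def rop_demand_gbs(rops, bytes_per_clock, clock_hz):
    """Raw bandwidth the ROPs can request at full rate, in GB/s."""
    return rops * bytes_per_clock * clock_hz / 1e9

x1  = rop_demand_gbs(16, 4 + 8, 853e6)  # ~163.8 GB/s, the ~165 figure above
ps4 = rop_demand_gbs(32, 4 + 8, 800e6)  # ~307.2 GB/s of potential demand
print(round(x1, 1), round(ps4, 1))      # 163.8 307.2
```

This is the arithmetic behind the post's point: 32 ROPs can demand roughly twice the bandwidth that a ~176 GB/s memory system can supply, so extra ROPs without extra bandwidth go underfed.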

#320 Posted by ronvalencia (15109 posts) -

@xboxiphoneps3:

The volatile bit is for making better use of the L2 cache (reserving a cache line for compute read/write). Older GCNs have an existing tag (write-protect tags) or use GDS.

GCNs like the 7970 have more L1 cache (16 KB x 32 CUs = 512 KB) and LDS (64 KB x 32 CUs = 2 MB) SRAM storage.

The 7970's 2 MB LDS makes the 7870 GE's 512 KB L2 cache look small. LDS has twice the bytes-per-cycle rate of L1 cache.

X1 has a larger 32 MB ESRAM for compute.

Both console boxes are about even in regards to GCN mods, i.e. designs that make do with less.
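The SRAM totals quoted above are just per-CU sizes multiplied out; a quick check:

```python
cus = 32                        # Tahiti (7970) has 32 CUs
l1_kb  = 16 * cus               # 16 KB vector L1 per CU -> 512 KB chip-wide
lds_kb = 64 * cus               # 64 KB LDS per CU -> 2048 KB = 2 MB chip-wide
print(l1_kb, lds_kb // 1024)    # 512 (KB) 2 (MB)
```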

#321 Edited by tormentos (17311 posts) -

@ronvalencia:

Now prove to me this applies to Xbox One vs PS4.

The PS4 is stronger than a 7850 and a 7790 FLOPS-wise, and the Xbox One GPU is weaker than a full 7790 too, so it is irrelevant.

Save it. 32 ROPs are on all Pitcairn and Tahiti GCNs; stop making excuses.

Exactly, which is the reason why it can't be Pitcairn. The Xbox One has 14 CUs with 2 disabled; the only GCN with those is the 7790.

Hey, Titanfall is said to be 720p now too.

So.

BF4 720p

COD Ghosts 720p

Titanfall 720p

Killer Instinct 720p

Hey, 2K Games will have a conference today. Who knows, maybe NBA 2K 720p too..hahaha

#322 Posted by delta3074 (17920 posts) -

@tormentos:

None of those resolutions are confirmed, just rumours.

Hint: rumours are not facts, dude. Ron is posting facts; you are posting rumours as counter-evidence. Give it up, dude clearly knows more about it than you, and you have yet to provide any solid evidence to counter his.

#323 Edited by cainetao11 (16998 posts) -

@cainetao11 said:

@_Matt_ said:

That secret sauce. Definitely this gen's equivalent of TEH CELL.

I was thinking that as well.

Probably because you ignore what Cell did, and you ignore how things are way different this time around.

The PS4 is stronger, and developers don't need to do magic to get performance from it; even unoptimized code dropped on the PS4 should run faster than on the Xbox One.

I'm all ears Sony lover. What did the cell do that was so great, that they aren't using it again?

#324 Edited by I_can_haz (6551 posts) -

Dem 720p rumors are causing a lot of meltdowns lately in the Cbone camp lol. I've never seen so much damage control in a single thread.

#325 Posted by delta3074 (17920 posts) -

Dem 720p rumors are causing a lot of meltdowns lately in the Cbone camp lol. I've never seen so much damage control in a single thread.

You should check out the other thread where the creator of Resident Evil says both consoles are roughly equal then, lol.

#326 Posted by ronvalencia (15109 posts) -

@

@ronvalencia said:

@

Did I degrade your precious PS4's toy?

All known CU designs are the same across GCN models with exception of Tahiti. Tahiti has proper support for 64bit DP FP. Across the known GCNs, the CU's byte per cycle rates are the same. I haven't seen any AMD documentation shows different byte per cycle rates for CUs.

Bitmining workloads are small enough that it enables 7790's superior TFLOPS (14 CUs at 1Ghz) to beat 7850.

Without memory bandwidth backing, 32 ROPs are almost pointless.

16 ROPS with read (4 byte) and write (8 byte) requirements can consume about 165 GB/s of raw bandwidth.

PS; 32 bit color / 8 bit = 4 bytes.

This is why 7970's 32 ROPS says hi to PS4's 32 ROPs i.e. PS4's 32 ROPS doesn't have memory bandwidth backing of 7970's 264 GB/s (peak).

The main reason why Pitcairn's core size is larger is mostly due to 20 CU count and 8 32bit memory controllers.

The prototype 7850 has 8 disabled CUs that don't contribute to any compute performance.

You're not looking down to byte-per-cycle rates, and you're limited to marketing code names.

PS; AMD's GCN documentation includes instruction cycle latencies.

Now prove to me this applies to the Xbox One vs PS4.

The PS4 is stronger than a 7850 and a 7790 FLOPS-wise, and the Xbox One GPU is weaker than a full 7790 too, so that's irrelevant.

Save it, 32 ROPs are on all Pitcairn and Tahiti GCNs, stop making excuses.

Exactly, which is the reason why it can't be Pitcairn; the Xbox One has 14 CUs with 2 disabled, and the only GCN with those is the 7790.

Hey, Titanfall is said to be 720p now too.

So:

BF4 720p

COD Ghosts 720p

Titanfall 720p

Killer Instinct 720p

Hey, 2K Games will have a conference today; who knows, maybe NBA2K will be 720p too... hahaha

You applied the PC 7790's fps results to the X1 and didn't factor in the 7790's memory bandwidth differences with the X1. I'm not disputing the general PS4 > X1; I'm disputing your methods for PS4 vs X1.

At the active CU count, the prototype 7850's 12 working CUs at 860 MHz are very close to X1's 12 working CUs at 853 MHz, i.e. similar wavefront buffer size, LDS size, LDS byte-per-cycle rates, L1 cache size, L1 cache byte-per-cycle rates, TMU load/store rates, etc.

32 ROPs don't operate in isolation, and ROP units are not very useful when they don't have memory bandwidth for writing results and reading data. You are ignorant of basic color fill rate operations, i.e. they need memory write performance. ROPs don't write binary data to a black hole; they need to fill binary data into some bucket, and that bucket is memory.
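The bandwidth-backing argument rests on simple fill-rate arithmetic; here is a sketch of it (the 4-byte read plus 8-byte write per ROP per clock, and the ~860 MHz clock, are the assumptions used earlier in this thread, not official figures):

```python
# Raw memory bandwidth that 16 ROPs can demand, per the thread's figures.
rops = 16
bytes_per_rop = 4 + 8        # assumed 4-byte read + 8-byte write per clock
clock_hz = 860e6             # prototype 7850 clock used in the comparison

demand_gb_s = rops * bytes_per_rop * clock_hz / 1e9
print(round(demand_gb_s, 1))  # → 165.1
```

Doubling to 32 ROPs doubles the demand to roughly 330 GB/s, which is the sense in which 32 ROPs without a bandwidth pool like the 7970's 264 GB/s are argued to be underfed.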

--

BF4: unconfirmed, but the PS4 is in a similar non-1080p boat.

COD Ghosts: unconfirmed, conflicting info. From http://www.gamespot.com/forums/system-wars-314159282/cod-on-xbox-one-will-run-at-1080p-60fps-30875337/#54 there is 1080p info for X1.

Titanfall: unconfirmed.

NBA2K: 1080p at ~60 fps. http://uk.ign.com/articles/2013/10/23/nba-2k14-runs-at-60fps-native-1080p-on-xbox-one-and-ps4?+main+twitter

http://www.gamepur.com/news/12532-titanfall-and-cod-ghosts-runs-720p-leaker-issue-clarification-also-comment-.html

#327 Posted by ronvalencia (15109 posts) -

@ronvalencia said:

LOL, that's not a confirmation.

There could be a ton of bugs in the OS and in the multi-player framework that causes a really bad user experience all around. They have one month remaining so we'll see.

"There could be" statement indicates how far this poster is from the real CoD Ghost source.

Hahahaa... Blaming the OS...

Yeah, what's the excuse for BF4 at 720p?

hahaha... You lost.

In regards to the OS bugs, it's your link. All I did was question the statements made in the link you posted.

I wasn't using the "There could be a ton of bugs in the OS and in the multi-player framework that causes a really bad user experience all around. They have one month remaining so we'll see" statement as an excuse; I was questioning its validity, LOL.

#328 Posted by tormentos (17311 posts) -

None of those resolutions are confirmed, just rumours.

Hint: rumours are not facts, dude. Ron is posting facts; you are posting rumours as counter-evidence. Give it up, dude clearly knows more about it than you, and you have yet to provide any solid evidence to counter his.

Yeah, Ron knows so much that he spent months denying that the Xbox One GPU is 7790 Bonaire, while I spent months telling him that it is Bonaire with 2 CUs disabled for yields.

I spent months telling him that there is a 10% GPU reservation, which he wanted to ignore even though links were presented to him.

You look stupid. Ron will bring up completely irrelevant crap to win an argument, like when he brought up the PC's 7870 XT as if the Xbox One had that GPU... lol

And about the rumors: isn't that what you lemmings have been fighting me about since last year? That the specs were rumor, that the online DRM policies were rumor, that Kinect being included was rumor, that no used games was rumor...

What happened? All of it got confirmed, from the weak specs to the policies and Kinect; MS is just unable to keep anything secret anymore.

But it's not like there are no signs: the Xbox One versions of multiplatform games are being hidden while the PS4 versions are shown, and developers are choosing to show the PS4 versions of games. Even Activision, which for years (and still now) has contracts with MS, is showing the PS4 version and not the Xbox One version. Dude, the Xbox is underpowered, deal with it.

I'm all ears Sony lover. What did the cell do that was so great, that they aren't using it again?

First of all, Cell is not being used because Sony is in no financial position to do something like it again.

Cell actually helped a weak GPU match and surpass a console with a stronger GPU and a more dev-friendly design.

So regardless of what you may think, Cell actually helped the PS3 stay close to, and in several cases surpass, the Xbox 360 graphically.

#329 Edited by tormentos (17311 posts) -

@I_can_haz said:

Dem 720p rumors are causing a lot of meltdowns lately in the Cbone camp lol. I've never seen so much damage control in a single thread.

You should check out the other thread where the creator of Resident Evil says both consoles are roughly equal then, lol.

If they were equal, the Xbox One version of COD would look exactly like the PS4 version and run at the same framerate too.

That Mikami quote will come back to haunt him a little later on, when the difference becomes even more apparent.

Again, a 6% GPU clock increase doesn't make up for 660 GFLOPS; the Xbox One has a 10% GPU reservation, and MS couldn't even make that transparent with that update.

@

You applied the PC 7790's fps results to the X1 and didn't factor in the 7790's memory bandwidth differences with the X1. I'm not disputing the general PS4 > X1; I'm disputing your methods for PS4 vs X1.

At the active CU count, the prototype 7850's 12 working CUs at 860 MHz are very close to X1's 12 working CUs at 853 MHz, i.e. similar wavefront buffer size, LDS size, LDS byte-per-cycle rates, L1 cache size, L1 cache byte-per-cycle rates, TMU load/store rates, etc.

32 ROPs don't operate in isolation, and ROP units are not very useful when they don't have memory bandwidth for writing results and reading data. You are ignorant of basic color fill rate operations, i.e. they need memory write performance. ROPs don't write binary data to a black hole; they need to fill binary data into some bucket, and that bucket is memory.

--

BF4: unconfirmed, but the PS4 is in a similar non-1080p boat.

COD Ghosts: unconfirmed, conflicting info. From http://www.gamespot.com/forums/system-wars-314159282/cod-on-xbox-one-will-run-at-1080p-60fps-30875337/#54 there is 1080p info for X1.

Titanfall: unconfirmed.

NBA2K: 1080p at ~60 fps. http://uk.ign.com/articles/2013/10/23/nba-2k14-runs-at-60fps-native-1080p-on-xbox-one-and-ps4?+main+twitter

http://www.gamepur.com/news/12532-titanfall-and-cod-ghosts-runs-720p-leaker-issue-clarification-also-comment-.html

First of all, you can't, I repeat, you can't talk about ignoring things when you yourself want to ignore a damn 660 GFLOPS advantage on the PS4.

The Xbox One GPU is 1.31 TF; with the 10% reservation it's 1.18 TF, period. You don't need 140 GB/s for 1.18 TF, period. So while you want to take into account the Xbox One's memory and bandwidth vs the full 7790, you want to completely ignore its disadvantages, like being 12 CUs at 853 MHz instead of 14 CUs at 1000 MHz, or 1.18 TF vs the 7790's 1.79 TF.

The Xbox One GPU is weak and handicapped by a reservation; you want to ignore that, and you can't, period.

The Xbox One doesn't have a 7850, period; you have no link, just your opinion, and there is no 14 CU Pitcairn on PC, period. It is Bonaire, and it is gimped.

Oh, and NBA2K14 running at 1080p 60 FPS on both versions means nothing; there are different quality settings that require different power, and you know for a fact that low quality is not as demanding as high quality.
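The TFLOPS figures traded in this exchange all follow from the standard GCN peak-throughput formula (a sketch; 64 shaders per CU and 2 ops per clock are standard GCN, while the CU counts, clocks, and 10% reservation are the figures cited in this thread):

```python
def tflops(cus, clock_ghz):
    # GCN peak single precision: CUs x 64 shaders x 2 ops/clock (FMA) x clock
    return cus * 64 * 2 * clock_ghz / 1000

xb1 = tflops(12, 0.853)      # ≈ 1.31 TFLOPS
xb1_usable = xb1 * 0.9       # ≈ 1.18 TFLOPS after the 10% system reservation
ps4 = tflops(18, 0.800)      # ≈ 1.84 TFLOPS
print(round(xb1, 2), round(xb1_usable, 2), round(ps4, 2))  # → 1.31 1.18 1.84
```

On these numbers the raw gap is about 0.53 TFLOPS, and about 0.66 TFLOPS if the 10% reservation is charged against the X1 only, which is where the "660 GFLOPS" figure in this thread comes from.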

#330 Edited by always_explicit (2737 posts) -

@tormentos: Yeah but the Xbox is better.

Resident evil developer said there is no difference. This is INDISPUTABLE FACT TORMENTOS.

#331 Posted by tormentos (17311 posts) -

@tormentos: Yeah but the Xbox is better.

Resident evil developer said there is no difference. This is INDISPUTABLE FACT TORMENTOS.

Yes, indisputable, until you see COD at 720p on Xbox One and 1080p on PS4... hahaha

#332 Edited by TitanExtremer (66 posts) -

@tormentos said:

@always_explicit said:

@tormentos: Yeah but the Xbox is better.

Resident evil developer said there is no difference. This is INDISPUTABLE FACT TORMENTOS.

Yes, indisputable, until you see COD at 720p on Xbox One and 1080p on PS4... hahaha

Let me tell you about my first experience with the PlayStation 4 real quick.

This weekend, we went to Berlin on a weekend holiday.

In Berlin they have this huge electronics superstore in the middle of town; it's called Saturn.

Saturn was one of the first stores in Germany to have the PlayStation 4 on display and available for play, so we went there one of the afternoons.

Unfortunately I could not try playing on the PS4, because it had gone BOOM on them after only two days: done, finished, dead.

I could, however, try holding the controllers, have a nice look at it, and take this nice picture.

I don't know about you, but I am not paying €400 for a paperweight, no matter how sexy it may look.

#333 Edited by not_wanted (1977 posts) -

@FoxbatAlpha: You fail hard. Since when was Halo 4 running at 1080p/60fps? Go away, you're drunk.

Thread should be renamed "The Real power of 720p on the X1" lol

Xbox One should be renamed to Xbox 720p. lol

#334 Edited by Tighaman (777 posts) -

Y'all keep talking about 1.3 TF for the GPU, but if you look at the Hot Chips diagram the GPU is divided into 3 sections, and that 1.3 TF is for GPGPU only; you have to understand what GTexels are to get the rest of the answer. And again, in the Eurogamer article the MS DEV SAID THEY WERE USING 12 CUs and have TWO MORE to use later AND TWO FOR FAILURE = 16 CUs. Stop going by the old way and start looking toward the future: this is not a 7790, a 7770, or a prototype 7850, but something newer than all three.

#335 Posted by -Unreal- (24538 posts) -

I thought it was common knowledge that when computer hardware is encased in a chassis with "Playstation" or "Xbox" printed on it, it has unlocked, untapped potential and can do more than the hardware inside is mathematically capable of doing.

#336 Edited by tormentos (17311 posts) -

Let me tell you about my first experience with the PlayStation 4 real quick.

This weekend, we went to Berlin on a weekend holiday.

In Berlin they have this huge electronics superstore in the middle of town; it's called Saturn.

Saturn was one of the first stores in Germany to have the PlayStation 4 on display and available for play, so we went there one of the afternoons.

Unfortunately I could not try playing on the PS4, because it had gone BOOM on them after only two days: done, finished, dead.

I could, however, try holding the controllers, have a nice look at it, and take this nice picture.

I don't know about you, but I am not paying €400 for a paperweight, no matter how sexy it may look.

Yeah we all believe you...lol

@Tighaman said:

Y'all keep talking about 1.3 TF for the GPU, but if you look at the Hot Chips diagram the GPU is divided into 3 sections, and that 1.3 TF is for GPGPU only; you have to understand what GTexels are to get the rest of the answer. And again, in the Eurogamer article the MS DEV SAID THEY WERE USING 12 CUs and have TWO MORE to use later AND TWO FOR FAILURE = 16 CUs. Stop going by the old way and start looking toward the future: this is not a 7790, a 7770, or a prototype 7850, but something newer than all three.

Hahaha..........

Every one of the Xbox One dev kits actually has 14 CUs on the silicon. Two of those CUs are reserved for redundancy in manufacturing.

http://www.eurogamer.net/articles/digitalfoundry-the-complete-xbox-one-interview

Liar, liar, your pants are on fire...

It's 14 CUs with 2 disabled for redundancy, for a total of 1.31 TF, and when you take out the 10% GPU reservation it's only 1.18 TF, even less.

You're not even trying.

#337 Edited by shawn30 (4364 posts) -

@tormentos: Yeah but the Xbox is better.

Resident evil developer said there is no difference. This is INDISPUTABLE FACT TORMENTOS.

LMAO!! awesome picture, lol.

#338 Edited by Szminsky (1433 posts) -

Like a flock of children...

#339 Edited by ronvalencia (15109 posts) -

@tormentos:

First of all, you can't, I repeat, you can't talk about ignoring things when you yourself want to ignore a damn 660 GFLOPS advantage on the PS4.

That's relatively small when you consider the ex-flagship 7970 GE's TFLOPS (4.3 TFLOPS) against the 7870 GE's (~2.5 TFLOPS).

The Xbox One GPU is 1.31 TF; with the 10% reservation it's 1.18 TF, period. You don't need 140 GB/s for 1.18 TF, period.

I have debunked this via the 12 CU x L1 cache 64 bytes-per-cycle rate, and that's NOT including the 12 CU x LDS 128 bytes-per-cycle rate. It's confirmed that you can't do proper math, period.

There are reasons why the PC 7790 can't fully use faster memory (hint: the L2 cache and 128-bit memory controller design), and the prototype 7850 doesn't have the 7790's L2 and memory controller limitations.
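The byte-per-cycle rates cited above scale to aggregate on-chip bandwidth like this (a sketch; the 64 B/cycle L1 and 128 B/cycle LDS per-CU rates are the figures asserted in these posts, taken here as assumptions):

```python
cus, clock_hz = 12, 853e6     # X1's active CU count and clock, per the thread
l1_bw  = cus * 64  * clock_hz / 1e9   # L1: 64 bytes/cycle per CU, aggregate GB/s
lds_bw = cus * 128 * clock_hz / 1e9   # LDS: 128 bytes/cycle per CU, aggregate GB/s
print(round(l1_bw, 1), round(lds_bw, 1))  # → 655.1 1310.2
```

Aggregate on-chip rates in the hundreds of GB/s dwarf any 68-204 GB/s external pool, which is the sense in which external bandwidth alone is argued not to cap compute throughput.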

#340 Edited by Shewgenja (8512 posts) -

@ronvalencia said:

@tormentos:

First of all, you can't, I repeat, you can't talk about ignoring things when you yourself want to ignore a damn 660 GFLOPS advantage on the PS4.

That's relatively small when you consider the ex-flagship 7970 GE's TFLOPS (4.3 TFLOPS) against the 7870 GE's (~2.5 TFLOPS).

The Xbox One GPU is 1.31 TF; with the 10% reservation it's 1.18 TF, period. You don't need 140 GB/s for 1.18 TF, period.

I have debunked this via the 12 CU x L1 cache 64 bytes-per-cycle rate, and that's NOT including the 12 CU x LDS 128 bytes-per-cycle rate. It's confirmed that you can't do proper math, period.

There are reasons why the PC 7790 can't fully use faster memory (hint: the L2 cache and 128-bit memory controller design), and the prototype 7850 doesn't have the 7790's L2 and memory controller limitations.

It might be small when comparing PCs that are designed to push multiple 4K monitors, but when comparing consoles that are pushing a single 1080p monitor for gaming purposes that's a whole other bag. The systems need to be compared in relation to each other, not some PC tech. The systems do much different things and have different technical demands.

also, no one wants a $600 console.

#341 Edited by ronvalencia (15109 posts) -

@Shewgenja said:

@ronvalencia said:

@tormentos:

First of all, you can't, I repeat, you can't talk about ignoring things when you yourself want to ignore a damn 660 GFLOPS advantage on the PS4.

That's relatively small when you consider the ex-flagship 7970 GE's TFLOPS (4.3 TFLOPS) against the 7870 GE's (~2.5 TFLOPS).

The Xbox One GPU is 1.31 TF; with the 10% reservation it's 1.18 TF, period. You don't need 140 GB/s for 1.18 TF, period.

I have debunked this via the 12 CU x L1 cache 64 bytes-per-cycle rate, and that's NOT including the 12 CU x LDS 128 bytes-per-cycle rate. It's confirmed that you can't do proper math, period.

There are reasons why the PC 7790 can't fully use faster memory (hint: the L2 cache and 128-bit memory controller design), and the prototype 7850 doesn't have the 7790's L2 and memory controller limitations.

It might be small when comparing PCs that are designed to push multiple 4K monitors, but when comparing consoles that are pushing a single 1080p monitor for gaming purposes that's a whole other bag. The systems need to be compared in relation to each other, not some PC tech. The systems do much different things and have different technical demands.

also, no one wants a $600 console.

AMD Mantle brings "console like API" to the PC.

------

Building a better AMD PC box.

PCPartPicker part list: http://pcpartpicker.com/p/1TL88

CPU: AMD Athlon II X4 740 3.2GHz Quad-Core Processor ($67.00 @ Amazon)

Motherboard: MSI FM2-A55M-E33 Micro ATX FM2 Motherboard ($34.99 @ Newegg)

Memory: GeIL EVO Veloce Series 8GB (2 x 4GB) DDR3-2133 Memory ($61.99 @ Newegg)

Storage: Hitachi 500GB 3.5" 7200RPM Internal Hard Drive ($49.99 @ NCIX US)

Video Card: PowerColor Radeon HD 7870 XT 2GB Video Card ($139.99 @ Newegg)

Case: Sentey CS1-1420 PLUS ATX Mid Tower Case ($19.99 @ Newegg)

Power Supply: CoolMax 500W 80 PLUS Certified ATX12V / EPS12V Power Supply ($24.99 @ Newegg)

Optical Drive: Lite-On iHAS124-04 DVD/CD Writer ($14.98 @ Outlet PC)

Operating System: Microsoft Windows 7 Home Premium SP1 (OEM) (64-bit) ($84.99 @ NCIX US)

Total: $498.91

(Prices include shipping, taxes, and discounts when available.)

(Generated by PCPartPicker 2013-10-28 02:24 EDT-0400)

-----------

Estimated Wattage: 340 watts. The Greens/tree huggers are running both Sony and MS.

PS; I can get cheaper Windows license from other sites.

Radeon HD 7870 XT is a 2.995 TFLOPS GPU.
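That figure can be checked with the same peak-FLOPS arithmetic (a sketch; the 1536 stream processors and 975 MHz boost clock are the commonly listed Tahiti LE / 7870 XT specs, assumed here):

```python
shaders, boost_ghz = 1536, 0.975         # assumed Tahiti LE (7870 XT) specs
tflops = shaders * 2 * boost_ghz / 1000  # 2 ops per clock (FMA)
print(round(tflops, 3))  # → 2.995
```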

#342 Posted by Multipass35 (295 posts) -