Xbox One Games' Performance, Reduces CPU Utilization By 50%


#101 Posted by jsmoke03 (12986 posts) -

@foxhound_fox said:

Would be really funny if MS could "unlock teh hidden powah!" when Sony couldn't last gen despite many claims otherwise.

they did with their first party games. naughty dog, sony santa monica, guerilla and polyphony had amazing looking games. multi plats on the other hand were meh

#102 Posted by foxhound_fox (89427 posts) -

@jsmoke03 said:

@foxhound_fox said:

Would be really funny if MS could "unlock teh hidden powah!" when Sony couldn't last gen despite many claims otherwise.

they did with their first party games. naughty dog, sony santa monica, guerilla and polyphony had amazing looking games. multi plats on the other hand were meh

Uh... they weren't dual 1080p @ 120 FPS in 4D.

They were nice looking games, certainly, but they weren't leagues ahead of the 360, which is what Sony staffers were trying to pull out of their asses.

#103 Posted by stereointegrity (10742 posts) -

@foxhound_fox said:

Would be really funny if MS could "unlock teh hidden powah!" when Sony couldn't last gen despite many claims otherwise.

sense...this makes none

#104 Edited by LanceX2 (305 posts) -

It's not about secret sauce or hidden power. DX12 will enhance the X1 considerably, but let's be honest, it won't make it do what its hardware can't do.

#105 Posted by kuu2 (7518 posts) -

@daious said:

@kuu2 said:

@Wasdie: knows more than Phil Spencer now............... Must be those mod super powers he possesses. Tell us more oh enlightened one.

Consoles already have low-level access + APIs...

It's not going to be a huge difference. The main impact of DX12 will be seen on PC, which doesn't have this luxury except for Mantle (which is an AMD solution).

OpenGL is going to have huge improvements too. Is this going to make a huge difference on the PS4? Likely not. Will it have a bigger impact on PC performance? Yes.

"NVIDIA, AMD, Intel Explain How OpenGL Can Unlock 7-15x Performance Gains" via Nvidia article

Do you think the PS4 is going to get 7-15x performance gains? Because if you believe DX12 is going to bring huge gains on the Xbox One, then you must believe the same for the PS4 with OpenGL.

Why is this hard to understand?

Man, all kinds of Jr. Hardware/Software engineers on here. Can you quantify what this means: "It's not going to be a huge difference"? It sounds very vague and just another attempt to downplay things you have no idea about. Hard numbers please, and why.

#106 Edited by hehe101 (783 posts) -

This is great news. Looks like all the big names Microsoft has planned will also take advantage of this.

#107 Posted by ronvalencia (15176 posts) -

@scatteh316 said:

@Guy_Brohski said:

@scatteh316 said:

@kuu2 said:

Looks like DX12 will be a game changer for The One............can't wait for devs to take advantage.

No it won't... DX12 will hardly be used by developers, as they'll have their own tools, which will more than likely give them lower-level access to the hardware than DX12 does.

DX12 was built with the PC in mind and the Xbone as an afterthought...

^Can you please give me a link where I can verify that? Or else you're just, you know, a liar.

It's called common sense and intelligence, which judging by your post you have neither.

Or are you one of these blind fanboy types who can't sleep at night knowing they have an inferior product compared to everyone else?

Xbone just has weak, shitty hardware, and no API or new DX is going to change that.

No, the new DX will close the gap with the PS4. The PS4 already has one of the best graphics APIs, and it's similar to Mantle.

#108 Posted by blackace (21081 posts) -

@Mr-Kutaragi said:

Dat secret sauce, dat 12GB ram, dat discrete gpu, dat cloud, dat kinect, dat esram, dat dx12.

dat ignorance. lol!! dat lack of intelligence.

#109 Posted by ReadingRainbow4 (14686 posts) -

@stereointegrity said:

@foxhound_fox said:

Would be really funny if MS could "unlock teh hidden powah!" when Sony couldn't last gen despite many claims otherwise.

sense...this makes none

yup.

If he was actually expecting games at 1080p, 200fps I question his mental state.

#110 Posted by stereointegrity (10742 posts) -

@blackace said:

@Mr-Kutaragi said:

Dat secret sauce, dat 12GB ram, dat discrete gpu, dat cloud, dat kinect, dat esram, dat dx12.

dat ignorance. lol!! dat lack of intelligence.

dat love of misterx

#111 Posted by blackace (21081 posts) -

@Krelian-co said:

@blackace said:

@santoron said:

@blackace: that's a heaping helping of denial. Rewriting what companies mean when they give a "Holiday" release window? Hilarious.

We'll see how hilarious it is come this holiday, when games are released with DX12 support already in them. lol!!

rofl they are just developing directx12, they said it wouldn't be ready until the end of 2015 and you think they will release dx12 games anytime soon? lems taking delusion to a whole new level.


Keep laughing. lol!!

#112 Posted by blackace (21081 posts) -

@jsmoke03 said:

@Opus_Rea-333 said:

Instead of listening to PS4 drones with some PC hardware knowledge inherited from Google saying the impact will be minimal,

listen to MSoft themselves. Phil Spencer knows the Xbox One hardware architecture better than you do.

buh buh the bonaire.

you want me to listen to someone who works for a company that's trying to sell you on the product? wow, i hope you don't watch commercials

So we shouldn't listen to Sony, Nintendo, Apple or any other company trying to sell us their products, right? We should just listen to you? LOL!! Do us all a favor and STFU. Just delete your account and step away from the keyboard. Wow... I hope you never make another post.

#113 Posted by foxhound_fox (89427 posts) -

@ReadingRainbow4 said:

@stereointegrity said:

@foxhound_fox said:

Would be really funny if MS could "unlock teh hidden powah!" when Sony couldn't last gen despite many claims otherwise.

sense...this makes none

yup.

If he was actually expecting games at 1080p, 200fps I question his mental state.

Other than the 4D, that's all possible on PC.

#114 Edited by ReadingRainbow4 (14686 posts) -

@foxhound_fox said:

@ReadingRainbow4 said:

@stereointegrity said:

@foxhound_fox said:

Would be really funny if MS could "unlock teh hidden powah!" when Sony couldn't last gen despite many claims otherwise.

sense...this makes none

yup.

If he was actually expecting games at 1080p, 200fps I question his mental state.

Other than the 4D, that's all possible on PC.

But was it really possible on 2006 console hardware? Really? Also, it raises the question of why you would even want 200fps.

Sorry, but your comments just make you seem bitter.

#115 Posted by foxhound_fox (89427 posts) -

@ReadingRainbow4 said:

But was it really possible on 2006 console hardware? Really? Also, it raises the question of why you would even want 200fps.

120 fps, not 200. The whole idea, I believe, was that they were shooting for dual displays or, as was likely the case, dual 1080p streams feeding their 3D technology. They had likely been developing it for a while already at that point, and dual 1080p 3D at 120fps was their goal for the hardware.

They just never met it, and tried so hard to convince cows and the rest of the gaming industry that they should "just wait" and it would surely happen.

#116 Posted by blackace (21081 posts) -

@stereointegrity said:

@blackace said:

@Mr-Kutaragi said:

Dat secret sauce, dat 12GB ram, dat discrete gpu, dat cloud, dat kinect, dat esram, dat dx12.

dat ignorance. lol!! dat lack of intelligence.

dat love of misterx

I don't think anyone loves misterx. lol!! We all know that Microsoft is hiding something. How do we know? Because there are developers who have spoken to Microsoft and can't tell us what they know.

#117 Posted by stereointegrity (10742 posts) -

@blackace said:

@stereointegrity said:

@blackace said:

@Mr-Kutaragi said:

Dat secret sauce, dat 12GB ram, dat discrete gpu, dat cloud, dat kinect, dat esram, dat dx12.

dat ignorance. lol!! dat lack of intelligence.

dat love of misterx

I don't think anyone loves misterx. lol!! We all know that Microsoft is hiding something. How do we know? Because there are developers who have spoken to Microsoft and can't tell us what they know.

what developers?

#118 Posted by magicalclick (23033 posts) -

Thanks for the summary. It is informative. Can't wait to see how it improves the Xbox One.

#119 Posted by scatteh316 (4980 posts) -

@ronvalencia said:

@scatteh316 said:

@Guy_Brohski said:

@scatteh316 said:

@kuu2 said:

Looks like DX12 will be a game changer for The One............can't wait for devs to take advantage.

No it won't... DX12 will hardly be used by developers, as they'll have their own tools, which will more than likely give them lower-level access to the hardware than DX12 does.

DX12 was built with the PC in mind and the Xbone as an afterthought...

^Can you please give me a link where I can verify that? Or else you're just, you know, a liar.

It's called common sense and intelligence, which judging by your post you have neither.

Or are you one of these blind fanboy types who can't sleep at night knowing they have an inferior product compared to everyone else?

Xbone just has weak, shitty hardware, and no API or new DX is going to change that.

No, the new DX will close the gap with the PS4. The PS4 already has one of the best graphics APIs, and it's similar to Mantle.

Hahahahaha...... Clueless fanboy alert..

#120 Edited by Daious (1692 posts) -

@kuu2 said:

@daious said:

@kuu2 said:

@Wasdie: knows more than Phil Spencer now............... Must be those mod super powers he possesses. Tell us more oh enlightened one.

Consoles already have low-level access + APIs...

It's not going to be a huge difference. The main impact of DX12 will be seen on PC, which doesn't have this luxury except for Mantle (which is an AMD solution).

OpenGL is going to have huge improvements too. Is this going to make a huge difference on the PS4? Likely not. Will it have a bigger impact on PC performance? Yes.

"NVIDIA, AMD, Intel Explain How OpenGL Can Unlock 7-15x Performance Gains" via Nvidia article

Do you think the PS4 is going to get 7-15x performance gains? Because if you believe DX12 is going to bring huge gains on the Xbox One, then you must believe the same for the PS4 with OpenGL.

Why is this hard to understand?

Man, all kinds of Jr. Hardware/Software engineers on here. Can you quantify what this means: "It's not going to be a huge difference"? It sounds very vague and just another attempt to downplay things you have no idea about. Hard numbers please, and why.

Seriously... If you think the new DX is going to make a huge difference on the Xbox One, then you must think the newly updated OpenGL will do the same thing for the PS4. Neither will. I know this is hard for you to cope with. And you're talking about vague? Look at Spencer's Twitter post. He just stated there will be an impact, which is obviously true, but nothing else. The biggest change you are going to see from this is likely that more Xbox One games will be 1080p, which will at least limit the resolution disparity between the two consoles. It will make a way bigger difference on PC.

Consoles already have a low-level API. They do not have the driver overhead that PCs do. That is a 100% fact; there is no disputing it. Is this really hard to acknowledge? If consoles didn't have low-level access, then AMD's Mantle would have been just as huge for consoles as it will be for PC (with AMD hardware). There is a reason Mantle does not exist on the PS4 and Xbox One even though they both run on AMD hardware.

Notice the source of the 50% reduced-usage figure: it was referring to PC, and to a benchmarking tool for PC users (one that does not exist for, and does not run on, consoles).
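To make the driver-overhead point concrete, here is a minimal toy sketch in plain C++ (not real Direct3D; the per-draw "validation" loop and all constants are made up for illustration) of why a thin API that validates work once, at command-buffer build time, burns far less CPU per frame than a thick API that re-validates every draw call:

```cpp
// Toy model of draw-call submission cost (plain C++, not real Direct3D).
// A "thick" API validates and re-encodes state on every draw call, while a
// "thin" API replays a command buffer that was validated once up front.
#include <chrono>
#include <cstdio>
#include <vector>

struct Draw { int pipelineState; int resourceBindings; };

// Hypothetical per-draw work a thick driver performs (validation, hazard
// tracking, state translation); the loop count is arbitrary.
static long long thickSubmit(const std::vector<Draw>& draws) {
    long long checksum = 0;
    for (const Draw& d : draws)
        for (int i = 0; i < 400; ++i)  // stand-in for per-draw driver work
            checksum += (d.pipelineState ^ d.resourceBindings) + i;
    return checksum;
}

// Thin path: validation happened at command-buffer build time, so
// submission is little more than walking pre-encoded commands.
static long long thinSubmit(const std::vector<Draw>& draws) {
    long long checksum = 0;
    for (const Draw& d : draws)
        checksum += d.pipelineState ^ d.resourceBindings;
    return checksum;
}

int main() {
    std::vector<Draw> frame(20000, Draw{7, 42});  // 20k draws in one frame
    using Clock = std::chrono::steady_clock;

    auto t0 = Clock::now();
    volatile long long a = thickSubmit(frame);
    auto t1 = Clock::now();
    volatile long long b = thinSubmit(frame);
    auto t2 = Clock::now();

    auto us = [](auto d) {
        return static_cast<long long>(
            std::chrono::duration_cast<std::chrono::microseconds>(d).count());
    };
    std::printf("thick submit: %lld us, thin submit: %lld us\n",
                us(t1 - t0), us(t2 - t1));
    (void)a; (void)b;
    return 0;
}
```

The gap between the two paths is the kind of CPU saving the thread title's 50% figure refers to; consoles already submit work close to the thin path, which is the point being made above.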

#121 Edited by ronvalencia (15176 posts) -

@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@Guy_Brohski said:

@scatteh316 said:

@kuu2 said:

Looks like DX12 will be a game changer for The One............can't wait for devs to take advantage.

No it won't... DX12 will hardly be used by developers, as they'll have their own tools, which will more than likely give them lower-level access to the hardware than DX12 does.

DX12 was built with the PC in mind and the Xbone as an afterthought...

^Can you please give me a link where I can verify that? Or else you're just, you know, a liar.

It's called common sense and intelligence, which judging by your post you have neither.

Or are you one of these blind fanboy types who can't sleep at night knowing they have an inferior product compared to everyone else?

Xbone just has weak, shitty hardware, and no API or new DX is going to change that.

No, the new DX will close the gap with the PS4. The PS4 already has one of the best graphics APIs, and it's similar to Mantle.

Hahahahaha...... Clueless fanboy alert..

My statements are in line with those of AMD Gaming Evolved Mantle developer Rebellion. The clueless one is you.

Read http://gamingbolt.com/xbox-ones-esram-too-small-to-output-games-at-1080p-but-will-catch-up-to-ps4-rebellion-games

To clarify, my "closing the gap" POV doesn't automatically mean equaling or exceeding the PS4. At best (with tiling tricks as a memory-related workaround), the X1 runs like a prototype 7850 with 12 CUs/1.32 TFLOPS + 153.6 GB/s memory bandwidth, and that's still inferior to the retail 7850 and the R7-265 (the closest to the PS4).

R7-265 (1.89 TFLOPS + 179 GB/s memory bandwidth) > retail 7850 (1.76 TFLOPS + 153.6 GB/s memory bandwidth) > prototype 7850 (1.32 TFLOPS + 153.6 GB/s memory bandwidth).

----

Rebellion is an AMD Gaming Evolved Mantle developer. http://www.rebellion.co.uk/blog/2013/11/21/rebellion-throws-weight-behind-amd-mantle

Rebellion > you or any cows.

#122 Posted by scatteh316 (4980 posts) -

@ronvalencia said:

@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@Guy_Brohski said:

@scatteh316 said:

@kuu2 said:

Looks like DX12 will be a game changer for The One............can't wait for devs to take advantage.

No it won't... DX12 will hardly be used by developers, as they'll have their own tools, which will more than likely give them lower-level access to the hardware than DX12 does.

DX12 was built with the PC in mind and the Xbone as an afterthought...

^Can you please give me a link where I can verify that? Or else you're just, you know, a liar.

It's called common sense and intelligence, which judging by your post you have neither.

Or are you one of these blind fanboy types who can't sleep at night knowing they have an inferior product compared to everyone else?

Xbone just has weak, shitty hardware, and no API or new DX is going to change that.

No, the new DX will close the gap with the PS4. The PS4 already has one of the best graphics APIs, and it's similar to Mantle.

Hahahahaha...... Clueless fanboy alert..

My statements are in line with those of AMD Gaming Evolved Mantle developer Rebellion. The clueless one is you.

Read http://gamingbolt.com/xbox-ones-esram-too-small-to-output-games-at-1080p-but-will-catch-up-to-ps4-rebellion-games

To clarify, my "closing the gap" POV doesn't automatically mean equaling or exceeding the PS4. At best (with tiling tricks as a memory-related workaround), the X1 runs like a prototype 7850 with 12 CUs/1.32 TFLOPS + 153.6 GB/s memory bandwidth, and that's still inferior to the retail 7850 and the R7-265 (the closest to the PS4).

R7-265 (1.89 TFLOPS + 179 GB/s memory bandwidth) > retail 7850 (1.76 TFLOPS + 153.6 GB/s memory bandwidth) > prototype 7850 (1.32 TFLOPS + 153.6 GB/s memory bandwidth).

----

Rebellion is an AMD Gaming Evolved Mantle developer. http://www.rebellion.co.uk/blog/2013/11/21/rebellion-throws-weight-behind-amd-mantle

Rebellion > you or any cows.

Hahahahaha.... once again deluded...

Tiling will not solve anything; it caused such a ball ache on the 360 that hardly anyone bothered, not to mention that you get a performance drop from assets sitting in the overlapped part of a tile.

Xbone will NEVER perform close to PS4..... Just the sheer FACT that the PS4 has 50% more fill rate alone means the Xbone will never catch it.

And Rebellion are respected devs worth quoting now? Why? Because of what they said? lol

#123 Edited by ronvalencia (15176 posts) -
@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@Guy_Brohski said:

@scatteh316 said:

@kuu2 said:

Looks like DX12 will be a game changer for The One............can't wait for devs to take advantage.

No it won't... DX12 will hardly be used by developers, as they'll have their own tools, which will more than likely give them lower-level access to the hardware than DX12 does.

DX12 was built with the PC in mind and the Xbone as an afterthought...

^Can you please give me a link where I can verify that? Or else you're just, you know, a liar.

It's called common sense and intelligence, which judging by your post you have neither.

Or are you one of these blind fanboy types who can't sleep at night knowing they have an inferior product compared to everyone else?

Xbone just has weak, shitty hardware, and no API or new DX is going to change that.

No, the new DX will close the gap with the PS4. The PS4 already has one of the best graphics APIs, and it's similar to Mantle.

Hahahahaha...... Clueless fanboy alert..

My statements are in line with those of AMD Gaming Evolved Mantle developer Rebellion. The clueless one is you.

Read http://gamingbolt.com/xbox-ones-esram-too-small-to-output-games-at-1080p-but-will-catch-up-to-ps4-rebellion-games

To clarify, my "closing the gap" POV doesn't automatically mean equaling or exceeding the PS4. At best (with tiling tricks as a memory-related workaround), the X1 runs like a prototype 7850 with 12 CUs/1.32 TFLOPS + 153.6 GB/s memory bandwidth, and that's still inferior to the retail 7850 and the R7-265 (the closest to the PS4).

R7-265 (1.89 TFLOPS + 179 GB/s memory bandwidth) > retail 7850 (1.76 TFLOPS + 153.6 GB/s memory bandwidth) > prototype 7850 (1.32 TFLOPS + 153.6 GB/s memory bandwidth).

----

Rebellion is an AMD Gaming Evolved Mantle developer. http://www.rebellion.co.uk/blog/2013/11/21/rebellion-throws-weight-behind-amd-mantle

Rebellion > you or any cows.

Hahahahaha.... once again deluded...

Tiling will not solve anything; it caused such a ball ache on the 360 that hardly anyone bothered, not to mention that you get a performance drop from assets sitting in the overlapped part of a tile.

Xbone will NEVER perform close to PS4..... Just the sheer FACT that the PS4 has 50% more fill rate alone means the Xbone will never catch it.

And Rebellion are respected devs worth quoting now? Why? Because of what they said? lol

Define "close".

1. "Tiling tricks" will enable TMUs to fetch textures from the fastest memory pool i.e. treat 32MB ESRAM as texture cache.

2. "Tiling tricks" will enable ROPS to read and write on the fastest memory pool i.e. treat 32MB ESRAM read/write cache.

3. I have defined my "close" as prototype 7850 with 1.32 TFLOPS with 153.6 GB/s memory bandwidth against R7-265 with 1.89 TFLOPS with 179 GB/s memory bandwidth.

4. Both 7770 and 7790/R7-260X doesn't have the option to scale towards 153.6 GB/s memory bandwidth.

5. I have shown R9-290 at 1Ghz being reduce to near X1 levels with Tomb Raider 2013 and 69 GB/s video memory bandwidth. AMD's Catalyst driver 14.1 restores clock speed based down-clocking i.e. I can target any video memory bandwidth from 38.4 GB/s to 320 GB/s and run Mantle i.e. the "console like APIs".

6. Xbox 360's connection between the GPU and EDRAM is limited to 32 GB/s which is different for Xbox One i.e. 204 GB/s.

7. Xbox 360's GPU's ROPS can't directly access main GDDR3 memory.

8. Xbox 360's GPU doesn't have hardware AMD PRT/Tiled Resource.

9. Xbox 360's GPU has pixel format issues that limits certain effects and causes excess memory usage e.g. double HDR frame buffer due to lower HDR FP 10bit precision format. You don't understand the difference between Xbox 360 and Xbox One.
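As referenced in point 2, here is a minimal sketch of the tiling idea in plain C++. It models the 32MB fast pool with a software LRU tile map; the TileCache class, tile size and pass counts are hypothetical stand-ins, not Microsoft's actual XDK interface:

```cpp
// Toy model of the "tiling trick": a large render target is split into
// tiles, and only hot tiles live in a small fast pool (standing in for the
// 32MB ESRAM); everything else stays in the big slow pool (DDR3).
#include <cstdio>
#include <list>
#include <unordered_map>

constexpr int kTileBytes     = 64 * 1024;                       // 64KB tile
constexpr int kFastPoolTiles = (32 * 1024 * 1024) / kTileBytes; // "ESRAM"

class TileCache {
    std::list<int> lru_;  // front = most recently touched tile
    std::unordered_map<int, std::list<int>::iterator> where_;
public:
    long long hits = 0, misses = 0;
    void touch(int tile) {
        auto it = where_.find(tile);
        if (it != where_.end()) {                   // resident in fast pool
            ++hits;
            lru_.splice(lru_.begin(), lru_, it->second);
            return;
        }
        ++misses;                                   // fetched from DDR3
        if (static_cast<int>(lru_.size()) == kFastPoolTiles) {
            where_.erase(lru_.back());              // evict coldest tile
            lru_.pop_back();
        }
        lru_.push_front(tile);
        where_[tile] = lru_.begin();
    }
};

int main() {
    const int kTargetTiles = 2 * kFastPoolTiles;    // targets = 2x the pool
    TileCache naive, tiled;

    // Naive order: each full-screen pass sweeps every tile before the next
    // pass runs, so the working set never fits and the pool thrashes.
    for (int pass = 0; pass < 4; ++pass)
        for (int tile = 0; tile < kTargetTiles; ++tile)
            naive.touch(tile);

    // Tiled order: run all passes on one tile before moving on, so the
    // working set at any moment is a single tile.
    for (int tile = 0; tile < kTargetTiles; ++tile)
        for (int pass = 0; pass < 4; ++pass)
            tiled.touch(tile);

    std::printf("naive: %lld hits, %lld misses\n", naive.hits, naive.misses);
    std::printf("tiled: %lld hits, %lld misses\n", tiled.hits, tiled.misses);
    return 0;
}
```

The naive sweep misses every time because the working set is twice the pool; the tiled order turns three quarters of the touches into fast-pool hits, which is exactly the reuse the "tiling tricks" are after.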

#124 Posted by scatteh316 (4980 posts) -

@ronvalencia said:
@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@Guy_Brohski said:

@scatteh316 said:

@kuu2 said:

Looks like DX12 will be a game changer for The One............can't wait for devs to take advantage.

No it won't... DX12 will hardly be used by developers, as they'll have their own tools, which will more than likely give them lower-level access to the hardware than DX12 does.

DX12 was built with the PC in mind and the Xbone as an afterthought...

^Can you please give me a link where I can verify that? Or else you're just, you know, a liar.

It's called common sense and intelligence, which judging by your post you have neither.

Or are you one of these blind fanboy types who can't sleep at night knowing they have an inferior product compared to everyone else?

Xbone just has weak, shitty hardware, and no API or new DX is going to change that.

No, the new DX will close the gap with the PS4. The PS4 already has one of the best graphics APIs, and it's similar to Mantle.

Hahahahaha...... Clueless fanboy alert..

My statements are in line with those of AMD Gaming Evolved Mantle developer Rebellion. The clueless one is you.

Read http://gamingbolt.com/xbox-ones-esram-too-small-to-output-games-at-1080p-but-will-catch-up-to-ps4-rebellion-games

To clarify, my "closing the gap" POV doesn't automatically mean equaling or exceeding the PS4. At best (with tiling tricks as a memory-related workaround), the X1 runs like a prototype 7850 with 12 CUs/1.32 TFLOPS + 153.6 GB/s memory bandwidth, and that's still inferior to the retail 7850 and the R7-265 (the closest to the PS4).

R7-265 (1.89 TFLOPS + 179 GB/s memory bandwidth) > retail 7850 (1.76 TFLOPS + 153.6 GB/s memory bandwidth) > prototype 7850 (1.32 TFLOPS + 153.6 GB/s memory bandwidth).

----

Rebellion is an AMD Gaming Evolved Mantle developer. http://www.rebellion.co.uk/blog/2013/11/21/rebellion-throws-weight-behind-amd-mantle

Rebellion > you or any cows.

Hahahahaha.... once again deluded...

Tiling will not solve anything; it caused such a ball ache on the 360 that hardly anyone bothered, not to mention that you get a performance drop from assets sitting in the overlapped part of a tile.

Xbone will NEVER perform close to PS4..... Just the sheer FACT that the PS4 has 50% more fill rate alone means the Xbone will never catch it.

And Rebellion are respected devs worth quoting now? Why? Because of what they said? lol

Define "close".

1. "Tiling tricks" will enable TMUs to fetch textures from the fastest memory pool i.e. treat 32MB ESRAM as texture cache.

2. "Tiling tricks" will enable ROPS to read and write on the fastest memory pool i.e. treat 32MB ESRAM read/write cache.

3. I have defined my "close" as prototype 7850 with 1.32 TFLOPS with 153.6 GB/s memory bandwidth against R7-265 with 1.89 TFLOPS with 179 GB/s memory bandwidth.

4. Both 7770 and 7790/R7-260X doesn't have the option to scale towards 153.6 GB/s memory bandwidth.

5. I have shown R9-290 at 1Ghz being reduce to near X1 levels with Tomb Raider 2013 and 69 GB/s video memory bandwidth. AMD's Catalyst driver 14.1 restores clock speed based down-clocking i.e. I can target any video memory bandwidth from 38.4 GB/s to 320 GB/s and run Mantle i.e. the "console like APIs".

6. Xbox 360's connection between the GPU and EDRAM is limited to 32 GB/s which is different for Xbox One i.e. 204 GB/s.

7. Xbox 360's GPU's ROPS can't directly access main GDDR3 memory.

8. Xbox 360's GPU doesn't have hardware AMD PRT/Tiled Resource.

9. Xbox 360's GPU has pixel format issues that limits certain effects and causes excess memory usage e.g. double HDR frame buffer due to lower HDR FP 10bit precision format. You don't understand the difference between Xbox 360 and Xbox One.

1. It still has 50% fewer TMUs than the PS4, and even if it manages 100% use of them it'll STILL never be close to the PS4.

2. ROPs can read and write wherever they want; the Xbone STILL has a lot less fillrate than the PS4, and because of this the Xbone will ALWAYS struggle with 1080p games when compared to the PS4.

3. Stop using prototype crap as examples.

4. No they don't, but it's all dedicated and doesn't have to be shared with a CPU, unlike the consoles' GPUs.

5. And?

6. Xbone has more pixels and shader units to feed, so it needs more.

7. They didn't need to.

8. Megatexture.

9. Again, and?

Microsoft can release all the tweaks they want; it STILL will not change the FACT that the PS4 has 50% more TMU performance, 50% more shader performance and 100% more fillrate.
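For what it's worth, the paper pixel fill rates behind the "100% more fillrate" claim work out as follows (ROP count times core clock; this ignores the memory-bandwidth limits ronvalencia argues about, and uses the commonly cited 800MHz/853MHz clocks):

```latex
\text{PS4: } 32~\text{ROPs} \times 0.800~\text{GHz} = 25.6~\text{Gpix/s}
\qquad
\text{X1: } 16~\text{ROPs} \times 0.853~\text{GHz} \approx 13.6~\text{Gpix/s}
\qquad
\frac{25.6}{13.6} \approx 1.88
```

So on paper the PS4's advantage is roughly 88%, not a flat 100%, once the X1's higher clock is counted; whether either figure is reachable depends on the bandwidth arguments above.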

#125 Edited by kuu2 (7518 posts) -

I'll just leave this here. Looks like The One still has some things to show.

http://www.techspot.com/news/56095-some-directx-12-features-will-require-next-gen-graphics-hardware.html

#126 Edited by ronvalencia (15176 posts) -

@scatteh316 said:

@ronvalencia said:
@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@Guy_Brohski said:

@scatteh316 said:

@kuu2 said:

Looks like DX12 will be a game changer for The One............can't wait for devs to take advantage.

No it won't... DX12 will hardly be used by developers, as they'll have their own tools, which will more than likely give them lower-level access to the hardware than DX12 does.

DX12 was built with the PC in mind and the Xbone as an afterthought...

^Can you please give me a link where I can verify that? Or else you're just, you know, a liar.

It's called common sense and intelligence, which judging by your post you have neither.

Or are you one of these blind fanboy types who can't sleep at night knowing they have an inferior product compared to everyone else?

Xbone just has weak, shitty hardware, and no API or new DX is going to change that.

No, the new DX will close the gap with the PS4. The PS4 already has one of the best graphics APIs, and it's similar to Mantle.

Hahahahaha...... Clueless fanboy alert..

My statements are in line with those of AMD Gaming Evolved Mantle developer Rebellion. The clueless one is you.

Read http://gamingbolt.com/xbox-ones-esram-too-small-to-output-games-at-1080p-but-will-catch-up-to-ps4-rebellion-games

To clarify, my "closing the gap" POV doesn't automatically mean equaling or exceeding the PS4. At best (with tiling tricks as a memory-related workaround), the X1 runs like a prototype 7850 with 12 CUs/1.32 TFLOPS + 153.6 GB/s memory bandwidth, and that's still inferior to the retail 7850 and the R7-265 (the closest to the PS4).

R7-265 (1.89 TFLOPS + 179 GB/s memory bandwidth) > retail 7850 (1.76 TFLOPS + 153.6 GB/s memory bandwidth) > prototype 7850 (1.32 TFLOPS + 153.6 GB/s memory bandwidth).

----

Rebellion is an AMD Gaming Evolved Mantle developer. http://www.rebellion.co.uk/blog/2013/11/21/rebellion-throws-weight-behind-amd-mantle

Rebellion > you or any cows.

Hahahahaha.... once again deluded...

Tiling will not solve anything; it caused such a ball ache on the 360 that hardly anyone bothered, not to mention that you get a performance drop from assets sitting in the overlapped part of a tile.

Xbone will NEVER perform close to PS4..... Just the sheer FACT that the PS4 has 50% more fill rate alone means the Xbone will never catch it.

And Rebellion are respected devs worth quoting now? Why? Because of what they said? lol

Define "close".

1. "Tiling tricks" will enable TMUs to fetch textures from the fastest memory pool i.e. treat 32MB ESRAM as texture cache.

2. "Tiling tricks" will enable ROPS to read and write on the fastest memory pool i.e. treat 32MB ESRAM read/write cache.

3. I have defined my "close" as prototype 7850 with 1.32 TFLOPS with 153.6 GB/s memory bandwidth against R7-265 with 1.89 TFLOPS with 179 GB/s memory bandwidth.

4. Both 7770 and 7790/R7-260X doesn't have the option to scale towards 153.6 GB/s memory bandwidth.

5. I have shown R9-290 at 1Ghz being reduce to near X1 levels with Tomb Raider 2013 and 69 GB/s video memory bandwidth. AMD's Catalyst driver 14.1 restores clock speed based down-clocking i.e. I can target any video memory bandwidth from 38.4 GB/s to 320 GB/s and run Mantle i.e. the "console like APIs".

6. Xbox 360's connection between the GPU and EDRAM is limited to 32 GB/s which is different for Xbox One i.e. 204 GB/s.

7. Xbox 360's GPU's ROPS can't directly access main GDDR3 memory.

8. Xbox 360's GPU doesn't have hardware AMD PRT/Tiled Resource.

9. Xbox 360's GPU has pixel format issues that limits certain effects and causes excess memory usage e.g. double HDR frame buffer due to lower HDR FP 10bit precision format. You don't understand the difference between Xbox 360 and Xbox One.

1. It still has 50% fewer TMUs than the PS4, and even if it manages 100% use of them it'll STILL never be close to the PS4.

2. ROPs can read and write wherever they want; the Xbone STILL has a lot less fillrate than the PS4, and because of this the Xbone will ALWAYS struggle with 1080p games when compared to the PS4.

3. Stop using prototype crap as examples.

4. No they don't, but it's all dedicated and doesn't have to be shared with a CPU, unlike the consoles' GPUs.

5. And?

6. Xbone has more pixels and shader units to feed, so it needs more.

7. They didn't need to.

8. Megatexture.

9. Again, and?

Microsoft can release all the tweaks they want; it STILL will not change the FACT that the PS4 has 50% more TMU performance, 50% more shader performance and 100% more fillrate.

1. TMUs are memory-bound. You again missed my Tomb Raider 2013 example with an R9-290 at 69 GB/s.

2. The Radeon HD 7950's superior 32-ROPS result says hi. All fill rates are bound by memory bandwidth. Adding another 16 ROPS at 853MHz on the X1 would be nearly pointless.

If the Radeon HD 7950 at 800MHz is the top 32-ROPS ~= 240 GB/s example, a 16-ROPS version would consume 120 GB/s. Normalizing 120 GB/s from 800MHz to 853MHz yields about 127.95 GB/s, which is far higher than the 7770/7790/R7-260X memory bandwidth limits (the arithmetic is worked out after this post).

Rebellion already explained why most X1 games are less than 1920x1080, i.e. the ESRAM is too small for 1920x1080 and it needs "tiling tricks". Rebellion > you.

3. Stop posting crap posts. The prototype 7850 shows what its 12 CUs (768 stream processors and 48 TMUs) are capable of with higher memory bandwidth. The prototype 7850's 32 ROPS will be bound by memory bandwidth.

My POV is backed by Rebellion's POV.

Rebellion > you.

4. Consoles don't have PC DirectX's sysmem1 (CPU) to sysmem2 (Direct3D) to VRAM memory-copy issues. Secondly, the Xbox One's CPU (an FCL-type block) can communicate directly with the GPU (via the host MMU).

5. An example of how an R9-290 was reduced to near-X1 Tomb Raider 2013 results by gimping memory bandwidth to 69 GB/s. Btw, my Mini-ITX box is limited by 16x PCI-E version 2 (an 8 GB/s-8 GB/s link) and dual-channel DDR3-1333 memory, i.e. an Intel H67 chipset limitation. Your point 4 is a joke, since 60 GB/s VRAM (i.e. 8 GB/s removed by 16x PCI-E version 2) didn't significantly change the frame rate result.

For the near-X1 Tomb Raider 2013 result, my R9-290 has 160 TMUs (from 40 CUs x 4) and it was gimped by 69 GB/s memory bandwidth. Again, TMUs are memory-bound. The same problem applies to my R9-290's 64 ROPS, which are also gimped by 69 GB/s memory bandwidth.

6. You can't use the Xbox 360's tiling example. Rebellion > you.

7. For the Xbox One, it's additional memory bandwidth for render targets.

8. id's Megatexture doesn't work in the Xbox 360's EDRAM, while AMD PRT/Tiled Resources work with ESRAM. id's Rage uses mega-texturing via GDDR3 and uses EDRAM just for render targets.

9. Refer to point 8. You don't know the difference between the Xbox 360 and the Xbox One. The Xbox One's GPU can happily do FP16 HDR with filtering, i.e. there's no need for the Xbox 360's double-frame-buffer HDR workaround.

The existence of my 7950's 32 ROPS kills your PS4's 32 ROPS paper spec.

Fact: Rebellion > you.
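As flagged in point 2, the normalization arithmetic written out (assuming fill-rate bandwidth demand scales linearly with ROP count and core clock, which is the assumption being made above):

```latex
240~\tfrac{\text{GB}}{\text{s}} \times \frac{16~\text{ROPs}}{32~\text{ROPs}} = 120~\tfrac{\text{GB}}{\text{s}}
\qquad
120~\tfrac{\text{GB}}{\text{s}} \times \frac{853~\text{MHz}}{800~\text{MHz}} \approx 127.95~\tfrac{\text{GB}}{\text{s}}
```

That 127.95 GB/s is indeed above the roughly 72-104 GB/s that the 128-bit 7770/7790/R7-260X boards offer.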

#127 Posted by scatteh316 (4980 posts) -

@ronvalencia said:

@scatteh316 said:

@ronvalencia said:
@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@Guy_Brohski said:

@scatteh316 said:

1. TMUs are memory-bound. You again missed my Tomb Raider 2013 example with an R9-290 at 69 GB/s.

2. The Radeon HD 7950's superior 32-ROPS result says hi. All fill rates are bound by memory bandwidth. Adding another 16 ROPS at 853MHz on the X1 would be nearly pointless.

If the Radeon HD 7950 at 800MHz is the top 32-ROPS ~= 240 GB/s example, a 16-ROPS version would consume 120 GB/s. Normalizing 120 GB/s from 800MHz to 853MHz yields about 127.95 GB/s, which is far higher than the 7770/7790/R7-260X memory bandwidth limits.

Rebellion already explained why most X1 games are less than 1920x1080, i.e. the ESRAM is too small for 1920x1080 and it needs "tiling tricks". Rebellion > you.

3. Stop posting crap posts. The prototype 7850 shows what its 12 CUs (768 stream processors and 48 TMUs) are capable of with higher memory bandwidth. The prototype 7850's 32 ROPS will be bound by memory bandwidth.

My POV is backed by Rebellion's POV.

Rebellion > you.

4. Consoles don't have PC DirectX's sysmem1 (CPU) to sysmem2 (Direct3D) to VRAM memory-copy issues. Secondly, the Xbox One's CPU (an FCL-type block) can communicate directly with the GPU (via the host MMU).

5. An example of how an R9-290 was reduced to near-X1 Tomb Raider 2013 results by gimping memory bandwidth to 69 GB/s. Btw, my Mini-ITX box is limited by 16x PCI-E version 2 (an 8 GB/s-8 GB/s link) and dual-channel DDR3-1333 memory, i.e. an Intel H67 chipset limitation. Your point 4 is a joke, since 60 GB/s VRAM (i.e. 8 GB/s removed by 16x PCI-E version 2) didn't significantly change the frame rate result.

For the near-X1 Tomb Raider 2013 result, my R9-290 has 160 TMUs (from 40 CUs x 4) and it was gimped by 69 GB/s memory bandwidth. Again, TMUs are memory-bound. The same problem applies to my R9-290's 64 ROPS, which are also gimped by 69 GB/s memory bandwidth.

6. You can't use the Xbox 360's tiling example. Rebellion > you.

7. For the Xbox One, it's additional memory bandwidth for render targets.

8. id's Megatexture doesn't work in the Xbox 360's EDRAM, while AMD PRT/Tiled Resources work with ESRAM. id's Rage uses mega-texturing via GDDR3 and uses EDRAM just for render targets.

9. Refer to point 8. You don't know the difference between the Xbox 360 and the Xbox One. The Xbox One's GPU can happily do FP16 HDR with filtering, i.e. there's no need for the Xbox 360's double-frame-buffer HDR workaround.

The existence of my 7950's 32 ROPS kills your PS4's 32 ROPS paper spec.

Fact: Rebellion > you.

Just stop... you can't compare your stupid experiments with PC hardware to the way console GPUs get utilized; get a god damn clue......

And the PS4's 32 ROPS >> the HD 7950's ROPS in the real world, as the PS4 will always have much, much higher utilization.

#128 Edited by NEWMAHAY (3760 posts) -

@kuu2 said:

I'll just leave this here. Looks like The One still has some things to show.

http://www.techspot.com/news/56095-some-directx-12-features-will-require-next-gen-graphics-hardware.html

That article provided absolutely no new information and only briefly mentioned the Xbox One.

Of course DX12 has more features to show off. It is a newly updated API.

#129 Posted by GrenadeLauncher (5529 posts) -

@NEWMAHAY said:

@kuu2 said:

I'll just leave this here. Looks like The One still has some things to show.

http://www.techspot.com/news/56095-some-directx-12-features-will-require-next-gen-graphics-hardware.html

That article provided absolutely no new information and only briefly mentioned the Xbox One.

Of course DX12 has more features to show off. It is a newly updated API.

Leave kuu2 alone, he needs to cling to his delusions to maintain his grip on sanity.

#130 Edited by LanceX2 (305 posts) -

@GrenadeLauncher said:

@NEWMAHAY said:

@kuu2 said:

I'll just leave this here. Looks like The One still has some things to show.

http://www.techspot.com/news/56095-some-directx-12-features-will-require-next-gen-graphics-hardware.html

That article provided absolutely no new information and only briefly mentioned the Xbox One.

Of course DX12 has more features to show off. It is a newly updated API.

Leave kuu2 alone, he needs to cling to his delusions to maintain his grip on sanity.

still doesn't change the fact it WILL help the X1, but we'll wait and see how much

#131 Posted by Spitfire-Six (545 posts) -

Question for the tech guys,

I'm an artist, so when you talk about shaders I'm thinking of Maya shaders and ray-trace materials; are these the same things? If so, do game systems come with pre-built shaders? I thought game engines determine which shaders are what. For instance, when modeling for Arma you are limited to the shaders they use in P3D, but we know PCs can handle many more. So how is it that the graphics card has built-in shaders?

#132 Posted by hoosier7 (3881 posts) -

@spitfire-six said:

Question for the tech guys,

I'm an artist, so when you talk about shaders I'm thinking of Maya shaders and ray-trace materials; are these the same things? If so, do game systems come with pre-built shaders? I thought game engines determine which shaders are what. For instance, when modeling for Arma you are limited to the shaders they use in P3D, but we know PCs can handle many more. So how is it that the graphics card has built-in shaders?

I doubt you'll get a decent answer here, my friend; give the B3yond forums a go, they're usually the go-to for the in-depth stuff.

#133 Posted by blackace (21081 posts) -

@kuu2 said:

I'll just leave this here. Looks like The One still has some things to show.

http://www.techspot.com/news/56095-some-directx-12-features-will-require-next-gen-graphics-hardware.html

We know they have a LOT to show at E3. Phil Spencer already said they have to move some of their announcements outside of their 90-minute conference. I'm expecting a lot of huge surprises. It won't be like last year's E3, that's for sure. Phil said there will be fewer execs on the stage and a lot more game footage shown.

http://www.gamepur.com/news/14073-spencer-microsoft-has-lot-show-e3-2014-conference-already-over-90-min-time-.html

#134 Posted by cdragon_88 (1218 posts) -

LOL @ folks who are trying to explain how consoles won't benefit hugely from DX12. You guys do realize many consolites have no idea what a "low-level API" is, correct? Or why DX is actually needed on the PC in the first place...

#135 Edited by GrenadeLauncher (5529 posts) -

@lancex2 said:

@GrenadeLauncher said:

@NEWMAHAY said:

@kuu2 said:

I'll just leave this here. Looks like The One still has some things to show.

http://www.techspot.com/news/56095-some-directx-12-features-will-require-next-gen-graphics-hardware.html

That article provided absolutely no new information and only briefly mentioned the Xbox One.

Of course DX12 has more features to show off. It is a newly updated API.

Leave kuu2 alone, he needs to cling to his delusions to maintain his grip on sanity.

still doesn't change the fact it WILL help the X1, but we'll wait and see how much

Oh, it will help. It just won't be as useful as certain people think it will be.

#136 Edited by Krelian-co (11677 posts) -

lems desperate for anything as long as they don't have to accept the sad fact that xbone is a lot weaker than ps4, BUT TEH CLOUD, BUT TEH ESRAM, BUT TEH TILED RESOURCES, BUT TEH HIDDEN CHIP, BUT TEH DIRECTX 12!!!!

not even a year out and xboners have already used every excuse in the book.

#137 Posted by Gargus (2147 posts) -

@FreedomFreeLife said:

@lostrib said:

isn't there already a DX12 thread

This is not DX12, its boosts and about Xbox One, not about PC

(good god the English, or lack thereof)

But you said DX12 in your first post over 12 times. So yeah, it's obviously all about DX12.

And no, it may boost it on paper or in theory, but I guarantee you right here and right now that when it does come out, no one on an Xbox will be able to tell with the naked eye whether it has DX12 or not, because the hardware is pitifully underpowered and was not created with DX12 in mind.

And yeah, there are a bunch of DX12 threads already about the PC that turned into Xbox One threads, and there are Xbox One threads because all of the Microsoft fanboys have been jerking off DX12 on Xbox One so hard that it's raw and starting to bleed. A simple search would have shown you that.

#138 Edited by ronvalencia (15176 posts) -
@scatteh316 said:
@ronvalencia said:

1. TMUs are memory-bound. You again missed my Tomb Raider 2013 example with an R9-290 at 69 GB/s.

2. The Radeon HD 7950's superior 32-ROPS result says hi. All fill rates are bound by memory bandwidth. Adding another 16 ROPS at 853MHz on the X1 would be nearly pointless.

If the Radeon HD 7950 at 800MHz is the top 32-ROPS ~= 240 GB/s example, a 16-ROPS version would consume 120 GB/s. Normalizing 120 GB/s from 800MHz to 853MHz yields about 127.95 GB/s, which is far higher than the 7770/7790/R7-260X memory bandwidth limits.

Rebellion already explained why most X1 games are less than 1920x1080, i.e. the ESRAM is too small for 1920x1080 and it needs "tiling tricks". Rebellion > you.

3. Stop posting crap posts. The prototype 7850 shows what its 12 CUs (768 stream processors and 48 TMUs) are capable of with higher memory bandwidth. The prototype 7850's 32 ROPS will be bound by memory bandwidth.

My POV is backed by Rebellion's POV.

Rebellion > you.

4. Consoles don't have PC DirectX's sysmem1 (CPU) to sysmem2 (Direct3D) to VRAM memory-copy issues. Secondly, the Xbox One's CPU (an FCL-type block) can communicate directly with the GPU (via the host MMU).

5. An example of how an R9-290 was reduced to near-X1 Tomb Raider 2013 results by gimping memory bandwidth to 69 GB/s. Btw, my Mini-ITX box is limited by 16x PCI-E version 2 (an 8 GB/s-8 GB/s link) and dual-channel DDR3-1333 memory, i.e. an Intel H67 chipset limitation. Your point 4 is a joke, since 60 GB/s VRAM (i.e. 8 GB/s removed by 16x PCI-E version 2) didn't significantly change the frame rate result.

For the near-X1 Tomb Raider 2013 result, my R9-290 has 160 TMUs (from 40 CUs x 4) and it was gimped by 69 GB/s memory bandwidth. Again, TMUs are memory-bound. The same problem applies to my R9-290's 64 ROPS, which are also gimped by 69 GB/s memory bandwidth.

6. You can't use the Xbox 360's tiling example. Rebellion > you.

7. For the Xbox One, it's additional memory bandwidth for render targets.

8. id's Megatexture doesn't work in the Xbox 360's EDRAM, while AMD PRT/Tiled Resources work with ESRAM. id's Rage uses mega-texturing via GDDR3 and uses EDRAM just for render targets.

9. Refer to point 8. You don't know the difference between the Xbox 360 and the Xbox One. The Xbox One's GPU can happily do FP16 HDR with filtering, i.e. there's no need for the Xbox 360's double-frame-buffer HDR workaround.

The existence of my 7950's 32 ROPS kills your PS4's 32 ROPS paper spec.

Fact: Rebellion > you.

Just stop... you can't compare your stupid experiments with PC hardware to the way console GPUs get utilized; get a god damn clue......

And the PS4's 32 ROPS >> the HD 7950's ROPS in the real world, as the PS4 will always have much, much higher utilization.

Bull$hit. The 7950 has better performance results than the PS4. How's your 1600x900p Battlefield 4?

The 7950 has AMD Mantle, i.e. the consoles don't have an API advantage over the PC.

7950 + Mantle ~= PS4 scaled to 28-CU GCN with 240 GB/s memory bandwidth.

#139 Posted by ronvalencia (15176 posts) -

@spitfire-six said:

Question for the tech guys,

I'm an artist, so when you talk about shaders I'm thinking of Maya shaders and ray-trace materials; are these the same things? If so, do game systems come with pre-built shaders? I thought game engines determine which shaders are what. For instance, when modeling for Arma you are limited to the shaders they use in P3D, but we know PCs can handle many more. So how is it that the graphics card has built-in shaders?

Modern GPUs have shader processors, aka stream processors, aka vector processors.

AMD GCN's stream processors are both integer and floating-point units (FPUs).

Stream processors run shader programs.
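To tie this back to the artist's question: a shader is just a small program the GPU runs once per vertex or per pixel on those stream processors, and the materials an artist assigns in Maya or P3D ultimately compile down to programs of that shape. A rough CPU-side analogy in plain C++ (no real graphics API; the Lambert-style lighting math is a made-up example):

```cpp
// CPU-side analogy of a pixel shader: one small function evaluated
// independently for every pixel. On a GPU, thousands of stream processors
// run such a function in parallel; here a plain loop stands in for them.
#include <cstdio>
#include <vector>

struct Pixel { float r, g, b; };

// The "shader program": a fake Lambert-style diffuse term. A Maya material
// ends up as a compiled program shaped much like this function.
static Pixel shadePixel(int x, int y) {
    float nx = (x % 16) / 15.0f;           // fake surface normal components
    float ny = (y % 16) / 15.0f;
    float lambert = nx * 0.6f + ny * 0.8f; // dot(normal, lightDir)
    if (lambert < 0.0f) lambert = 0.0f;
    return Pixel{lambert, lambert * 0.5f, 0.1f};
}

int main() {
    const int w = 320, h = 180;
    std::vector<Pixel> framebuffer(w * h);
    for (int y = 0; y < h; ++y)            // the GPU parallelizes this loop
        for (int x = 0; x < w; ++x)
            framebuffer[y * w + x] = shadePixel(x, y);
    std::printf("shaded %zu pixels; first pixel r=%.2f\n",
                framebuffer.size(), framebuffer[0].r);
    return 0;
}
```

The graphics card doesn't come with "built-in shaders" so much as with hardware that runs whatever shader programs the engine feeds it; the engine (or P3D's fixed set) decides which programs exist.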

#140 Posted by scatteh316 (4980 posts) -

@ronvalencia said:
@scatteh316 said:
@ronvalencia said:

1. TMUs are memory-bound. You again missed my Tomb Raider 2013 example with an R9-290 at 69 GB/s.

2. The Radeon HD 7950's superior 32-ROPS result says hi. All fill rates are bound by memory bandwidth. Adding another 16 ROPS at 853MHz on the X1 would be nearly pointless.

If the Radeon HD 7950 at 800MHz is the top 32-ROPS ~= 240 GB/s example, a 16-ROPS version would consume 120 GB/s. Normalizing 120 GB/s from 800MHz to 853MHz yields about 127.95 GB/s, which is far higher than the 7770/7790/R7-260X memory bandwidth limits.

Rebellion already explained why most X1 games are less than 1920x1080, i.e. the ESRAM is too small for 1920x1080 and it needs "tiling tricks". Rebellion > you.

3. Stop posting crap posts. The prototype 7850 shows what its 12 CUs (768 stream processors and 48 TMUs) are capable of with higher memory bandwidth. The prototype 7850's 32 ROPS will be bound by memory bandwidth.

My POV is backed by Rebellion's POV.

Rebellion > you.

4. Consoles don't have PC DirectX's sysmem1 (CPU) to sysmem2 (Direct3D) to VRAM memory-copy issues. Secondly, the Xbox One's CPU (an FCL-type block) can communicate directly with the GPU (via the host MMU).

5. An example of how an R9-290 was reduced to near-X1 Tomb Raider 2013 results by gimping memory bandwidth to 69 GB/s. Btw, my Mini-ITX box is limited by 16x PCI-E version 2 (an 8 GB/s-8 GB/s link) and dual-channel DDR3-1333 memory, i.e. an Intel H67 chipset limitation. Your point 4 is a joke, since 60 GB/s VRAM (i.e. 8 GB/s removed by 16x PCI-E version 2) didn't significantly change the frame rate result.

For the near-X1 Tomb Raider 2013 result, my R9-290 has 160 TMUs (from 40 CUs x 4) and it was gimped by 69 GB/s memory bandwidth. Again, TMUs are memory-bound. The same problem applies to my R9-290's 64 ROPS, which are also gimped by 69 GB/s memory bandwidth.

6. You can't use the Xbox 360's tiling example. Rebellion > you.

7. For the Xbox One, it's additional memory bandwidth for render targets.

8. id's Megatexture doesn't work in the Xbox 360's EDRAM, while AMD PRT/Tiled Resources work with ESRAM. id's Rage uses mega-texturing via GDDR3 and uses EDRAM just for render targets.

9. Refer to point 8. You don't know the difference between the Xbox 360 and the Xbox One. The Xbox One's GPU can happily do FP16 HDR with filtering, i.e. there's no need for the Xbox 360's double-frame-buffer HDR workaround.

The existence of my 7950's 32 ROPS kills your PS4's 32 ROPS paper spec.

Fact: Rebellion > you.

Just stop... you can't compare your stupid experiments with PC hardware to the way console GPUs get utilized; get a god damn clue......

And the PS4's 32 ROPS >> the HD 7950's ROPS in the real world, as the PS4 will always have much, much higher utilization.

Bull$hit. The 7950 has better performance results than the PS4. How's your 1600x900p Battlefield 4?

The 7950 has AMD Mantle, i.e. the consoles don't have an API advantage over the PC.

7950 + Mantle ~= PS4 scaled to 28-CU GCN with 240 GB/s memory bandwidth.

You are an idiot....... The 7950 might have better performance for now; we'll see how a 7950 stacks up in 12 months' time.

And my Battlefield 4 is just fine at 2560x1440, all Ultra settings, with 8xAA.

Mantle is a good leap forward for PC, but it's nowhere near as low-level and optimized as a console API, so don't compare the two.

7950 + Mantle only shows a few percent of improvement on the GPU side of things; most of Mantle's performance benefit comes from the CPU, but my 5GHz 3770K doesn't struggle anyway.

#141 Edited by Kinthalis (5323 posts) -

@scatteh316 said:

@ronvalencia said:
@scatteh316 said:
@ronvalencia said:

1. TMUs are memory-bound. You again missed my Tomb Raider 2013 example with an R9-290 at 69 GB/s.

2. The Radeon HD 7950's superior 32-ROPS result says hi. All fill rates are bound by memory bandwidth. Adding another 16 ROPS at 853MHz on the X1 would be nearly pointless.

If the Radeon HD 7950 at 800MHz is the top 32-ROPS ~= 240 GB/s example, a 16-ROPS version would consume 120 GB/s. Normalizing 120 GB/s from 800MHz to 853MHz yields about 127.95 GB/s, which is far higher than the 7770/7790/R7-260X memory bandwidth limits.

Rebellion already explained why most X1 games are less than 1920x1080, i.e. the ESRAM is too small for 1920x1080 and it needs "tiling tricks". Rebellion > you.

3. Stop posting crap posts. The prototype 7850 shows what its 12 CUs (768 stream processors and 48 TMUs) are capable of with higher memory bandwidth. The prototype 7850's 32 ROPS will be bound by memory bandwidth.

My POV is backed by Rebellion's POV.

Rebellion > you.

4. Consoles don't have PC DirectX's sysmem1 (CPU) to sysmem2 (Direct3D) to VRAM memory-copy issues. Secondly, the Xbox One's CPU (an FCL-type block) can communicate directly with the GPU (via the host MMU).

5. An example of how an R9-290 was reduced to near-X1 Tomb Raider 2013 results by gimping memory bandwidth to 69 GB/s. Btw, my Mini-ITX box is limited by 16x PCI-E version 2 (an 8 GB/s-8 GB/s link) and dual-channel DDR3-1333 memory, i.e. an Intel H67 chipset limitation. Your point 4 is a joke, since 60 GB/s VRAM (i.e. 8 GB/s removed by 16x PCI-E version 2) didn't significantly change the frame rate result.

For the near-X1 Tomb Raider 2013 result, my R9-290 has 160 TMUs (from 40 CUs x 4) and it was gimped by 69 GB/s memory bandwidth. Again, TMUs are memory-bound. The same problem applies to my R9-290's 64 ROPS, which are also gimped by 69 GB/s memory bandwidth.

6. You can't use the Xbox 360's tiling example. Rebellion > you.

7. For the Xbox One, it's additional memory bandwidth for render targets.

8. id's Megatexture doesn't work in the Xbox 360's EDRAM, while AMD PRT/Tiled Resources work with ESRAM. id's Rage uses mega-texturing via GDDR3 and uses EDRAM just for render targets.

9. Refer to point 8. You don't know the difference between the Xbox 360 and the Xbox One. The Xbox One's GPU can happily do FP16 HDR with filtering, i.e. there's no need for the Xbox 360's double-frame-buffer HDR workaround.

The existence of my 7950's 32 ROPS kills your PS4's 32 ROPS paper spec.

Fact: Rebellion > you.

Just stop... you can't compare your stupid experiments with PC hardware to the way console GPUs get utilized; get a god damn clue......

And the PS4's 32 ROPS >> the HD 7950's ROPS in the real world, as the PS4 will always have much, much higher utilization.

Bull$hit. The 7950 has better performance results than the PS4. How's your 1600x900p Battlefield 4?

The 7950 has AMD Mantle, i.e. the consoles don't have an API advantage over the PC.

7950 + Mantle ~= PS4 scaled to 28-CU GCN with 240 GB/s memory bandwidth.

You are an idiot....... The 7950 might have better performance for now; we'll see how a 7950 stacks up in 12 months' time.

And my Battlefield 4 is just fine at 2560x1440, all Ultra settings, with 8xAA.

Mantle is a good leap forward for PC, but it's nowhere near as low-level and optimized as a console API, so don't compare the two.

7950 + Mantle only shows a few percent of improvement on the GPU side of things; most of Mantle's performance benefit comes from the CPU, but my 5GHz 3770K doesn't struggle anyway.

Not sure what you're even trying to say here. There isn't much meaningful difference between, say, DX12 and console 3D programming. The feature set is fairly identical, though the methods used to end up with the same result are different; optimization just takes a different path on PC.

With DX12, or OpenGL with extensions, or Mantle (not the current implementation, though), you have very similar CPU utilization on PC and console, and almost the same GPU access, or what is effectively almost the same GPU feature set.

If you can optimize a game for console, you can do very similar optimizations on PC and achieve the same results.

However, since most gaming PCs have CPUs four times as powerful as a console's, and many have GPUs two to four times as powerful as well, those optimizations will go a longer way.

Bottom line: this generation, PC hardware will be fully taken advantage of, unlike last. That's going to mean PC will be the VASTLY superior choice for high-resolution gaming and VR gaming, and the best platform for games featuring very complex simulations, especially lots of AI and lots of physics. Think strategy games, games with lots of complex destruction, open-world games with lots of ambient AI.

Unfortunately I don't expect to see any of it until the end of 2015 or so, but the consoles will never catch up after that.

#142 Edited by ronvalencia (15176 posts) -

@scatteh316 said:

@ronvalencia said:
@scatteh316 said:
@ronvalencia said:

1. TMUs are memory-bound. You again missed my Tomb Raider 2013 example with an R9-290 at 69 GB/s.

2. The Radeon HD 7950's superior 32-ROPS result says hi. All fill rates are bound by memory bandwidth. Adding another 16 ROPS at 853MHz on the X1 would be nearly pointless.

If the Radeon HD 7950 at 800MHz is the top 32-ROPS ~= 240 GB/s example, a 16-ROPS version would consume 120 GB/s. Normalizing 120 GB/s from 800MHz to 853MHz yields about 127.95 GB/s, which is far higher than the 7770/7790/R7-260X memory bandwidth limits.

Rebellion already explained why most X1 games are less than 1920x1080, i.e. the ESRAM is too small for 1920x1080 and it needs "tiling tricks". Rebellion > you.

3. Stop posting crap posts. The prototype 7850 shows what its 12 CUs (768 stream processors and 48 TMUs) are capable of with higher memory bandwidth. The prototype 7850's 32 ROPS will be bound by memory bandwidth.

My POV is backed by Rebellion's POV.

Rebellion > you.

4. Consoles don't have PC DirectX's sysmem1 (CPU) to sysmem2 (Direct3D) to VRAM memory-copy issues. Secondly, the Xbox One's CPU (an FCL-type block) can communicate directly with the GPU (via the host MMU).

5. An example of how an R9-290 was reduced to near-X1 Tomb Raider 2013 results by gimping memory bandwidth to 69 GB/s. Btw, my Mini-ITX box is limited by 16x PCI-E version 2 (an 8 GB/s-8 GB/s link) and dual-channel DDR3-1333 memory, i.e. an Intel H67 chipset limitation. Your point 4 is a joke, since 60 GB/s VRAM (i.e. 8 GB/s removed by 16x PCI-E version 2) didn't significantly change the frame rate result.

For the near-X1 Tomb Raider 2013 result, my R9-290 has 160 TMUs (from 40 CUs x 4) and it was gimped by 69 GB/s memory bandwidth. Again, TMUs are memory-bound. The same problem applies to my R9-290's 64 ROPS, which are also gimped by 69 GB/s memory bandwidth.

6. You can't use the Xbox 360's tiling example. Rebellion > you.

7. For the Xbox One, it's additional memory bandwidth for render targets.

8. id's Megatexture doesn't work in the Xbox 360's EDRAM, while AMD PRT/Tiled Resources work with ESRAM. id's Rage uses mega-texturing via GDDR3 and uses EDRAM just for render targets.

9. Refer to point 8. You don't know the difference between the Xbox 360 and the Xbox One. The Xbox One's GPU can happily do FP16 HDR with filtering, i.e. there's no need for the Xbox 360's double-frame-buffer HDR workaround.

The existence of my 7950's 32 ROPS kills your PS4's 32 ROPS paper spec.

Fact: Rebellion > you.

Just stop... you can't compare your stupid experiments with PC hardware to the way console GPUs get utilized; get a god damn clue......

And the PS4's 32 ROPS >> the HD 7950's ROPS in the real world, as the PS4 will always have much, much higher utilization.

Bull$hit. The 7950 has better performance results than the PS4. How's your 1600x900p Battlefield 4?

The 7950 has AMD Mantle, i.e. the consoles don't have an API advantage over the PC.

7950 + Mantle ~= PS4 scaled to 28-CU GCN with 240 GB/s memory bandwidth.

You are an idiot....... The 7950 might have better performance for now; we'll see how a 7950 stacks up in 12 months' time.

And my Battlefield 4 is just fine at 2560x1440, all Ultra settings, with 8xAA.

Mantle is a good leap forward for PC, but it's nowhere near as low-level and optimized as a console API, so don't compare the two.

7950 + Mantle only shows a few percent of improvement on the GPU side of things; most of Mantle's performance benefit comes from the CPU, but my 5GHz 3770K doesn't struggle anyway.

You're a fool. The consoles don't have a 3770K CPU. The PS4's BF4 result shows R7-265-level performance.

#143 Posted by scatteh316 (4980 posts) -

@ronvalencia said:

@scatteh316 said:

@ronvalencia said:
@scatteh316 said:
@ronvalencia said:

1. TMUs are memory-bound. You again missed my Tomb Raider 2013 example with an R9-290 at 69 GB/s.

2. The Radeon HD 7950's superior 32-ROPS result says hi. All fill rates are bound by memory bandwidth. Adding another 16 ROPS at 853MHz on the X1 would be nearly pointless.

If the Radeon HD 7950 at 800MHz is the top 32-ROPS ~= 240 GB/s example, a 16-ROPS version would consume 120 GB/s. Normalizing 120 GB/s from 800MHz to 853MHz yields about 127.95 GB/s, which is far higher than the 7770/7790/R7-260X memory bandwidth limits.

Rebellion already explained why most X1 games are less than 1920x1080, i.e. the ESRAM is too small for 1920x1080 and it needs "tiling tricks". Rebellion > you.

3. Stop posting crap posts. The prototype 7850 shows what its 12 CUs (768 stream processors and 48 TMUs) are capable of with higher memory bandwidth. The prototype 7850's 32 ROPS will be bound by memory bandwidth.

My POV is backed by Rebellion's POV.

Rebellion > you.

4. Consoles don't have PC DirectX's sysmem1 (CPU) to sysmem2 (Direct3D) to VRAM memory-copy issues. Secondly, the Xbox One's CPU (an FCL-type block) can communicate directly with the GPU (via the host MMU).

5. An example of how an R9-290 was reduced to near-X1 Tomb Raider 2013 results by gimping memory bandwidth to 69 GB/s. Btw, my Mini-ITX box is limited by 16x PCI-E version 2 (an 8 GB/s-8 GB/s link) and dual-channel DDR3-1333 memory, i.e. an Intel H67 chipset limitation. Your point 4 is a joke, since 60 GB/s VRAM (i.e. 8 GB/s removed by 16x PCI-E version 2) didn't significantly change the frame rate result.

For the near-X1 Tomb Raider 2013 result, my R9-290 has 160 TMUs (from 40 CUs x 4) and it was gimped by 69 GB/s memory bandwidth. Again, TMUs are memory-bound. The same problem applies to my R9-290's 64 ROPS, which are also gimped by 69 GB/s memory bandwidth.

6. You can't use the Xbox 360's tiling example. Rebellion > you.

7. For the Xbox One, it's additional memory bandwidth for render targets.

8. id's Megatexture doesn't work in the Xbox 360's EDRAM, while AMD PRT/Tiled Resources work with ESRAM. id's Rage uses mega-texturing via GDDR3 and uses EDRAM just for render targets.

9. Refer to point 8. You don't know the difference between the Xbox 360 and the Xbox One. The Xbox One's GPU can happily do FP16 HDR with filtering, i.e. there's no need for the Xbox 360's double-frame-buffer HDR workaround.

The existence of my 7950's 32 ROPS kills your PS4's 32 ROPS paper spec.

Fact: Rebellion > you.

Just stop... you can't compare you stupid experiments with PC hardware to the way console GPU's get utilized, get a god damn clue......

And PS4 32 ROPS >> HD 7950 ROPS in the real world as PS4 will always have a much much higher utilization.

Bull$hit. 7950 has better performance results than PS4. How's your 1600x900p BattleField 4?

7950 has AMD MANTLE i.e. the consoles doesn't have the API advantages over the PC.

7950+Mantle ~= PS4 scaled to 28 CU GCN with 240GB/s memory bandwidth..

You are an idiot....... 7950 might have better performance for now, we'll see how a 7950 stacks up in 12 months time.

And my Battlefiled 4 is just fine at 2560x1440 all Ultra settings with 8xAA.

Mantle is a good leap forward for PC but it's no where near as low level and optimized as a console API so don't compare the 2.

7950+Mantle only shows a few percent of an improvement on the GPU side of things, most of Mantles performance benefit comes from the CPU, but my 5Ghz 3770k doesn't struggle anyway.

You're a fool. The consoles don't have a 3770K CPU. The PS4's BF4 result shows R7-265-level performance.

I'm a fool... lmao... would you care to point out where I said the consoles have a 3770K?

And an R7 265 can actually kick out more frames than a PS4...

#144 Edited by bforrester420 (1711 posts) -

@RR360DD said:

This is why I chose the XOne. It's the console that'll keep on giving.

Whereas you've already got as good as you're gonna get with the PS4. AA after AA, no AAA in sight LOL

Kind of like the 360, LMAO. Nowhere in Microsoft's history as a console producer is there any evidence that their consoles get better as the generation ages. Quite the contrary, in fact.

#145 Posted by RR360DD (12134 posts) -

@bforrester420 said:


Kind of like the 360, LMAO. Nowhere in Microsoft's history as a console producer is there any evidence that their consoles get better as the generation ages. Quite the contrary, in fact.

LOL you're delusional. The Xbox got better as it introduced Xbox Live during its life.

The 360 got better as its OS improved and so many more features were added to it. What a stupid comment to make.

#146 Edited by bforrester420 (1711 posts) -

@RR360DD said:


LOL you're delusional. The Xbox got better as it introduced Xbox Live during its life.

The 360 got better as its OS improved and so many more features were added to it. What a stupid comment to make.

Oh, I'm sorry, I thought we used consoles to play games, not features... kind of like how the 360 received next to no exclusives for essentially the last 2 years of its life. Forgive my density.

#147 Edited by ronvalencia (15176 posts) -

@scatteh316 said:

@ronvalencia said:


You're a fool. The consoles don't have a 3770K CPU. The PS4's BF4 result shows R7-265-level performance.

I'm a fool... lmao... would you care to point out where I said the consoles have a 3770K?

And an R7 265 can actually kick out more frames than a PS4...

Minor difference. You missed the larger gain with Mantle when it's running on a lesser CPU.

Back to the main point: the PS4 has an inferior BF4 result when compared to the 7950.

Microsoft already did the math for raw ROPS memory bandwidth requirements (read/write), and 16 ROPS already saturates ~150 GB/s.
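That math is easy to sanity-check: raw ROP bandwidth demand is roughly ROPs x clock x bytes touched per pixel, with blending turning each pixel into a read-modify-write. A quick sketch (the per-pixel byte counts are the usual textbook figures, my assumption for what was counted, not taken from MS):

# Raw ROP bandwidth demand: ROPs * clock * bytes touched per pixel op.
def rop_bw_gbs(rops, clock_mhz, bytes_per_pixel):
    return rops * clock_mhz * 1e6 * bytes_per_pixel / 1e9

# 16 ROPs at the X1's 853 MHz:
print(rop_bw_gbs(16, 853, 4))    # 32-bit color, write only            -> ~55 GB/s
print(rop_bw_gbs(16, 853, 8))    # 32-bit color with blending (r+w)    -> ~109 GB/s
print(rop_bw_gbs(16, 853, 16))   # FP16 blend (8 B read + 8 B write)   -> ~218 GB/s

A ~150 GB/s figure sits inside that range, so 16 ROPs outrunning the X1's DDR3+ESRAM bandwidth is plausible on this arithmetic alone.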

#148 Edited by scatteh316 (4980 posts) -

@ronvalencia said:


Minor difference. You missed the larger gain with Mantle when it's running on a lesser CPU.

The difference can be quite large depending on the game... Mantle is 'Meh' at best...

#149 Posted by ronvalencia (15176 posts) -

@scatteh316 said:

@ronvalencia said:

Minor difference. You missed the larger gain with Mantle when it's running on a lesser CPU.

The difference can be quite large depending on the game... Mantle is 'Meh' at best...

My "minor difference" is for R7-265 vs PS4's GCN on multi-platform game results. R7-265 *is* slightly faster in raw TFLOPS, ROPS, tessellation and memory bandwidth when compared to PS4.

Mantle is not a silver bullet for GPU bound workloads.
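For reference, the "slightly faster" claim lines up with the commonly cited spec-sheet numbers (boost clocks used; these are published figures, not my measurements):

# Raw spec comparison: R7-265 vs the PS4's GPU (commonly cited figures).
def tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz * 1e6 / 1e12   # 2 FLOPs per shader per clock (FMA)

print(tflops(1024, 925))   # R7-265: ~1.89 TFLOPS, 179 GB/s GDDR5
print(tflops(1152, 800))   # PS4:    ~1.84 TFLOPS, 176 GB/s GDDR5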