Will DirectX 12 Help the Xbox 1?

#151 Posted by Phazevariance (12127 posts) -

Every Xbox thread seems to turn into a thread headed by lead cow tormentos spewing specs and graphs all over the place to try and belittle Xbox fans. GameSpot has lost interest in monitoring anything, too. Lamespot confirmed. Cowspot confirmed. Can't even talk about games without it being a comparison to PS4 hardware these days. Lame.

That being said, DX12 will likely help improve graphics, but it is unlikely to improve resolution.

#152 Edited by Gaming-Planet (18617 posts) -

http://semiaccurate.com/2014/03/18/microsoft-adopts-mantle-calls-dx12/

/thread

It's to benefit PC gamers.

#153 Edited by ronvalencia (25410 posts) -

@Gaming-Planet said:

http://semiaccurate.com/2014/03/18/microsoft-adopts-mantle-calls-dx12/

/thread

It's to benefit PC gamers.

DirectX 12's Mantle-like features make AMD's PC CPUs competitive against Intel's regardless of GPU selection (i.e. AMD or NVIDIA). The larger gain goes to AMD CPU + NVIDIA GPU and AMD CPU + AMD GPU combo users.

I have an Intel Core i7-4770K at 4.4GHz, which makes Mantle/DirectX 12 almost pointless (for current game engines).

#154 Edited by Gaming-Planet (18617 posts) -
@ronvalencia said:

@Gaming-Planet said:

http://semiaccurate.com/2014/03/18/microsoft-adopts-mantle-calls-dx12/

/thread

It's to benefit PC gamers.

DirectX 12's Mantle-like features make AMD's PC CPUs competitive against Intel's regardless of GPU selection (i.e. AMD or NVIDIA). The larger gain goes to AMD CPU + NVIDIA GPU and AMD CPU + AMD GPU combo users.

I have an Intel Core i7-4770K at 4.4GHz, which makes Mantle/DirectX 12 almost pointless (for current game engines).

Even with a much higher IPC, Mantle should make multi-threading a lot more efficient and allow more cores to be utilized efficiently as well.
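Roughly, the benefit being claimed can be pictured with a toy Python sketch (the draw calls and the `record_chunk` helper are invented stand-ins for illustration, not any real graphics API): each core records its own command list, and submission collapses into one cheap step instead of per-call driver overhead on a single thread.

```python
from concurrent.futures import ThreadPoolExecutor

# DX11-era drivers funnel draw submission through one thread; Mantle/DX12-style
# APIs let each core record its own command list. "Recording" is simulated here.

DRAW_CALLS = list(range(10_000))  # stand-ins for individual draw calls

def record_chunk(chunk):
    """Record one command list's worth of draw calls (simulated)."""
    return [("draw", call) for call in chunk]

def record_parallel(calls, workers=4):
    # Split draw calls evenly across worker threads, one command list each.
    size = -(-len(calls) // workers)  # ceiling division
    chunks = [calls[i:i + size] for i in range(0, len(calls), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        command_lists = list(pool.map(record_chunk, chunks))
    # A single submission step replaces per-call driver overhead.
    return [cmd for cl in command_lists for cmd in cl]

submitted = record_parallel(DRAW_CALLS)
print(len(submitted))  # 10000 draws recorded across 4 threads
```

The point is not that Python threads are fast; it is that the recording work no longer serializes on one core, which is exactly where a weaker per-core CPU gains the most.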

#155 Edited by FoxbatAlpha (10669 posts) -

@Krelian-co said:

@FoxbatAlpha said:

So pretty much DX12 will push the "beta tested in the future" XBOX ONE further and also push the PS4 into being obsolete.

One has better performance and the best multiplats; the other one has problems running Source engine games at 720p and uses the Microsoft hype word of the month to excuse the Xbone's terrible hardware. Lem logic at its best.

SMH. Krillin, when are you going to learn that you cannot force the petals of a rose to open? The day in the sun will be upon us soon.

#156 Posted by GrenadeLauncher (6843 posts) -

How long until this conference, anyway? I want to be here when the lems start calling a few more frames next year a knockout blow.

#157 Posted by tormentos (26634 posts) -

@ronvalencia said:

The difference is AMD GCN is not Fermi vs Kepler GK110. AMD GCN 1.0 still includes ACE units to fill in the holes during synchronous compute, hence it's not a big bullet point.

AMD Temash has 4 ACE units and it was mentioned as a side point.

R9-290X's 8 ACE units are backed by 44 CUs NOT by 18 CUs.

R9-290X is a near 4X scale of 7770.

7770 has 2 ACE units, 1 Rasterizer, 1 geometry, 16 ROPS, 10 CUs, 72 GB/s memory.

R9-290X has 8 ACE units, 4 Rasterizer, 4 geometry, 64 ROPS, 44 CUs, 320 GB/s memory.

AMD still allocated 2 ACE units per 11 CU block for R9-290X i.e. the ACE:CU ratio is still similar to 7770.

PS4's 8 ACE units could be a side effect of having two Temash modules, i.e. 2X Temash modules = 8 CPU cores = 8 ACE units.

One Temash module = 4 CPU cores = 4 ACE units.

----------------

Raytracing can be done on X1's GCN.

“Let’s say you are using procedural generation or raytracing via parametric surfaces – that is, using a lot of memory writes and not much texturing or ALU – Xbox One will be likely be faster.”

Boy that spinning will get you dizz...

You downplayed ACEs on the PS4 and now you are cheerleading for them on the Xbox One over a so-called update to enable them..lol

Once again one of your arguments comes back to haunt you.

Nothing on the PS4 is a side effect. The PS4 has 2 rasterizers and 8 ACEs, which doesn't fit your theory of 2 ACEs per 11-CU block on the R9 290. By the way, the 7970 doesn't have 8 ACEs either; it has just 2, like any GCN 1.0 part. The only GPUs with 8 ACEs are the R9 series and the PS4, that I know of.

And now the PS4 has 2 Temash modules? hahahaaaaaaaaaaaaaa...

Temash is 1.4GHz max with Turbo mode enabled; the PS4 is 1.6GHz.

Without Turbo mode, Temash is 1.0GHz.

The PS4, if anything, would be based on Kabini, which goes up to 2GHz. But that wouldn't explain it either, since the Xbox One uses the same Jaguar cores and doesn't have 8 ACEs.

Oh, and I didn't say the Xbox One can't do raytracing on its GPU; I said it would slow it to a crawl, which is 100% right. The Xbox One can't pull off lesser effects in Tomb Raider at full resolution; imagine ray tracing, which has a nasty hit on performance. So yeah, it was talked about recently that it could be done by cloud.

The rendering technique would, however, be yet another computational effort for the Xbox One, but it's possible Microsoft could offload that process to its vast network of cloud-powered servers.

http://www.gamespot.com/articles/xbox-one-could-get-photorealistic-rendering-system-for-amazing-visuals/1100-6418060/

So yeah, like I said: cloud-based ray tracing...

On the Xbox One hardware it would be too much, period. We all know it, and if you weren't such a big MS suck-up you would know it too.

#158 Edited by tormentos (26634 posts) -
@ronvalencia said:

Apparently, Xbox One has bricked its ACE units and second render unit.

From http://www.eurogamer.net/articles/digitalfoundry-microsoft-to-unlock-more-gpu-power-for-xbox-one-developers

"In addition to asynchronous compute queues, the Xbox One hardware supports two concurrent render pipes," Goossen pointed out. "The two render pipes can allow the hardware to render title content at high priority while concurrently rendering system content at low priority. The GPU hardware scheduler is designed to maximise throughput and automatically fills 'holes' in the high-priority processing. This can allow the system rendering to make use of the ROPs for fill, for example, while the title is simultaneously doing synchronous compute operations on the compute units."

5. Fill rate is a function of both ROPS and memory writes. The existence of the 7950's 32 ROPS at 800MHz with 240 GB/s > PS4's 32 ROPS at 800MHz with 176 GB/s shows that PS4's 32 ROPS are not being fully used.

7. Against your POV. Read http://gamingbolt.com/xbox-ones-esram-too-small-to-output-games-at-1080p-but-will-catch-up-to-ps4-rebellion-games

Bolcato stated that, “It was clearly a bit more complicated to extract the maximum power from the Xbox One when you’re trying to do that. I think eSRAM is easy to use. The only problem is…Part of the problem is that it’s just a little bit too small to output 1080p within that size. It’s such a small size within there that we can’t do everything in 1080p with that little buffer of super-fast RAM.

“It means you have to do it in chunks or using tricks, tiling it and so on. It’s a bit like the reverse of the PS3. PS3 was harder to program for than the Xbox 360. Now it seems like everything has reversed but it doesn’t mean it’s far less powerful – it’s just a pain in the ass to start with. We are on fine ground now but the first few months were hell.”

...

The PS4 is thus more of a gaming machine in its core focus. “Yeah, I mean that’s probably why, well at least on paper, it’s a bit more powerful. But I think the Xbox One is gonna catch up. But definitely there’s this eSRAM. PS4 has 8GB and it’s almost as fast as eSRAM [bandwidth wise] but at the same time you can go a little bit further with it, because you don’t have this slower memory. That’s also why you don’t have that many games running in 1080p, because you have to make it smaller, for what you can fit into the eSRAM with the Xbox One.”

One of the big differences between the X1 and Xbox 360 is the connection bandwidth between embedded memory and GPU.

AMD is just interested in selling more GPU products..

Radeon HD 7950's 32 ROPS results > PS4 says Hi.

Doubling X1's 16 ROPS to 32 ROPS wouldn't be fully used with memory bandwidth around 150 GB/s.

If we use Radeon HD 7950 as the top 32 ROPS example for 3DMark Vantage's fill rate score, i.e.

240 GB/s = 32 ROPS at 800MHz, and we divide it by 2, we get 120 GB/s = 16 ROPS at 800MHz. X1's 16 ROPS are clocked at 853MHz = 127.95 GB/s. Here we establish that X1's 16 ROPS go further than the 7790/R7-260X's.
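The scaling in that paragraph is just proportional arithmetic; a quick check (assuming, as the post does, that usable fill-rate bandwidth scales linearly with ROP count and clock, which is a simplification):

```python
# Checking the scaling argument with plain arithmetic.

hd7950_bw = 240.0                           # GB/s feeding 32 ROPS at 800 MHz
bw_16rops_800 = hd7950_bw / 2               # halve the ROP count -> 120 GB/s
bw_16rops_853 = bw_16rops_800 * 853 / 800   # rescale to X1's 853 MHz clock

print(round(bw_16rops_853, 2))  # 127.95 GB/s, the figure quoted above
```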

7790's 16 ROPS at 1GHz are gimped by 96 GB/s of memory bandwidth, which places it lower than the 7850. The fastest known 12-CU GCN (1.32 TFLOPS) with 153 GB/s memory bandwidth is the prototype 7850 with 12 CUs. Read http://www.tomshardware.com/reviews/768-shader-pitcairn-review,3196.html Note that the prototype 7850 (with 12 CUs) is slower than the retail 7850 (with 16 CUs).

The big difference between the prototype 7850 with 12 CUs (1.32 TFLOPS) and Xbox One (1.32 TFLOPS) is hitting the ~153 GB/s memory bandwidth level, i.e. Xbox One needs "tiling tricks" for effective eSRAM usage.

After all the software tricks, PS4 is still faster than Xbox One, i.e. the comparison is like R7-265 (1.89 TFLOPS + 179 GB/s memory bandwidth) > prototype 7850 with 12 CUs (1.32 TFLOPS + 153.6 GB/s memory bandwidth).

Battlefield 4 and the R7-265 vs 7850 review from http://www.guru3d.com/articles_pages/amd_radeon_r7_265_review,14.html

Remember,

1. prototype 7850 (1.32 TFLOPS + 153.6 GB/s) is slower than 7850 (1.76 TFLOPS + 153.6 GB/s).

2. 7850 (1.76 TFLOPS+ 153.6 GB/s) is slower than R7-265 (1.89 TFLOPS + 179 GB/s)

Radeon HD 7770 (1.28 TFLOPS)/7790/R7-260X don't have the software options to exceed their lower memory bandwidth limits, i.e. they're missing the ESRAM booster.

And now you keep quoting the same sh** you downplayed from Sony for months..lol

So if you're looking to target 1920x1080 with that setup, then you're talking about (8 + 4 + 4 + 4 + 4 + 4) * 1920 * 1080 = 55.3MB. On top of that we supported 16 shadow-casting lights which required 16 1024x1024 shadow maps in an array, plus 4 2048x2048 cascades for a directional light. That gives you 64MB of shadow maps + another 64MB of cascade maps, which you'll want to be reading from at the same time you're reading from your G-Buffers. Obviously some of these numbers are pretty extreme (we were still prototyping) and you could certainly reduce that a lot, but I wanted to give an idea of the upper bound on what an engine might want to be putting in ESRAM for their main render pass. However even without the shadows it doesn't really bode well for fitting all of your G-Buffers in 32MB at 1080p. Which means either decreasing resolution, or making some tough choices about which render targets (or which portions of render targets, if using tiled rendering) should live in ESRAM. Any kind of MSAA at 1080p also seems like a no-go for fitting in ESRAM, even for forward rendering. Just having a RGBA16f target + D32 depth buffer at 2xMSAA requires around 47.5MB at 1920x1080.

http://forum.beyond3d.com/showpost.php?p=1829417&postcount=185

He quoted a developer in B3D.

This ^^^ is the real reason why ESRAM is a problem: it is not enough. And you keep quoting Bolcato, but you ignore the biggest point he is making..

The only problem is…Part of the problem is that it’s just a little bit too small to output 1080p within that size. It’s such a small size within there that we can’t do everything in 1080p with that little buffer of super-fast RAM.

Bolcato is telling you right to your face that ESRAM IS TOO SMALL.

Which is something the developer from Beyond3D confirms in his explanation: he had 64MB of shadow maps and 64MB of cascade maps, without counting the (8 + 4 + 4 + 4 + 4 + 4) * 1920 * 1080 = 55.3MB of G-Buffers:

Lighting target: RGBA16f
Normals: RG16
Diffuse albedo + BRDF ID: RGBA8
Specular albedo + roughness: RGBA8
Tangents: RG16
Depth: D32

“It means you have to do it in chunks or using tricks, tiling it and so on.

In chunks, doing tricks, tiling and so on, which probably means downgrades..

However even without the shadows it doesn't really bode well for fitting all of your G-Buffers in 32MB at 1080p. Which means either decreasing resolution, or making some tough choices about which render targets (or which portions of render targets, if using tiled rendering) should live in ESRAM. Any kind of MSAA at 1080p also seems like a no-go for fitting in ESRAM, even for forward rendering. Just having a RGBA16f target + D32 depth buffer at 2xMSAA requires around 47.5MB at 1920x1080.

That's what the developer at Beyond3D says.

Now stop reading just what you like and understand: 32MB of ESRAM is too small. It has been said for months, and you claimed otherwise; a developer is telling it to your face and you just read what you like. Yeah, doing tricks to fit everything in those 32MB means cutting stuff: lower effects, or resolution, or giving up frames.

Like Tomb Raider and basically all multiplatforms have shown.
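For what it's worth, the Beyond3D figures quoted in this post can be re-derived in a few lines of arithmetic. The bytes per pixel come from the formats listed above (RGBA16f = 8, RG16 = 4, RGBA8 = 4, RGBA8 = 4, RG16 = 4, D32 = 4); the 4 bytes/texel shadow-map format is inferred from the 64MB totals, not stated in the quote:

```python
# Re-deriving the Beyond3D memory-budget numbers.

MB = 1024 * 1024
w, h = 1920, 1080

# G-buffer: 28 bytes/pixel from the six render targets listed above.
gbuffer = (8 + 4 + 4 + 4 + 4 + 4) * w * h
print(round(gbuffer / MB, 1))        # ~55.4 MB (the post rounds to 55.3MB)

shadows  = 16 * 1024 * 1024 * 4      # 16 shadow maps at 4 bytes/texel
cascades = 4 * 2048 * 2048 * 4       # 4 cascades, same format
print(shadows // MB, cascades // MB) # 64 MB + 64 MB

# RGBA16f target (8 bytes) + D32 depth (4 bytes) at 2xMSAA:
msaa = (8 + 4) * 2 * w * h
print(round(msaa / MB, 1))           # ~47.5 MB, vs only 32 MB of ESRAM
```

Every figure in the quote checks out, and each one on its own is bigger than the 32MB of ESRAM.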

#159 Edited by ACP_45 (2647 posts) -

@tormentos: wow, volumetric ray tracing on clouds..... Yes, that would take a lot of processing....

Dude, they tried to do real-time raytracing by combining 2 PS3s a while back; they could only generate an upscaled 480p at 12-25fps.....

If raytracing could easily be done, why do the graphics in our games look like absolute scrap compared to a scene generated by real-time raytracing?

Xbox One is more balanced, just not in graphics..... Dude, I get that the PS4 is stronger, but that's why mine probably goes up to 167 degrees. The PS4 is more focused on things that the Xbox One isn't.....

It's logical..... They have got Kinect and a bigger OS.

#160 Posted by tormentos (26634 posts) -

@acp_45 said:

@tormentos: wow, volumetric ray tracing on clouds..... Yes, that would take a lot of processing....

Dude, they tried to do real-time raytracing by combining 2 PS3s a while back; they could only generate an upscaled 480p at 12-25fps.....

If raytracing could easily be done, why do the graphics in our games look like absolute scrap compared to a scene generated by real-time raytracing?

Xbox One is more balanced, just not in graphics..... Dude, I get that the PS4 is stronger, but that's why mine probably goes up to 167 degrees. The PS4 is more focused on things that the Xbox One isn't.....

It's logical..... They have got Kinect and a bigger OS.

[embedded video]

This is the problem when you don't know what you're talking about: it was 3 PS3s, and they rendered at 720p with 4X MSAA using nothing but Cell, which is huge. The RSX wasn't touched; it was just using the 3 CPUs in those 3 units, and it was doing 720p.

#161 Posted by Solid_Max13 (3588 posts) -

@tormentos: I don't know why people downplay you, everything you posted is legit and checks out, good on you dude!

#162 Edited by ACP_45 (2647 posts) -

@tormentos: -_- Do you really think that they tried ray tracing on the PS3 just once? I do know what I'm talking about.

http://techreport.com/news/25943/imagination-technologies-real-time-ray-tracing-feasible-in-several-years

#163 Edited by ronvalencia (25410 posts) -

@tormentos said:
@ronvalencia said:

Apparently, Xbox One has bricked its ACE units and second render unit.

From http://www.eurogamer.net/articles/digitalfoundry-microsoft-to-unlock-more-gpu-power-for-xbox-one-developers

"In addition to asynchronous compute queues, the Xbox One hardware supports two concurrent render pipes," Goossen pointed out. "The two render pipes can allow the hardware to render title content at high priority while concurrently rendering system content at low priority. The GPU hardware scheduler is designed to maximise throughput and automatically fills 'holes' in the high-priority processing. This can allow the system rendering to make use of the ROPs for fill, for example, while the title is simultaneously doing synchronous compute operations on the compute units."

5. Fill rate is a function of both ROPS and memory writes. The existence of the 7950's 32 ROPS at 800MHz with 240 GB/s > PS4's 32 ROPS at 800MHz with 176 GB/s shows that PS4's 32 ROPS are not being fully used.

7. Against your POV. Read http://gamingbolt.com/xbox-ones-esram-too-small-to-output-games-at-1080p-but-will-catch-up-to-ps4-rebellion-games

Bolcato stated that, “It was clearly a bit more complicated to extract the maximum power from the Xbox One when you’re trying to do that. I think eSRAM is easy to use. The only problem is…Part of the problem is that it’s just a little bit too small to output 1080p within that size. It’s such a small size within there that we can’t do everything in 1080p with that little buffer of super-fast RAM.

“It means you have to do it in chunks or using tricks, tiling it and so on. It’s a bit like the reverse of the PS3. PS3 was harder to program for than the Xbox 360. Now it seems like everything has reversed but it doesn’t mean it’s far less powerful – it’s just a pain in the ass to start with. We are on fine ground now but the first few months were hell.”

...

The PS4 is thus more of a gaming machine in its core focus. “Yeah, I mean that’s probably why, well at least on paper, it’s a bit more powerful. But I think the Xbox One is gonna catch up. But definitely there’s this eSRAM. PS4 has 8GB and it’s almost as fast as eSRAM [bandwidth wise] but at the same time you can go a little bit further with it, because you don’t have this slower memory. That’s also why you don’t have that many games running in 1080p, because you have to make it smaller, for what you can fit into the eSRAM with the Xbox One.”

One of the big differences between the X1 and Xbox 360 is the connection bandwidth between embedded memory and GPU.

AMD is just interested in selling more GPU products..

Radeon HD 7950's 32 ROPS results > PS4 says Hi.

Doubling X1's 16 ROPS to 32 ROPS wouldn't be fully used with memory bandwidth around 150 GB/s.

If we use Radeon HD 7950 as the top 32 ROPS example for 3DMark Vantage's fill rate score, i.e.

240 GB/s = 32 ROPS at 800MHz, and we divide it by 2, we get 120 GB/s = 16 ROPS at 800MHz. X1's 16 ROPS are clocked at 853MHz = 127.95 GB/s. Here we establish that X1's 16 ROPS go further than the 7790/R7-260X's.

7790's 16 ROPS at 1GHz are gimped by 96 GB/s of memory bandwidth, which places it lower than the 7850. The fastest known 12-CU GCN (1.32 TFLOPS) with 153 GB/s memory bandwidth is the prototype 7850 with 12 CUs. Read http://www.tomshardware.com/reviews/768-shader-pitcairn-review,3196.html Note that the prototype 7850 (with 12 CUs) is slower than the retail 7850 (with 16 CUs).

The big difference between the prototype 7850 with 12 CUs (1.32 TFLOPS) and Xbox One (1.32 TFLOPS) is hitting the ~153 GB/s memory bandwidth level, i.e. Xbox One needs "tiling tricks" for effective eSRAM usage.

After all the software tricks, PS4 is still faster than Xbox One, i.e. the comparison is like R7-265 (1.89 TFLOPS + 179 GB/s memory bandwidth) > prototype 7850 with 12 CUs (1.32 TFLOPS + 153.6 GB/s memory bandwidth).

Battlefield 4 and the R7-265 vs 7850 review from http://www.guru3d.com/articles_pages/amd_radeon_r7_265_review,14.html

Remember,

1. prototype 7850 (1.32 TFLOPS + 153.6 GB/s) is slower than 7850 (1.76 TFLOPS + 153.6 GB/s).

2. 7850 (1.76 TFLOPS+ 153.6 GB/s) is slower than R7-265 (1.89 TFLOPS + 179 GB/s)

Radeon HD 7770 (1.28 TFLOPS)/7790/R7-260X don't have the software options to exceed their lower memory bandwidth limits, i.e. they're missing the ESRAM booster.

And now you keep quoting the same sh** you downplayed from Sony for months..lol

So if you're looking to target 1920x1080 with that setup, then you're talking about (8 + 4 + 4 + 4 + 4 + 4) * 1920 * 1080 = 55.3MB. On top of that we supported 16 shadow-casting lights which required 16 1024x1024 shadow maps in an array, plus 4 2048x2048 cascades for a directional light. That gives you 64MB of shadow maps + another 64MB of cascade maps, which you'll want to be reading from at the same time you're reading from your G-Buffers. Obviously some of these numbers are pretty extreme (we were still prototyping) and you could certainly reduce that a lot, but I wanted to give an idea of the upper bound on what an engine might want to be putting in ESRAM for their main render pass. However even without the shadows it doesn't really bode well for fitting all of your G-Buffers in 32MB at 1080p. Which means either decreasing resolution, or making some tough choices about which render targets (or which portions of render targets, if using tiled rendering) should live in ESRAM. Any kind of MSAA at 1080p also seems like a no-go for fitting in ESRAM, even for forward rendering. Just having a RGBA16f target + D32 depth buffer at 2xMSAA requires around 47.5MB at 1920x1080.

http://forum.beyond3d.com/showpost.php?p=1829417&postcount=185

He quoted a developer in B3D.

This ^^^ is the real reason why ESRAM is a problem: it is not enough. And you keep quoting Bolcato, but you ignore the biggest point he is making..

The only problem is…Part of the problem is that it’s just a little bit too small to output 1080p within that size. It’s such a small size within there that we can’t do everything in 1080p with that little buffer of super-fast RAM.

Bolcato is telling you right to your face that ESRAM IS TOO SMALL.

Which is something the developer from Beyond3D confirms in his explanation: he had 64MB of shadow maps and 64MB of cascade maps, without counting the (8 + 4 + 4 + 4 + 4 + 4) * 1920 * 1080 = 55.3MB of G-Buffers:

Lighting target: RGBA16f

Normals: RG16

Diffuse albedo + BRDF ID: RGBA8

Specular albedo + roughness: RGBA8

Tangents: RG16

Depth: D32

“It means you have to do it in chunks or using tricks, tiling it and so on.

In chunks, doing tricks, tiling and so on, which probably means downgrades..

However even without the shadows it doesn't really bode well for fitting all of your G-Buffers in 32MB at 1080p. Which means either decreasing resolution, or making some tough choices about which render targets (or which portions of render targets, if using tiled rendering) should live in ESRAM. Any kind of MSAA at 1080p also seems like a no-go for fitting in ESRAM, even for forward rendering. Just having a RGBA16f target + D32 depth buffer at 2xMSAA requires around 47.5MB at 1920x1080.

That's what the developer at Beyond3D says.

Now stop reading just what you like and understand: 32MB of ESRAM is too small. It has been said for months, and you claimed otherwise; a developer is telling it to your face and you just read what you like. Yeah, doing tricks to fit everything in those 32MB means cutting stuff: lower effects, or resolution, or giving up frames.

Like Tomb Raider and basically all multiplatforms have shown.

1. I'm not a Sony a$$licker and I'm NOT bound by their reality distortion field, even if I have bought their HDTV and laptop in the past.

I'm treating PS4 as any other AMD based box e.g. I don't make a big deal with R9-290X's 8 ACE units being superior over lesser GCNs.

Haven't you noticed by now the rarity of AMD GPU1 vs AMD GPU2 wars in the PC forums?

2. If you overclock AMD Temash you basically get AMD Kabini. The AMD Opteron X2100 version reaches 1.9GHz clock speed. The code names are inconsequential when the design changes are very minor.

3. Where did you get "In chunks doing tricks tiling and so on,which probably mean downgrades.."? Didn't you know AMD PRT (texture tiling) doesn't equal downgrades?

4. Your Beyond3D post's render target example is attempting to do the traditional render targets without render target tiling.

You're NOT reading your Beyond3D postings, i.e. "or making some tough choices about which render targets (or which portions of render targets, if using tiled rendering) should live in ESRAM"

There are two main target approaches for X1's 32 MB ESRAM.

1. "decreasing resolution."

2. "making some tough choices about which render targets (or which portions of render targets, if using tiled rendering) should live in ESRAM".

Again, you're NOT reading your Beyond3D postings. There's nothing in your Beyond3D post that contradicts Rebellion's POV, i.e. 32 MB ESRAM is too small for 1920x1080, hence "tiling tricks" to bring it closer to PS4.

Rebellion's "tiling tricks" = your Beyond3D post's option 2.
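A quick back-of-the-envelope for option 2, using the 28 bytes/pixel G-buffer layout from the Beyond3D post (this ignores alignment and tile-overlap costs, so it is optimistic):

```python
# How many 1920-wide rows of a 28 bytes/pixel G-buffer fit into 32 MB of ESRAM?

ESRAM = 32 * 1024 * 1024
bytes_per_row = 28 * 1920                  # G-buffer bytes for one row of pixels

rows_that_fit = ESRAM // bytes_per_row     # rows per ESRAM-resident tile
tiles_needed = -(-1080 // rows_that_fit)   # ceiling division over 1080 rows

print(rows_that_fit, tiles_needed)         # 624 rows per tile -> 2 tiles for 1080p
```

So the whole G-buffer does not fit at once, but two horizontal tiles (processed one after the other) would cover a 1080p frame, which is what "tiling tricks" means in practice.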

#164 Edited by ronvalencia (25410 posts) -

@tormentos said:

@acp_45 said:

@tormentos: wow, volumetric ray tracing on clouds..... Yes, that would take a lot of processing....

Dude, they tried to do real-time raytracing by combining 2 PS3s a while back; they could only generate an upscaled 480p at 12-25fps.....

If raytracing could easily be done, why do the graphics in our games look like absolute scrap compared to a scene generated by real-time raytracing?

Xbox One is more balanced, just not in graphics..... Dude, I get that the PS4 is stronger, but that's why mine probably goes up to 167 degrees. The PS4 is more focused on things that the Xbox One isn't.....

It's logical..... They have got Kinect and a bigger OS.

[embedded video]

This is the problem when you don't know what you're talking about: it was 3 PS3s, and they rendered at 720p with 4X MSAA using nothing but Cell, which is huge. The RSX wasn't touched; it was just using the 3 CPUs in those 3 units, and it was doing 720p.

LOL,

From http://www.tomshardware.com/news/Larrabee-Ray-Tracing,5769.html AMD's POV on ray tracing issue.

[embedded video]

The secret sauce is in the software.

#166 Posted by Guy_Brohski (1820 posts) -

@SuddenlyTragic said:

Well, I don't know enough about DX12, but if it's like every other DirectX ever released, then I would imagine it will have a negative impact on performance. For example, playing a PC game with DX11 features enabled leads to a decrease in performance; turning them off yields better performance. It's been like this for every generation of DirectX releases. Unless DX12 is completely different and allows for ease of programming and optimization similar to AMD's Mantle, then that's another story. But as I said, I don't know much of anything about 12; based on all past DirectX releases, I don't see how this could possibly help the Xbox One achieve 1080p and better performance.

I fully disagree, as the amount you're willing to pay for a more capable GPU is really the deciding factor in how well the newest DX will perform.

#167 Edited by tormentos (26634 posts) -

@ronvalencia said:

1. I'm not a Sony a$$licker and I'm NOT bound by their reality distortion field, even if I have bought their HDTV and laptop in the past.

I'm treating PS4 as any other AMD based box e.g. I don't make a big deal with R9-290X's 8 ACE units being superior over lesser GCNs.

Haven't you noticed by now the rarity of AMD GPU1 vs AMD GPU2 wars in the PC forums?

2. If you overclock AMD Temash you basically get AMD Kabini. The AMD Opteron X2100 version reaches 1.9GHz clock speed. The code names are inconsequential when the design changes are very minor.

3. Where did you get "In chunks doing tricks tiling and so on,which probably mean downgrades.."? Didn't you know AMD PRT (texture tiling) doesn't equal downgrades?

4. Your Beyond3D post's render target example is attempting to do the traditional render targets without render target tiling.

You're NOT reading your Beyond3D postings, i.e. "or making some tough choices about which render targets (or which portions of render targets, if using tiled rendering) should live in ESRAM"

There are two main target approaches for X1's 32 MB ESRAM.

1. "decreasing resolution."

2. "making some tough choices about which render targets (or which portions of render targets, if using tiled rendering) should live in ESRAM".

You're NOT reading your Beyond3D postings. There's nothing in your Beyond3D post that contradicts Rebellion's POV, i.e. 32 MB ESRAM is too small for 1920x1080, hence "tiling tricks" to bring it closer to PS4.

Rebellion's "tiling tricks" = your Beyond3D post's option 2.

No, you are an MS a$$licker.

1- You are hyping for the Xbox One now what you have been downplaying on the PS4 for months: ACEs and compute.

2- The PS4 is not Temash; there is never overclocked hardware inside consoles. On the contrary, the GPUs inside the Xbox One and PS4 are underclocked from their original parts, much less overclocked by 200MHz. It's not Temash; it's Kabini.

3- PRT is just a damn way to not have all textures completely resident in memory at the same time. It doesn't help with MSAA and other effects that need to live in the frame buffer; PRT will help with textures, nothing more.

However even without the shadows it doesn't really bode well for fitting all of your G-Buffers in 32MB at 1080p. Which means either decreasing resolution, or making some tough choices about which render targets (or which portions of render targets, if using tiled rendering) should live in ESRAM. Any kind of MSAA at 1080p also seems like a no-go for fitting in ESRAM,

So yeah, stated by a developer.
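To illustrate the PRT point being argued here, a toy sketch only: the tile size and the `tiles_touched` helper are invented for illustration and don't reflect the real GCN page-table mechanism. The idea is that only the texture tiles a frame actually samples need backing memory, which is why it helps textures but does nothing for frame-buffer-resident targets.

```python
# Partially-resident-texture idea: only sampled tiles get backed by memory.

TILE = 128  # texels per tile edge (illustrative stand-in)

def tiles_touched(u0, v0, u1, v1, tex_size):
    """Return the set of (tx, ty) tiles covered by a UV rectangle."""
    x0, y0 = int(u0 * tex_size) // TILE, int(v0 * tex_size) // TILE
    x1, y1 = int(u1 * tex_size) // TILE, int(v1 * tex_size) // TILE
    return {(tx, ty) for tx in range(x0, x1 + 1) for ty in range(y0, y1 + 1)}

# A 4096x4096 texture has 1024 tiles at this tile size, but a frame that
# only samples one corner needs just a handful of them resident:
resident = tiles_touched(0.0, 0.0, 0.1, 0.1, 4096)
print(len(resident), "of", (4096 // TILE) ** 2, "tiles resident")
```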

4- Oh please, stop damage controlling. Bolcato told you to your freaking face: ESRAM is TOO SMALL.

The only problem is…Part of the problem is that it’s just a little bit too small to output 1080p within that size. It’s such a small size within there that we can’t do everything in 1080p with that little buffer of super-fast RAM.

IT IS TOO SMALL, DEAL WITH IT.

Hahahaa

5- See, you can't read. I am not saying that the poster on Beyond3D counters Rebellion; all the contrary, both agree that ESRAM is too small..hhahaahaaa

You have a serious problem grasping the English language, dude. I suggest you buy Rosetta Stone...lol

@acp_45 said:

@tormentos: -_- Do you really think that they tried ray tracing on the PS3 just once? I do know what I'm talking about.

http://techreport.com/news/25943/imagination-technologies-real-time-ray-tracing-feasible-in-several-years

Ray tracing was done on Cell; no GPU was touched. In fact, GPU support under Linux on the PS3 was always off, so there was no GPU rendering. So yeah, using it was possible, and it is not quite the same as rendering a complete scene using Cell alone.

@Solid_Max13 said:

@tormentos: I don't know why people downplay you, everything you posted is legit and checks out, good on you dude!

Because when you post something that proves someone wrong, it makes them mad, so they start attacking you. I make a thread here and people don't even look at what I post; they just get in the thread with huge damage control and attack me..hahaha

#168 Edited by ronvalencia (25410 posts) -

@Solid_Max13 said:

@tormentos: I don't know why people downplay you, everything you posted is legit and checks out, good on you dude!

I guess you accept tormentos' memory bandwidth math stupidity.

From http://au.gamespot.com/forums/topic/29451500/xbox-one--7790-confirmed-by-xbox-one-architec.

#169 Posted by tormentos (26634 posts) -
@ronvalencia said:

LOL,

From http://www.tomshardware.com/news/Larrabee-Ray-Tracing,5769.html AMD's POV on ray tracing issue.

[embedded video]

The secret sauce is in the software.

Every time I post something about Cell and you post a GPU benchmark or video, I just chuckle...hahahaha

Dude, you are quoting a damn GPU from 3 years after Cell...hahahahaaa

Now find me an AMD CPU that was doing ray tracing like Cell did in 2006, and then you have a point. Hell, the demo I posted was being controlled in real time; that sh** you posted was a crappy video, and we all know the 4800 GPU changed nothing. Nvidia in those days still kicked the living crap out of AMD, it's sad..lol

#170 Posted by tormentos (26634 posts) -

@ronvalencia said:

I guess you accept tormentos' memory bandwidth math stupidity.

From http://au.gamespot.com/forums/topic/29451500/xbox-one--7790-confirmed-by-xbox-one-architec.

Should I post your stupidity? Like calling the CPU inside the PS4 Temash, which you just did in your endless quest to downplay the PS4?

Temash in turbo mode is 1.4GHz; it is not inside the PS4, period. You have no proof whatsoever but your biased-ass opinion, just like you claim the Xbox One has a 7850 with 12 CUs because it serves your stupid argument best, when the GPU is damn well known to be a 7790 with 2 CUs off and a lower clock. Then you try to imply that they are the same when structure-wise they are different; Pitcairn is bigger.

But hey, how about now you refuse to admit ESRAM is too small, even when the damn developer you quote says so?

Prove that the CPU in the PS4 is Temash, because I already proved the speed doesn't match, and we all know overclocked hardware inside consoles is a big no-no.

But even if it was Temash or Kabini, that would still not explain why the PS4 has 8 ACEs. You claim Kabini is the same with some minor changes, but the Xbox One uses Kabini as well and doesn't have 8 ACEs, so your theory is wrong, period. The PS4 and Xbox One use the same CPU, period, so it is not because of the CPU that the PS4 has 8 ACEs. It is not a side effect; it is something Sony has been hyping since before the Xbox One was even unveiled, which you have been downplaying.

Avatar image for ronvalencia
#171 Edited by ronvalencia (25410 posts) -
@tormentos said:

@ronvalencia said:

I guess you accept tormentos' memory bandwidth math stupidity.

From http://au.gamespot.com/forums/topic/29451500/xbox-one--7790-confirmed-by-xbox-one-architec.

Should I post your stupidity? Like calling the CPU inside the PS4 Temash, which you just did in your endless quest to downplay the PS4?

Temash in turbo mode is 1.4GHz; it is not inside the PS4, period. You have no proof whatsoever but your biased ass opinion, just like you claim the Xbox One has a 7850 with 12 CUs because it serves your stupid argument best, when the GPU is damn well known to be a 7790 with 2 CUs off and a lower clock. Then you try to imply that they are the same when structure-wise they are different; Pitcairn is bigger.

But hey, how about now? You refuse to admit ESRAM is too small even when the damn developer you quote says so.

Prove that the CPU in the PS4 is Temash, because I already proved the speed doesn't match, and we all know overclocked hardware inside consoles is a big no-no..

But even if it was Temash or Kabini, that would still not explain why the PS4 has 8 ACEs. You claim Kabini is the same with some minor changes, but the Xbox One uses Kabini as well and doesn't have 8 ACEs, so your theory is wrong, period. The PS4 and Xbox One use the same CPU, period, so it is not because of the CPU that the PS4 has 8 ACEs, and it is not a side effect; it is something Sony has been hyping since before the Xbox One was even unveiled, which you have been downplaying.

1. The AMD Temash SoC sports an AMD Jaguar CPU, just like the PS4's Liverpool SoC.

2. Die shots of the PS4 and Temash show the AMD Jaguar CPU blocks being nearly identical.

3. PS4's Liverpool main IP blocks

  • Two Temash/Kabini/Kyoto SoCs, which yields 8 ACE units.
  • Pitcairn Island GCN, i.e. the low CU count scaled up to 20 CUs.

There's a minor modification to the L2 cache (SRAM) which maximises its smaller SRAM pool's usage for GPGPU. Pitcairn Island GCN has 512 KB of L2 cache (SRAM) and 1152 KB of local data store (LDS, SRAM) from 18 CUs.

7970 has 2048 KB local data store (LDS, SRAM) from 32 CUs and 768 KB L2 cache.

R9-290X has 2816 KB of local data store (LDS, SRAM) from 44 CUs and 1024 KB of L2 cache.

I'm not even factoring in the 256 KB of register storage (yet another SRAM store) per CU: ~11 megabytes of ultra-fast SRAM register storage for R9-290X, and ~4.6 megabytes for PS4's 18-CU GCN. The math: 64 KB × 4 SIMDs = 256 KB of vector register storage per CU.

If you do the math, larger-scale GCNs simply have more very fast SRAM storage, hence PS4's L2 cache change doesn't impress me; it's a good modification at a given budget, but it's no match for larger-scale GCNs.
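Those totals can be sanity-checked with a few lines of arithmetic. The 64 KB LDS per CU and 64 KB × 4 SIMD register files per CU are standard GCN figures; the CU counts and L2 sizes are the ones quoted in this post:

```python
# Back-of-envelope GCN on-chip SRAM totals.
LDS_PER_CU_KB = 64        # local data store per compute unit
VGPR_PER_CU_KB = 64 * 4   # 4 SIMDs x 64 KB vector register file each

def sram_totals(cu_count, l2_kb):
    """Return (LDS, vector registers, L2) totals in KB for a GCN part."""
    return (cu_count * LDS_PER_CU_KB, cu_count * VGPR_PER_CU_KB, l2_kb)

for name, cus, l2_kb in [("PS4 GPU (18 CU)", 18, 512),
                         ("7970 (32 CU)", 32, 768),
                         ("R9-290X (44 CU)", 44, 1024)]:
    lds, regs, l2 = sram_totals(cus, l2_kb)
    print(f"{name}: LDS {lds} KB, registers {regs} KB "
          f"(~{regs / 1024:.1f} MB), L2 {l2} KB")
```

Registers alone come out to ~4.6 MB for 18 CUs and ~11 MB for 44 CUs, which is where the scale advantage comes from.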

------------------

I have supported Rebellion's POV, i.e. to maximise X1's ROPs and TMUs with 32 MB of ESRAM (i.e. near prototype-7850 levels), "tiling tricks" are a must.

"Tiling tricks" = memory-related workarounds for the small ESRAM storage.

I even quoted Intel's POV on why they added 128 MB of eDRAM to the Intel HD 5200 IGP, i.e. for future workloads. If you have a brain, the conclusion is that 32 MB of fast video memory is NOT enough for non-tiling workloads.

Note why MS is pushing tiling-based solutions: they know 32 MB of ESRAM is not enough to maximise X1's GCN solution (with traditional PC-style graphics workloads).

PS4 was designed to handle traditional PC-style graphics workloads, i.e. tiling is just an option, NOT a must.
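To make the "32 MB is not enough without tiling" point concrete, here is a rough sketch. The render-target list is an assumed deferred-shading setup for illustration, not any particular engine's actual layout:

```python
# Rough 1080p G-buffer footprint vs. a 32 MB ESRAM scratchpad.
# Buffer formats below are illustrative assumptions.
WIDTH, HEIGHT = 1920, 1080
buffers_bpp = {
    "albedo (RGBA8)": 4,
    "normals (RGBA16F)": 8,
    "depth/stencil (D24S8)": 4,
    "HDR light accum (RGBA16F)": 8,
}
total_bytes = sum(bpp * WIDTH * HEIGHT for bpp in buffers_bpp.values())
esram_bytes = 32 * 1024 * 1024
print(f"G-buffer: {total_bytes / 2**20:.1f} MB vs ESRAM: 32 MB")
# ~47.5 MB does not fit in 32 MB, so either some buffers spill to
# slower DDR3, or the frame is rendered in tiles small enough that
# each tile's buffers fit -- the "tiling tricks" in question.
```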

From my 7950/7970, then R9-290/R9-290X, based POV, why would I (and other Hermits with similar-level GPUs) be impressed with PS4?

I have the potential to CrossFire two Hawaii GCNs in one box, which would absolutely murder the PS4, i.e. 10+ TFLOPS vs 1.84 TFLOPS.

GS's System Wars is not limited to consoles. Get your head out of Lems vs Cows.

Avatar image for always_explicit
#172 Posted by always_explicit (3379 posts) -

Mods, can we please have a sticky for @tormentos and @ronvalencia to battle it out? The technobabble and messy quote chains are just too much for me to bear. Also....I don't understand anything they are talking about.

Avatar image for ronvalencia
#173 Edited by ronvalencia (25410 posts) -


@tormentos said:
@ronvalencia said:

LOL,

From http://www.tomshardware.com/news/Larrabee-Ray-Tracing,5769.html AMD's POV on ray tracing issue.

Loading Video...

The secret sauce is in the software.

Every time I post something about Cell and you post a GPU benchmark or video, I just chuckle...hahahaha

Dude, you are quoting a damn GPU 3 years newer than Cell...hahahahaaa

Now find me an AMD CPU that was doing ray tracing like Cell in 2006, and then you have a point. Hell, the demo I posted was being controlled in real time; that sh** you posted was a crappy video, and we all know the 4800 GPU changed nothing. Nvidia in those days still kicked the living crap out of AMD; it's sad..lol

Who cares, when AMD's 2011-era Rory Read management gave several key VLIW Radeon HD engineers the boot?

NVIDIA GT200's die size 576 mm^2.

AMD RV770's die size 260 mm^2. One should expect more from NVIDIA GT200.

Key VLIW engineers for R600 (with its 420 mm^2 die size) were given the boot. R600's broken MSAA hardware was LOL, and I was a GeForce 8600 GTS/8600M GT owner in that period, i.e. an Intel Core 2 Duo + GeForce 8600 GTS combo. I switched to Radeon HD when NVIDIA had their RROD-type episode during H2 2008.

Note that I was very close to purchasing a GeForce GTX 780 instead of the R9-290X, i.e. I'm not going to buy another feature level 11_0 GPU.

Under Rory Read's management, Hawaii's die size is 438 mm^2, which is larger than R600's 420 mm^2. NVIDIA's GK110 has a 551 mm^2 die size, i.e. NVIDIA's flagship GPU game plan hasn't changed since GT200.

Avatar image for Solid_Max13
#174 Posted by Solid_Max13 (3588 posts) -

@always_explicit said:

Mods, can we please have a sticky for @tormentos and @ronvalencia to battle it out? The technobabble and messy quote chains are just too much for me to bear. Also....I don't understand anything they are talking about.

Lol, I love it. I'm a huge tech nerd, so it's interesting, but expect this thread to get crazy after the DX12 unveiling today at GDC; we shall finally see!

Avatar image for always_explicit
#175 Posted by always_explicit (3379 posts) -

@Solid_Max13 said:

@always_explicit said:

Mods, can we please have a sticky for @tormentos and @ronvalencia to battle it out? The technobabble and messy quote chains are just too much for me to bear. Also....I don't understand anything they are talking about.

Lol, I love it. I'm a huge tech nerd, so it's interesting, but expect this thread to get crazy after the DX12 unveiling today at GDC; we shall finally see!

I might love it if I understood it; they may as well be talking Japanese for all it means to me. As someone who understands it all, can you educate me as to which one is actually talking sense? I'm assuming one of them is "right" and one of them is "wrong"....or is there a shitty grey area that creates this never-ending battle...?

Avatar image for rob9999991
#176 Edited by Rob9999991 (92 posts) -

@StrongBlackVine:

little fanboy... The grown-ups are talking... finish your apple juice and go to bed.

Avatar image for StrongBlackVine
#177 Edited by StrongBlackVine (13262 posts) -

@rob9999991 said:

@StrongBlackVine:

little fanboy... The grown-ups are talking... finish your apple juice and go to bed.

You are on System Wars, idiot, and what I said was correct.

Avatar image for acp_45
#178 Posted by ACP_45 (2647 posts) -

@ronvalencia: I think Tormentos is denying the future of software.

One day hardware will become less of a problem.

Avatar image for acp_45
#179 Edited by ACP_45 (2647 posts) -

@tormentos: If you had 3 PS3s to do real-time ray tracing, with 7 SPEs in use and focused just on ray tracing, it would be possible. The real problem with a game that uses ray tracing is that tracing isn't going to be the only thing you throw resources at...... ray tracing will take up most of the compute, and little will be left for everything else. That's where software comes in.......
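A quick sketch of that budget problem; the rays-per-second throughput below is a placeholder assumption for illustration, not a measured Cell number:

```python
# Illustrative ray budget for real-time ray tracing at 30 fps.
rays_per_second = 40e6      # assumed aggregate tracing throughput
fps = 30
pixels = 1280 * 720         # one primary ray per pixel at 720p
primary_rays = pixels
rays_per_frame = rays_per_second / fps
print(f"Budget: {rays_per_frame:,.0f} rays/frame; "
      f"primary rays alone: {primary_rays:,}")
# Primary visibility already consumes most of the per-frame budget,
# leaving little for shadow/reflection rays or any other workload.
```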

Avatar image for Gargus
#180 Posted by Gargus (2147 posts) -

DX12 won't even release until fall/winter of 2015, so IF it does help, that help will be a long way off.

And no, I don't think it will help much. It might give it a little boost, but to be honest it will be so small that most people won't be able to tell the difference.

Besides, it isn't really the graphics that people don't like about the XB1. It's the shitty game library, the forced Kinect purchase, the way it's being billed as a media box, the high price, the need to pay more for XBLG over PS+ and get a whole lot less if you want to do ANYTHING online at all, and so on. It's Microsoft's bloated and closed-minded thinking that is hindering the machine. That's why people like me, who used to buy all the consoles, are now getting a PS4 only, or in general are leaning to PS4.

Avatar image for kuu2
#181 Posted by kuu2 (10231 posts) -

The short answer is a resounding yes, yes DX 12 will improve The One. Built for the future confirmed.

Avatar image for Krelian-co
#182 Posted by Krelian-co (13274 posts) -

@kuu2 said:

The short answer is a resounding yes, yes DX 12 will improve The One. Built for the future confirmed.

Yeah, using hardware that can hardly run games at their bare minimum. Built for the future indeed!

Avatar image for kuu2
#183 Posted by kuu2 (10231 posts) -

@Krelian-co: And yet Sony doesn't have a single first-party game running at 1080p 60fps, while MSoft does. The architecture is not off-the-shelf parts, so the curve is steeper, and now we are seeing this gap close only 5 months into the gen, even on older-engine games.

Avatar image for GrenadeLauncher
#184 Posted by GrenadeLauncher (6843 posts) -

So the question got answered: not really until 2016, when it might get easier for devs to port from PC to Xbone.

That's a real gamechanger guys.

Avatar image for Krelian-co
#185 Posted by Krelian-co (13274 posts) -

@kuu2 said:

@Krelian-co: And yet Sony doesn't have a single first-party game running at 1080p 60fps, while MSoft does. The architecture is not off-the-shelf parts, so the curve is steeper, and now we are seeing this gap close only 5 months into the gen, even on older-engine games.

a corridor racer with no real-time weather or lighting, COLOR ME SHOCKED!

And what do you mean by "closing the gap"? Every single multiplat has been way better on PS4, with the exception of Thief, which was a mess on both.

Avatar image for ronvalencia
#186 Edited by ronvalencia (25410 posts) -

@Gaming-Planet said:

http://semiaccurate.com/2014/03/18/microsoft-adopts-mantle-calls-dx12/

/thread

It's to benefit PC gamers.

GDC 2014 DirectX 12 results on an NVIDIA GeForce Titan Black:

Scoring less than 46.3 fps at 1920x1080 on Forza 5.

Avatar image for Solid_Max13
#187 Posted by Solid_Max13 (3588 posts) -

@kuu2 said:

@Krelian-co: And yet Sony doesn't have a single first-party game running at 1080p 60fps, while MSoft does. The architecture is not off-the-shelf parts, so the curve is steeper, and now we are seeing this gap close only 5 months into the gen, even on older-engine games.

Well, 1080p and 60fps isn't hard when the audience looks like this lol!

Avatar image for ReadingRainbow4
#188 Posted by ReadingRainbow4 (18733 posts) -

@Solid_Max13 said:

@kuu2 said:

@Krelian-co: And yet Sony doesn't have a single first-party game running at 1080p 60fps, while MSoft does. The architecture is not off-the-shelf parts, so the curve is steeper, and now we are seeing this gap close only 5 months into the gen, even on older-engine games.

Well, 1080p and 60fps isn't hard when the audience looks like this lol!

lmfao the flag girl is 3d but holy crap does that model look early ps2.

Avatar image for Solid_Max13
#189 Posted by Solid_Max13 (3588 posts) -

@ReadingRainbow4 said:

@Solid_Max13 said:

@kuu2 said:

@Krelian-co: And yet Sony doesn't have a single first-party game running at 1080p 60fps, while MSoft does. The architecture is not off-the-shelf parts, so the curve is steeper, and now we are seeing this gap close only 5 months into the gen, even on older-engine games.

Well, 1080p and 60fps isn't hard when the audience looks like this lol!

lmfao the flag girl is 3d but holy crap does that model look early ps2.

Loading Video...

This is actually what Forza 5 spectators look like

Avatar image for kuu2
#191 Posted by kuu2 (10231 posts) -

@Solid_Max13 said:

@ReadingRainbow4 said:

lmfao the flag girl is 3d but holy crap does that model look early ps2.

Loading Video...

This is actually what Forza 5 spectators look like

Yet DelayClub is still not here and my point stands. Thanks for confirming.

Avatar image for ReadingRainbow4
#194 Posted by ReadingRainbow4 (18733 posts) -

Stay classy lems.

Avatar image for ronvalencia
#195 Edited by ronvalencia (25410 posts) -
@acp_45 said:

@ronvalencia: I think Tormentos is denying the future of software.

One day hardware will become less of a problem.

From https://www.amd.com/us/press-releases/Pages/amd-demonstrates-2014mar20.aspx

"Full DirectX 12 compatibility promised for the award-winning Graphics Core Next architecture" - AMD.

AMD's PR has claimed "FULL DirectX 12 compatibility" for their current GCNs. NVIDIA has yet to claim "FULL DirectX 12 compatibility".

Avatar image for rob9999991
#196 Posted by Rob9999991 (92 posts) -

@StrongBlackVine:

...shhhhhhhhhhhhhhhhhh

Avatar image for scatteh316
#197 Posted by scatteh316 (7003 posts) -

This thread is full of fail and full of idiots who have no grasp of hardware and software.

Avatar image for ronvalencia
#198 Posted by ronvalencia (25410 posts) -

@scatteh316 said:

This thread is full of fail and full of idiots who have no grasp of hardware and software.

Apply it to yourself.

Avatar image for scatteh316
#199 Posted by scatteh316 (7003 posts) -

@ronvalencia said:

@scatteh316 said:

This thread is full of fail and full of idiots who have no grasp of hardware and software.

Apply it to yourself.

I'm talking about you.... 90% of the fail in this thread is from you.... You are literally the most clueless person I know when it comes to hardware and software.

Avatar image for Crypt_mx
#200 Posted by Crypt_mx (4739 posts) -

@killatwill15 said:

DirectX 11.2 didn't help like most lemmings said it would,

and now it's 12?

Just let it go, lemmings.

Pretty soon it will be 13, and then 14,

and nothing will help it, because it's shitty hardware.

This is a very outdated style of thinking; when tech is advancing so quickly, how can you predict that nothing will help "shitty hardware"? Apple always gets the very most out of the iPhone, which usually benchmarks the highest despite having the lowest specs of any phone in its class.

Have you ever heard of optimization? Like when a game is in alpha and runs poorly, and then they optimize it to bring up performance? What happens if they continue to optimize it? What if they were given better tools to optimize it with? That's DirectX 12.