DirectX 12 Boosts Xbox One CPU Performance by 50%, GPU by 20% (leak)


#251  Edited By 04dcarraher
Member since 2004 • 23832 Posts

@tormentos said:
@StormyJoe said:

What I claim there has never been in question, you fool; what has been is streaming power to the Xbox One, because those kinds of processes require bandwidth that far exceeds online connections....

Streaming compute power to every Xbox individually, whether SP or MP? Sure. However, if it's only on the MP servers and it's a global effect, like physics and interactions that everyone sees, the server only has to tell the consoles what is happening, and the correct animations and effects will be rendered locally. There is no real increase in bandwidth for telling/rendering the what and where when the server is doing most of the calculations and just sending the results....
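A rough back-of-envelope supports this; with illustrative (assumed) numbers of 200 tracked objects, a 32-byte state record per object (id, position, rotation), and a 30 Hz tick rate, the state traffic is

\[
200 \times 32\ \text{B} \times 30\ \text{Hz} \approx 192\ \text{KB/s} \approx 1.5\ \text{Mbit/s},
\]

which is orders of magnitude below what streaming rendered video would require.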


#252  Edited By Gue1
Member since 2004 • 12171 Posts

This is a really nice article to read for those who still believe in the secret sauce.

http://wccftech.com/xbox-one-directx-12-asynchronous-compute-hardware-specifications-ps4-comparison/

-

tl;dr version:

DX 11.X already has a significantly higher number of draw calls than DX 11. Therefore DX 11.X already possesses most of the significant updates present in DX 12 API.

-


#253  Edited By 04dcarraher
Member since 2004 • 23832 Posts

@Gue1:

Because it has lower overhead and has draw bundles. However, DX12 will remove the single-threaded, deferred communication to the GPU that is still in the DX 11.X API on X1.

The main problem facing both consoles is the weak Jaguar CPU; both need to maximize CPU usage with more efficient ways to save resources and pool those savings into areas where the system is waiting for data from the CPU. This includes the GPU idling ("bottleneck") while waiting on the CPU.
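To make that bottleneck concrete, here is a minimal sketch of the DX11-style submission model being described (assumed setup, error handling omitted; an illustration, not production code): worker threads can record into deferred contexts, but every command list is still played back through the single immediate context on one thread.

```cpp
#include <d3d11.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> immediate;
    D3D_FEATURE_LEVEL level;
    // One device, one immediate context.
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, &level, &immediate);

    // A worker thread may record commands into a deferred context...
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);
    // ... state changes and draw calls would be recorded here ...
    ComPtr<ID3D11CommandList> commands;
    deferred->FinishCommandList(FALSE, &commands);

    // ...but playback always funnels through the one immediate context,
    // on one thread: this is the serialization point, and the GPU idles
    // whenever that thread cannot feed it fast enough.
    immediate->ExecuteCommandList(commands.Get(), TRUE);
}
```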


#254 worknow222
Member since 2007 • 1816 Posts

Why the hell are people fighting over this? Can we not wait and see? If it makes a difference, however big or small, so be it, but fighting over who's right is pathetic.


#255 Chutebox
Member since 2007 • 50699 Posts

@Gue1: Lems are going to be believing in this shit even after this gen is done.


#256 tormentos
Member since 2003 • 33784 Posts

@Gue1 said:

This is really nice article to read for those that still believe in the secret sauce.

http://wccftech.com/xbox-one-directx-12-asynchronous-compute-hardware-specifications-ps4-comparison/

-

tl;dr version:

DX 11.X already has a significantly higher number of draw calls than DX 11. Therefore DX 11.X already possesses most of the significant updates present in DX 12 API.

-

This is what I have been telling lemmings all along: DX12 is inside the Xbox One already. Draw calls have always been better on consoles than on PC, as AMD itself stated before this gen even started, when it blamed Windows for them; it's not a secret. In fact, two years after blaming Windows for the poor draw calls they came out with Mantle, while DX12 on PC arrived two years later still.

The argument is not that DX12 does nothing for the Xbox One, but that DX12 already did what it was supposed to do, because most of DX12 is console optimization.


#257 ronvalencia
Member since 2008 • 29612 Posts

@Gue1 said:

This is really nice article to read for those that still believe in the secret sauce.

http://wccftech.com/xbox-one-directx-12-asynchronous-compute-hardware-specifications-ps4-comparison/

-

tl;dr version:

DX 11.X already has a significantly higher number of draw calls than DX 11. Therefore DX 11.X already possesses most of the significant updates present in DX 12 API.

-

The PS4's console-level APIs get extra shader compute gains with the async method. Async compute changes how the CPU communicates with the GPU.

The statement

"DX 11.X already has a significantly higher number of draw calls than DX 11. Therefore DX 11.X already possesses most of the significant updates present in DX 12 API."

didn't factor in Async compute.


#259 ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

Bullshit. For Crackdown 3 multiplayer modes, replace Eve Online's thousands of ships with a building destruction sequence.

Total bullshit, as always mixing two unrelated things.

Fact is the Xbox One is NOT PROCESSING THOSE PHYSICS; the cloud does, and it streams the results to the Xbox One, which is why MS in its latest patent talks about H.264, which is a video codec, fool.

And that's the reason why the single player is totally different. MS blew smoke up you people's asses once again, just like they did with the whole balance crap and DX12.

Bullshit. You are making arguments without knowledge.

XBO is processing server-generated destruction sequence results. This is like processing a pre-baked destruction sequence, but this "pre-baked" destruction sequence is being dynamically generated by the server.

A 7770 can run Eve Online with the server doing most of the object tracking and physics calculations.

Unlike you, I have shipped a commercial fat-client/fat-server app, so fack off.


#260  Edited By Gue1
Member since 2004 • 12171 Posts

@ronvalencia:

They do factor it in. It's a long read (page 4), but the thing is that devs had access to async since day one with 2 ACEs, while the PS4 has more ACEs and an overall better setup for async. Even when it comes to the API, Sony's GNM still provides deeper low-level access and more features than DX12 (page 3).

The cake was always a lie.


#261  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Gue1 said:

@ronvalencia:

They do factor it in. It's a long read (page 4), but the thing is that devs had access to async since day one with 2 ACEs, while the PS4 has more ACEs and a better setup for async. Even when it comes to the API, Sony's GNM still provides deeper low-level access and more features than DX12 (page 3). The cake was always a lie.

Again, the statement

"DX 11.X already has a significantly higher number of draw calls than DX 11. Therefore DX 11.X already possesses most of the significant updates present in DX 12 API."

didn't factor in Async compute.

DX 11.X reduces the draw call cost, which is different from proper MT scaling and async compute dispatch.

"Day one" for XBO? Bullshit. A crippled async feature in March 2014 is NOT "day one".

XDK sources debunked the "day one" async-enabled claim.


#262 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

Again, the statement

"DX 11.X already has a significantly higher number of draw calls than DX 11. Therefore DX 11.X already possesses most of the significant updates present in DX 12 API."

didn't factor in Async compute.

DX 11.X reduces the draw call cost, which is different from proper MT scaling and async compute dispatch.

"Day one" for XBO? Bullshit. A crippled async feature in March 2014 is NOT "day one".

XDK sources debunked the "day one" async-enabled claim.

Bullshit, man, you can't have it both ways. How the fu** are you supposed to bypass the ROPs with compute shaders if compute shaders weren't supported? It's a joke; in March 2014 it got exposed, so STFU and drop your pathetic fanboy excuses. Funny how you post things to work around something, but then you yourself don't want to admit they're valid.

@ronvalencia said:

Bullshit. You are making arguments without knowledge.

XBO is processing server-generated destruction sequence results. This is like processing a pre-baked destruction sequence, but this "pre-baked" destruction sequence is being dynamically generated by the server.

A 7770 can run Eve Online with the server doing most of the object tracking and physics calculations.

Unlike you, I have shipped a commercial fat-client/fat-server app, so fack off.

And streamed to the Xbox One, which is why the single player is totally different.

Object tracking is not the same as streaming power to the Xbox One, fool. That kind of crap has been done for years in MMOs and has never required a huge cloud. Likewise, you don't need the cloud to do destruction either; that can be handled by the GPU just fine.

The only crap you have shipped to us is your blind bias for MS and the crappy Xbox One. Like I already told you, I couldn't care less about your wonderful fantasy-island theories; fact is all of them have turned to dust, and the Xbox One will trail the PS4 all gen long. The gap is too big to overcome, period.

I bet my account that by 2018 the Xbox One will still be behind the PS4 and will still have the same problems it is facing now...


#263  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

Again, the statement

"DX 11.X already has a significantly higher number of draw calls than DX 11. Therefore DX 11.X already possesses most of the significant updates present in DX 12 API."

didn't factor in Async compute.

DX 11.X reduces the draw call cost, which is different from proper MT scaling and async compute dispatch.

"Day one" for XBO? Bullshit. A crippled async feature in March 2014 is NOT "day one".

XDK sources debunked the "day one" async-enabled claim.

Bullshit, man, you can't have it both ways. How the fu** are you supposed to bypass the ROPs with compute shaders if compute shaders weren't supported? It's a joke; in March 2014 it got exposed, so STFU and drop your pathetic fanboy excuses. Funny how you post things to work around something, but then you yourself don't want to admit they're valid.

@ronvalencia said:

Bullshit. You are making arguments without knowledge.

XBO is processing server-generated destruction sequence results. This is like processing a pre-baked destruction sequence, but this "pre-baked" destruction sequence is being dynamically generated by the server.

A 7770 can run Eve Online with the server doing most of the object tracking and physics calculations.

Unlike you, I have shipped a commercial fat-client/fat-server app, so fack off.

And streamed to the Xbox One, which is why the single player is totally different.

Object tracking is not the same as streaming power to the Xbox One, fool. That kind of crap has been done for years in MMOs and has never required a huge cloud. Likewise, you don't need the cloud to do destruction either; that can be handled by the GPU just fine.

The only crap you have shipped to us is your blind bias for MS and the crappy Xbox One. Like I already told you, I couldn't care less about your wonderful fantasy-island theories; fact is all of them have turned to dust, and the Xbox One will trail the PS4 all gen long. The gap is too big to overcome, period.

I bet my account that by 2018 the Xbox One will still be behind the PS4 and will still have the same problems it is facing now...

There are two shader compute systems in AMD GCN: sync compute and async compute.

Sync compute is processed by the GCP unit(s).

Async compute is processed by the ACE units.

Both compute methods can write to texture UAVs, which use the TMUs.
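For what it's worth, this split is visible in the D3D12 API: graphics goes to a DIRECT queue (the GCP in GCN terms), while a separate COMPUTE queue is what the driver schedules onto the ACEs. A minimal sketch (device checks and the actual dispatch omitted):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics and sync compute submit to a DIRECT queue (the GCP).
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Async compute submits to a separate COMPUTE queue, which the
    // driver maps onto the ACE hardware queues.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> compQueue;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&compQueue));

    // Work on the two queues can overlap; fences coordinate them.
}
```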

MMO servers actually stream their object states to multiple client machines.

Client 1 machine requests a change to object state (user trigger) -------> transfer object state change request -------> server accepts the requested object state change and processes it -------> transfers the processed object state change to client 2 ... nth machines. Client 2 ... nth machines update their object states.
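A toy sketch of that flow in a single process (hypothetical types, no real networking) to make the roles concrete; the server owns the simulation, and clients only apply the processed results:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical replicated state: what the server tracks and broadcasts.
struct ObjectState { int id; float x, y, z; bool destroyed; };

// What a client sends when the user triggers something.
struct StateChangeRequest { int id; bool destroy; };

int main() {
    std::vector<ObjectState> serverWorld = {{1, 0.f, 0.f, 0.f, false},
                                            {2, 5.f, 0.f, 0.f, false}};
    std::vector<ObjectState> client2World = serverWorld; // replicated copy

    // Client 1 requests a change; the server validates and processes it
    // (this is where the heavy physics would run).
    StateChangeRequest req{2, true};
    for (auto& obj : serverWorld)
        if (obj.id == req.id) obj.destroyed = req.destroy;

    // The server then broadcasts only the processed state changes;
    // each client applies them and renders the matching effects locally.
    client2World = serverWorld;
    for (const auto& obj : client2World)
        std::printf("object %d destroyed=%d\n", obj.id, obj.destroyed);
}
```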

Big MMOs have regional servers to keep latencies low.

The reason I focused on Eve Online is their large scale. Other, lesser MMOs are glorified double-digit multiplayer games, e.g. 64 to 80 players.

For Crackdown 3, they scale up the "process object state changes" stage.

At this time, Microsoft Azure doesn't support cloud AMD/NVIDIA GPGPU, hence it's all Intel CPUs, i.e. each Xeon socket can scale up to 18 cores. Microsoft has 100,000s of Intel Xeon servers.

Since Microsoft Azure doesn't support cloud AMD/NVIDIA GPGPU, your claim that Crackdown 3 streams in GPU-rendered H.264 video is false, i.e. it's not the same as OnLive with its cloud GPUs and operational raster hardware.


#264 ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

Again, the statement

"DX 11.X already has a significantly higher number of draw calls than DX 11. Therefore DX 11.X already possesses most of the significant updates present in DX 12 API."

didn't factor in Async compute.

DX 11.X reduces the draw call cost, which is different from proper MT scaling and async compute dispatch.

"Day one" for XBO? Bullshit. A crippled async feature in March 2014 is NOT "day one".

XDK sources debunked the "day one" async-enabled claim.

Bullshit, man, you can't have it both ways. How the fu** are you supposed to bypass the ROPs with compute shaders if compute shaders weren't supported? It's a joke; in March 2014 it got exposed, so STFU and drop your pathetic fanboy excuses. Funny how you post things to work around something, but then you yourself don't want to admit they're valid.

@ronvalencia said:

Bullshit. You are making arguments without knowledge.

XBO is processing server-generated destruction sequence results. This is like processing a pre-baked destruction sequence, but this "pre-baked" destruction sequence is being dynamically generated by the server.

A 7770 can run Eve Online with the server doing most of the object tracking and physics calculations.

Unlike you, I have shipped a commercial fat-client/fat-server app, so fack off.

And streamed to the Xbox One, which is why the single player is totally different.

Object tracking is not the same as streaming power to the Xbox One, fool. That kind of crap has been done for years in MMOs and has never required a huge cloud. Likewise, you don't need the cloud to do destruction either; that can be handled by the GPU just fine.

The only crap you have shipped to us is your blind bias for MS and the crappy Xbox One. Like I already told you, I couldn't care less about your wonderful fantasy-island theories; fact is all of them have turned to dust, and the Xbox One will trail the PS4 all gen long. The gap is too big to overcome, period.

I bet my account that by 2018 the Xbox One will still be behind the PS4 and will still have the same problems it is facing now...

You shut the fack up. You can't even correctly calculate the PS4's simple memory bandwidth.


#265 ToScA-
Member since 2006 • 5782 Posts

DX12 will probably make a difference, but I don't think it'll be anywhere near 50% and 20%, respectively. Perhaps a tenth of that (5% and 2%), which is still better than nothing :p


#266 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

There are two shader compute systems in AMD GCN: sync compute and async compute.

Sync compute is processed by the GCP unit(s).

Async compute is processed by the ACE units.

Both compute methods can write to texture UAVs, which use the TMUs.

MMO servers actually stream their object states to multiple client machines.

Client 1 machine requests a change to object state (user trigger) -------> transfer object state change request -------> server accepts the requested object state change and processes it -------> transfers the processed object state change to client 2 ... nth machines. Client 2 ... nth machines update their object states.

Big MMOs have regional servers to keep latencies low.

The reason I focused on Eve Online is their large scale. Other, lesser MMOs are glorified double-digit multiplayer games, e.g. 64 to 80 players.

For Crackdown 3, they scale up the "process object state changes" stage.

At this time, Microsoft Azure doesn't support cloud AMD/NVIDIA GPGPU, hence it's all Intel CPUs, i.e. each Xeon socket can scale up to 18 cores. Microsoft has 100,000s of Intel Xeon servers.

Since Microsoft Azure doesn't support cloud AMD/NVIDIA GPGPU, your claim that Crackdown 3 streams in GPU-rendered H.264 video is false, i.e. it's not the same as OnLive with its cloud GPUs and operational raster hardware.

Go spin elsewhere; it's clear as day, compute shaders are stated right there, very CLEAR.

So async shaders were exposed in March 2014 like I said, and your own gif states so.

MMOs have been around for longer than MS's cloud; you don't need a cloud to do that, a couple of servers would do. In fact, PlanetSide 2 is an MMO and it runs on PS4.

Nothing there that can't be done on the PS4 as well, dude.

Dude, drop the excuses. For you, everyone is wrong but you, and considering you even claim a pathetic game like Alien Isolation is CPU-bound, contrary to what the whole internet says, I'd say most of the crap you argue is wrong.

@ronvalencia said:

You shut the fack up. You can't even correctly calculate the PS4's simple memory bandwidth.

Look who's talking: the fool who claims the Xbox One GPU has 68GB/s + 104GB/s and the rest of the system has nothing... lol


#267  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

There are two shader compute systems in AMD GCN: sync compute and async compute.

Sync compute is processed by the GCP unit(s).

Async compute is processed by the ACE units.

Both compute methods can write to texture UAVs, which use the TMUs.

MMO servers actually stream their object states to multiple client machines.

Client 1 machine requests a change to object state (user trigger) -------> transfer object state change request -------> server accepts the requested object state change and processes it -------> transfers the processed object state change to client 2 ... nth machines. Client 2 ... nth machines update their object states.

Big MMOs have regional servers to keep latencies low.

The reason I focused on Eve Online is their large scale. Other, lesser MMOs are glorified double-digit multiplayer games, e.g. 64 to 80 players.

For Crackdown 3, they scale up the "process object state changes" stage.

At this time, Microsoft Azure doesn't support cloud AMD/NVIDIA GPGPU, hence it's all Intel CPUs, i.e. each Xeon socket can scale up to 18 cores. Microsoft has 100,000s of Intel Xeon servers.

Since Microsoft Azure doesn't support cloud AMD/NVIDIA GPGPU, your claim that Crackdown 3 streams in GPU-rendered H.264 video is false, i.e. it's not the same as OnLive with its cloud GPUs and operational raster hardware.

Go spin elsewhere; it's clear as day, compute shaders are stated right there, very CLEAR.

So async shaders were exposed in March 2014 like I said, and your own gif states so.

MMOs have been around for longer than MS's cloud; you don't need a cloud to do that, a couple of servers would do. In fact, PlanetSide 2 is an MMO and it runs on PS4.

Nothing there that can't be done on the PS4 as well, dude.

Dude, drop the excuses. For you, everyone is wrong but you, and considering you even claim a pathetic game like Alien Isolation is CPU-bound, contrary to what the whole internet says, I'd say most of the crap you argue is wrong.

@ronvalencia said:

You shut the fack up. You can't even correctly calculate the PS4's simple memory bandwidth.

Look who's talking: the fool who claims the Xbox One GPU has 68GB/s + 104GB/s and the rest of the system has nothing... lol

Fack off cow dung.

For the March 2014 XDK, all members of this structure must be 0. Only one compute context can be created at a time. Future XDKs will expose more than one compute context at a time, submitting to different hardware queues. The compute context workloads run at low priority compared to graphics immediate context workloads. In a future release, you will be able to adjust priority for each compute context independently, and change a context's priority dynamically

Your "day one" assertion is a load of bull$hit.

http://www.edge-online.com/news/power-struggle-the-real-differences-between-ps4-and-xbox-one-performance/

"Xbox One does, however, boast superior performance to PS4 in other ways. Let's say you are using procedural generation or raytracing via parametric surfaces, that is, using a lot of memory writes and not much texturing or ALU; Xbox One will likely be faster," said one developer.


#268 deactivated-5a30e101a977c
Member since 2006 • 5970 Posts

Damn tormentos getting owned again


#269 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

Fack off cow dung.

For the March 2014 XDK, all members of this structure must be 0. Only one compute context can be created at a time. Future XDKs will expose more than one compute context at a time, submitting to different hardware queues. The compute context workloads run at low priority compared to graphics immediate context workloads. In a future release, you will be able to adjust priority for each compute context independently, and change a context's priority dynamically

Your "day one" assertion is a load of bull$hit.

http://www.edge-online.com/news/power-struggle-the-real-differences-between-ps4-and-xbox-one-performance/

"Xbox One does, however, boast superior performance to PS4 in other ways. Let's say you are using procedural generation or raytracing via parametric surfaces, that is, using a lot of memory writes and not much texturing or ALU; Xbox One will likely be faster," said one developer.

WTF does that screen even have to do with what I said, you fool?

So I talk about async being exposed and you quote async for PS4... lol

I quoted DF on it, and even your own gif stated it, you fool. How the fu** would you use compute shaders to bypass the ROPs if async compute wasn't exposed? Hahaha

That whole part is irrelevant; you don't make a game that uses just procedural generation or raytracing. A game consists of many different things, which is why tessellation being faster on XBO means nothing as well, because the Xbox One GPU has a lower peak than the PS4 and tessellation has a performance hit.

One source told Edge that memory reads are 40-50% faster in PS4, and the ALU (Arithmetic Logic Unit) is 50% faster.

That's what your own link says...

40 to 50% faster memory reads on PS4 and a 50% faster ALU as well..

Don't fight me; it's your link...


#271 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

Hahahaha, no dev has associated their name with "One source told Edge that memory reads are 40-50% faster in PS4, and the ALU (Arithmetic Logic Unit) is 50% faster." assertion.

PS4's ALU is 1.4X faster than XBO or XBO's ALU is 70 percent of PS4.

HAHAHAHAHAHAHAAHAHAAHHAAHAHAAHAHAHAAHAHAHAAHAHAHAHAAHAHAHAAHAHAHAAHAHAAHAHAHAAHHAAHAAHAHAAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA..

I knew it, I knew it... As soon as I quoted the part that gives the edge to the PS4, you would downplay your own source.

You are a HYPOCRITE, boy, big time, one of the worst I have seen in this place. So the info is good as long as it defends the Xbox One, but as soon as I quote the PS4 part, yeah, the info is not valid. What a piece of fanboy you are.

Is there any doubt that Ronvalencia is a true lemming? Hahahahaa

No dev put their name to that, fool, because NDAs were in place for the Xbox One, which basically prohibited anyone from talking about the hardware, because MS didn't want head-to-head comparisons. The NDA was in place for some time, and I think it still is.

You want to know something? The nameless source behind that claim about 40-50% faster memory reads and a 50% faster ALU is the same one who said that comment you quote about the Xbox One, you fool. It's the same source, so if you discredit my quote because no developer name is associated with it, you discredit your own quote as well, fool, as no developer was associated with it either. Funny thing is, that was before launch, and what happened at launch, tell me, Ron?

Yeah 720p vs 1080p...lol

The leaks were right...lol


#273 ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

Fack off cow dung.

For the March 2014 XDK, all members of this structure must be 0. Only one compute context can be created at a time. Future XDKs will expose more than one compute context at a time, submitting to different hardware queues. The compute context workloads run at low priority compared to graphics immediate context workloads. In a future release, you will be able to adjust priority for each compute context independently, and change a context's priority dynamically

Your "day one" assertion is a load of bull$hit.

http://www.edge-online.com/news/power-struggle-the-real-differences-between-ps4-and-xbox-one-performance/

"Xbox One does, however, boast superior performance to PS4 in other ways. Let's say you are using procedural generation or raytracing via parametric surfaces, that is, using a lot of memory writes and not much texturing or ALU; Xbox One will likely be faster," said one developer.

WTF does that screen even have to do with what I said, you fool?

So I talk about async being exposed and you quote async for PS4... lol

I quoted DF on it, and even your own gif stated it, you fool. How the fu** would you use compute shaders to bypass the ROPs if async compute wasn't exposed? Hahaha

That whole part is irrelevant; you don't make a game that uses just procedural generation or raytracing. A game consists of many different things, which is why tessellation being faster on XBO means nothing as well, because the Xbox One GPU has a lower peak than the PS4 and tessellation has a performance hit.

One source told Edge that memory reads are 40-50% faster in PS4, and the ALU (Arithmetic Logic Unit) is 50% faster.

That's what your own link says...

40 to 50% faster memory reads on PS4 and a 50% faster ALU as well..

Don't fight me; it's your link...

For "One source told Edge that memory reads are 40-50% faster in PS4, and the ALU (Arithmetic Logic Unit) is 50% faster." statement

1. At 800 Mhz XBO GPU, PS4 being 1.50X over XBO is true. At 853Mhz XBO GPU, PS4's ALU is 1.4X faster than XBO or XBO's ALU is 70 percent of PS4.

2. PS4 has the initial read advantage i.e. large GDDR5 memory pool > large DDR3 memory pool is true.

PS4's split read and write of 65 GB/s from Sony's 130 GB/s still PS4's read being faster than XBO's 50-54 GB/s.

PS4's split read and write of 88 GB/s from 176 GB/s theoretical still PS4's read being faster than XBO's 68 GB/s theoretical.

XBO's 50 GB/s * 1.4X = 70 GB/s for PS4.

XBO's 50 GB/s * 1.5X = 75 GB/s for PS4.

The statement was made with read and write.
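Laying that arithmetic out (using the figures in this post; the even read/write split is the post's own assumption):

\[
\begin{aligned}
176/2 &= 88\ \text{GB/s (PS4 theoretical reads)} > 68\ \text{GB/s (XBO theoretical)}\\
130/2 &= 65\ \text{GB/s (PS4 effective reads)} > 50\text{-}54\ \text{GB/s (XBO)}\\
50 \times 1.4 &= 70\ \text{GB/s}, \qquad 50 \times 1.5 = 75\ \text{GB/s}
\end{aligned}
\]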


#274 ronvalencia
Member since 2008 • 29612 Posts
@tormentos said:
@ronvalencia said:

Hahahaha, no dev has associated their name with "One source told Edge that memory reads are 40-50% faster in PS4, and the ALU (Arithmetic Logic Unit) is 50% faster." assertion.

PS4's ALU is 1.4X faster than XBO or XBO's ALU is 70 percent of PS4.

HAHAHAHAHAHAHAAHAHAAHHAAHAHAAHAHAHAAHAHAHAAHAHAHAHAAHAHAHAAHAHAHAAHAHAAHAHAHAAHHAAHAAHAHAAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA..

I knew it, I knew it... As soon as I quoted the part that gives the edge to the PS4, you would downplay your own source.

You got the old buffered version.


#275  Edited By ronvalencia
Member since 2008 • 29612 Posts
@tormentos said:
@ronvalencia said:

Fack off cow dung.

For the March 2014 XDK, all members of this structure must be 0. Only one compute context can be created at a time. Future XDKs will expose more than one compute context at a time, submitting to different hardware queues. The compute context workloads run at low priority compared to graphics immediate context workloads. In a future release, you will be able to adjust priority for each compute context independently, and change a context's priority dynamically

Your "day one" assertion is a load of bull$hit.

http://www.edge-online.com/news/power-struggle-the-real-differences-between-ps4-and-xbox-one-performance/

"Xbox One does, however, boast superior performance to PS4 in other ways. Let's say you are using procedural generation or raytracing via parametric surfaces, that is, using a lot of memory writes and not much texturing or ALU; Xbox One will likely be faster," said one developer.

WTF does that screen even have to do with what I said, you fool?

So I talk about async being exposed and you quote async for PS4... lol

I quoted DF on it, and even your own gif stated it, you fool. How the fu** would you use compute shaders to bypass the ROPs if async compute wasn't exposed? Hahaha

That whole part is irrelevant; you don't make a game that uses just procedural generation or raytracing. A game consists of many different things, which is why tessellation being faster on XBO means nothing as well, because the Xbox One GPU has a lower peak than the PS4 and tessellation has a performance hit.

One source told Edge that memory reads are 40-50% faster in PS4, and the ALU (Arithmetic Logic Unit) is 50% faster.

That's what your own link says...

40 to 50% faster memory reads on PS4 and a 50% faster ALU as well..

Don't fight me; it's your link...

Your "So Async shaders was expose on March 2014 like i say and your own gift state so." statement is flawed since exposed Async features are not completed.

My initial reply to "40 to 50% memory reads on PS4 and 50% faster ALU as well." was the old buffered version.

For "One source told Edge that memory reads are 40-50% faster in PS4, and the ALU (Arithmetic Logic Unit) is 50% faster." statement

1. At 800 Mhz XBO GPU, PS4 being 1.50X over XBO is true. At 853Mhz XBO GPU, PS4's ALU is 1.4X faster than XBO or XBO's ALU is 70 percent of PS4.

2. PS4 has the initial read advantage i.e. large GDDR5 memory pool > large DDR3 memory pool is true.

PS4's split read and write of 65 GB/s from Sony's 130 GB/s still PS4's read being faster than XBO's 50-54 GB/s.

PS4's split read and write of 88 GB/s from 176 GB/s theoretical still PS4's read being faster than XBO's 68 GB/s theoretical.

XBO's 50 GB/s * 1.4X = 70 GB/s for PS4.

XBO's 50 GB/s * 1.5X = 75 GB/s for PS4.

The statement was made with read and write.


#276 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

For "One source told Edge that memory reads are 40-50% faster in PS4, and the ALU (Arithmetic Logic Unit) is 50% faster." statement

1. At an 800 MHz XBO GPU clock, the PS4 being 1.5X over the XBO is true. At an 853 MHz XBO GPU clock, the PS4's ALU is 1.4X the XBO's, or the XBO's ALU is 70 percent of the PS4's.

2. The PS4 has the initial read advantage, i.e. GDDR5 > DDR3 is true.

1. 18 CU vs 12 CU.

50% of 12 CU is 6 CU.

The advantage in CUs the PS4 has over the Xbox One is 6 CU.

This ^^ is where the 50% ALU claim comes from.

Also, when the 50% claim was made, the Xbox One was still 800 MHz, not 853.

And that 30%, plus the straightforward memory structure and true HSA design, makes the advantage even bigger.

Project Cars has 40% more pixels plus up to 65% faster frames; that gap isn't representative of 30% more power, dude, that is a combined gap of 105%. Worse, I am not even taking into account that the Xbox One has no extra temporal AA, which makes the gap even bigger.

Dude, how much is 24 FPS + 65%? Do the math.

And the Xbox One even uses an extra CPU core in this game.

720p vs 1080p is a 100+% gap; 30 vs 60 FPS is a 100% gap as well. All of those have been recorded vs the Xbox One.

So that 30% you claim is producing some big-ass gaps. This is why I tell you that theoretical performance means shit; it's the end result that matters, dude.
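Doing that math, a 65% frame-rate advantage on a 24 FPS baseline is

\[
24 \times 1.65 \approx 39.6\ \text{FPS},
\]

and if the resolution and frame-rate advantages are treated as multiplicative pixel throughput rather than simply added, \(1.40 \times 1.65 \approx 2.31\), i.e. roughly a 131% gap rather than the 105% quoted above.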


#277 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

You got the old buffered version.

No I did not..

http://www.gamespot.com/forums/games-discussion-1000000/edge-power-struggle-the-real-differences-between-p-29447784/

This is a thread from 2 years ago where the original link and info were posted, and yes, the nameless source who stated the PS4 part is the same one who stated the Xbox One part.. lol

You discredit my quote, you also discredit yours... Hahahaaha

@ronvalencia said:

Your "So Async shaders was expose on March 2014 like i say and your own gift state so." statement is flawed since exposed Async features are not completed.

My initial reply to "40 to 50% memory reads on PS4 and 50% faster ALU as well." was the old buffered version.

For "One source told Edge that memory reads are 40-50% faster in PS4, and the ALU (Arithmetic Logic Unit) is 50% faster." statement

1. At 800 Mhz XBO GPU, PS4 being 1.50X over XBO is true. At 853Mhz XBO GPU, PS4's ALU is 1.4X faster than XBO or XBO's ALU is 70 percent of PS4.

2. PS4 has the initial read advantage i.e. large GDDR5 memory pool > large DDR3 memory pool is true.

PS4's split read and write of 65 GB/s from Sony's 130 GB/s stillPS4's read being faster than XBO's 50-54 GB/s.

PS4's split read and write of 88 GB/s from 176 GB/s theoretical still PS4's read being faster than XBO's 68 GB/s theoretical.

XBO's 50 GB/s * 1.4X = 70 GB/s for PS4.

XBO's 50 GB/s * 1.5X = 75 GB/s for PS4.

The statement was made with read and write.

The PS4 doesn't have 130GB/s, and every time you say that I will quote this source..

How would the unified system architecture and 8GB GDDR5 RAM help in making a better game? Gilray stated that, “It means we don’t have to worry so much about stuff, the fact that the memory operates at around 172GB/s is amazing, so we can swap stuff in and out as fast as we can without it really causing us much grief.”

Read more at http://gamingbolt.com/oddworld-inhabitants-dev-on-ps4s-8gb-gddr5-ram-fact-that-memory-operates-at-172gbs-is-amazing#oL8LqrkB3O7SjyWg.99

And then I'll laugh at you for being a hypocrite fanboy who only gives credit to things that favor the Xbox One.. Hahahaaaaaaaaaaaaaa

172GB/s, not 176GB/s, which is the peak, and not 130GB/s; so he was getting 172GB/s.. lol


#278 ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

For "One source told Edge that memory reads are 40-50% faster in PS4, and the ALU (Arithmetic Logic Unit) is 50% faster." statement

1. At an 800 MHz XBO GPU clock, the PS4 being 1.5X over the XBO is true. At an 853 MHz XBO GPU clock, the PS4's ALU is 1.4X the XBO's, or the XBO's ALU is 70 percent of the PS4's.

2. The PS4 has the initial read advantage, i.e. GDDR5 > DDR3 is true.

1. 18 CU vs 12 CU.

50% of 12 CU is 6 CU.

The advantage in CUs the PS4 has over the Xbox One is 6 CU.

This ^^ is where the 50% ALU claim comes from.

Also, when the 50% claim was made, the Xbox One was still 800 MHz, not 853.

And that 30%, plus the straightforward memory structure and true HSA design, makes the advantage even bigger.

Project Cars has 40% more pixels plus up to 65% faster frames; that gap isn't representative of 30% more power, dude, that is a combined gap of 105%. Worse, I am not even taking into account that the Xbox One has no extra temporal AA, which makes the gap even bigger.

Dude, how much is 24 FPS + 65%? Do the math.

And the Xbox One even uses an extra CPU core in this game.

720p vs 1080p is a 100+% gap; 30 vs 60 FPS is a 100% gap as well. All of those have been recorded vs the Xbox One.

So that 30% you claim is producing some big-ass gaps. This is why I tell you that theoretical performance means shit; it's the end result that matters, dude.

At the same GPU clock speed, 1.5X (aka 50 percent stronger) of 12 CU is 18 CU. The XBO's GPU clock speed was later increased to 853 MHz, hence it's 1.4X (aka 40 percent stronger).
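The ratio falls straight out of CU count times clock; each GCN CU has 64 ALU lanes doing a fused multiply-add (2 flops) per clock:

\[
\frac{18 \times 800}{12 \times 800} = 1.50, \qquad \frac{18 \times 800}{12 \times 853} \approx 1.41
\]

or in absolute terms, \(18 \times 64 \times 2 \times 0.8\ \text{GHz} \approx 1.84\) TFLOPS (PS4) vs \(12 \times 64 \times 2 \times 0.853\ \text{GHz} \approx 1.31\) TFLOPS (XBO).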

Your screenshot shows the XBO has an additional car-to-car collision, hence it's not even apples to apples anymore.

Counter-example for the night scene.


#279  Edited By ronvalencia
Member since 2008 • 29612 Posts
@tormentos said:
@ronvalencia said:

You got the old buffered version.

No I did not..

http://www.gamespot.com/forums/games-discussion-1000000/edge-power-struggle-the-real-differences-between-p-29447784/

This is a thread from 2 years ago where the original link and info were posted, and yes, the nameless source who stated the PS4 part is the same one who stated the Xbox One part.. lol

You discredit my quote, you also discredit yours... Hahahaaha

@ronvalencia said:

Your "So Async shaders was expose on March 2014 like i say and your own gift state so." statement is flawed since exposed Async features are not completed.

My initial reply to "40 to 50% memory reads on PS4 and 50% faster ALU as well." was the old buffered version.

For "One source told Edge that memory reads are 40-50% faster in PS4, and the ALU (Arithmetic Logic Unit) is 50% faster." statement

1. At 800 Mhz XBO GPU, PS4 being 1.50X over XBO is true. At 853Mhz XBO GPU, PS4's ALU is 1.4X faster than XBO or XBO's ALU is 70 percent of PS4.

2. PS4 has the initial read advantage i.e. large GDDR5 memory pool > large DDR3 memory pool is true.

PS4's split read and write of 65 GB/s from Sony's 130 GB/s stillPS4's read being faster than XBO's 50-54 GB/s.

PS4's split read and write of 88 GB/s from 176 GB/s theoretical still PS4's read being faster than XBO's 68 GB/s theoretical.

XBO's 50 GB/s * 1.4X = 70 GB/s for PS4.

XBO's 50 GB/s * 1.5X = 75 GB/s for PS4.

The statement was made with read and write.

The PS4 doesn't have 130GB/s, and every time you say that I will quote this source..

How would the unified system architecture and 8GB GDDR5 RAM help in making a better game? Gilray stated that, “It means we don’t have to worry so much about stuff, the fact that the memory operates at around 172GB/s is amazing, so we can swap stuff in and out as fast as we can without it really causing us much grief.”

Read more at http://gamingbolt.com/oddworld-inhabitants-dev-on-ps4s-8gb-gddr5-ram-fact-that-memory-operates-at-172gbs-is-amazing#oL8LqrkB3O7SjyWg.99

And then I'll laugh at you for being a hypocrite fanboy who only gives credit to things that favor the Xbox One.. Hahahaaaaaaaaaaaaaa

172GB/s, not 176GB/s, which is the peak, and not 130GB/s; so he was getting 172GB/s.. lol

Where are your PS4 1920x1080 MSAA 4X results?

Your PS4 doesn't even show MSAA 4X results like the 7950's, at 71 percent of its 240 GB/s with MSAA 4X.

The memory may operate at around 172 GB/s with an effective memory controller bandwidth of 130 GB/s. LOL.

Sony's 130 GB/s effective memory bandwidth for the PS4 matches AMD PC GDDR5 memory controller efficiency, hence it stands.


#280 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

At the same GPU clock speed, 1.5X (aka 50 percent stronger) of 12 CU is 18 CU. The XBO's GPU clock speed was later increased to 853 MHz, hence it's 1.4X (aka 40 percent stronger).

Your screenshot shows the XBO has an additional car-to-car collision, hence it's not even apples to apples anymore.

Counter-example for the night scene.

What? Hahahaaaaaaaaaaaaaaaaaaaaaa

Count the cars in my screen that you can see on the Xbox One side.

I'll help: the PS4 has 2 cars near and 5 more in the distance.

The Xbox One has 2 cars near and 5 in the distance... lol

How about here?

Or here: more cars on the PS4 side, a 9 FPS difference.

26% faster frames + 44% more pixels + extra temporal AA, so how would you describe this gap?

I see 70% without the AA taken into the equation.

Hell, pixels alone are 44%, which is more than the 30% gap you hold to.
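For what it's worth, if the two advantages are combined multiplicatively as pixel throughput rather than added:

\[
1.44 \times 1.26 \approx 1.81,
\]

i.e. roughly an 81% gap even before the temporal AA is counted.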


#281  Edited By EG101
Member since 2007 • 2091 Posts

@tormentos:

The problem with using Project Cars as an example is that PCars is a DX11 game on the XB1, while on PS4 most devs are coding to the metal and making use of all of the PS4's resources. It is well documented that the XB1 was designed with DX12 in mind and that DX11 hindered game performance on the XB1. DX12 will enable the XB1 to use the CPU and GPU more efficiently: the CPU will use each core more efficiently, and on the GPU side, things like async compute and tiled rendering will be at devs' disposal. Of course, at the end of the day the PS4 has a bigger GPU, and that means games will probably look better on it, but rest assured the difference will only get smaller.

I have a question for you, Tormentos.

Why, in the images you posted, is there horrible shimmering in the PS4 version of Project Cars, while in the XB1 version the images are clean and crisp???


#282 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

Where are your PS4 1920x1080 MSAA 4X results?

Your PS4 doesn't even show MSAA 4X results like the 7950's, at 71 percent of its 240 GB/s with MSAA 4X.

The memory may operate at around 172 GB/s with an effective memory controller bandwidth of 130 GB/s. LOL.

Sony's 130 GB/s effective memory bandwidth for the PS4 matches AMD PC GDDR5 memory controller efficiency, hence it stands.

Hahahaa, and there he goes running again... hahahaaha

Where are the Xbox One next-gen graphics with MSAA 4X?...

The PS4 game with MSAA 4X looks incredible; the one on Xbox One looks like crap... lol

Of course not, you blind fool; how the fu** is the PS4 going to show MSAA 4X results like the 7950's?

MSAA has a damn huge hit to performance. Not only are you comparing two GPUs where one has a shared 176GB/s vs one that has 240GB/s to itself, but you are comparing two GPUs with a 10 CU difference. Hell, the 7950 has a 384-bit bus; how the fu** is the PS4 supposed to match that? Does the full 7870 match it? How the fu** will the PS4?

The 7950 has more bandwidth, a bigger bus, and 10 more CUs; there is no fu**ing way the PS4 can match it in anything. Hell, the 7870 can't either, and it is stronger than the PS4.

So stop using crappy-ass arguments to defend the Xbox One, which can't even beat a 7770, let alone get close to a 7950, just because a pathetic-looking game somehow has MSAA 4X. The PS4 does supersampling on Lego The Hobbit; can the Xbox One do that? Render higher than 1080p? The PS4 renders 1920x1280 and supersamples down to 1080p; should I claim the PS4 does supersampling like a 7950 or a stronger GPU?

The memory operates at 172GB/s; the rest you are adding. Unless you quote Sony or a developer stating 130GB/s is the most the PS4 can get, you don't have a point. Worse, what makes you think that hasn't changed? The PS4's memory controllers can change; it's a console, not a PC, so maybe they have already overcome the CPU eating bandwidth disproportionately.

Oh wait, I forgot, only the XBOX ONE can improve.. lol

Oh, and then you are forgetting that the PS4 has a direct link between GPU and CPU which moves another 20GB/s... But when it comes to the PS4, yeah, you have memory problems.


#283 tormentos
Member since 2003 • 33784 Posts

@EG101 said:

@tormentos:

The problem with using Project Cars as an example is that PCars is a DX11 game on the XB1, while on PS4 most devs are coding to the metal and making use of all of the PS4's resources. It is well documented that the XB1 was designed with DX12 in mind and that DX11 hindered game performance on the XB1. DX12 will enable the XB1 to use the CPU and GPU more efficiently: the CPU will use each core more efficiently, and on the GPU side, things like async compute and tiled rendering will be at devs' disposal. Of course, at the end of the day the PS4 has a bigger GPU, and that means games will probably look better on it, but rest assured the difference will only get smaller.

I have a question for you, Tormentos.

Why, in the images you posted, is there horrible shimmering in the PS4 version of Project Cars, while in the XB1 version the images are clean and crisp???

The Xbox One is not DX11; it's DX11.X, which basically has no overhead vs DX11 on PC.

Second, the developer of the game itself stated that the 7% gain DX12 brings to the Xbox One also applies to the PS4 if the code is applied there. That means they are running on even fields; well, not so even, since the Xbox One uses 7 cores vs the PS4's 6.

Dude, the Xbox One wasn't designed with DX12 in mind; that is some damage-control crap. Most of DX12 is console-like optimization brought to PC, and since it is console-like optimization, using the code on the Xbox One will give you minimal gains, because most of the code was on the Xbox One first, before DX12 even hit PC.

In fact, GCN is a DX11 GPU, and that is the GPU inside the Xbox One.

Async compute has been exposed on the Xbox One since March 2014, and it is not a problem here, since the PS4 isn't using async shaders for Project Cars either; so again, both versions are on even fields.

I don't think the gap will get smaller, since the PS4 was modified for compute and the Xbox One wasn't, so I think the gap could grow more as more developers push async shaders.

To answer your question: that is the temporal AA the PS4 has, which will show like that when you pause; it also shows something similar to ghosting, which affects the game when you pause it, not in motion. But did you see how much darker the Xbox One is, to the point that the lights don't penetrate as far as they should?

Yeah, that is the sharpening filter on the Xbox One used for upscaled games; it makes the game too dark.

The rest is just rain being lifted by the cars.


#284 EG101
Member since 2007 • 2091 Posts

@tormentos:

I know it's TAA. TAA ruins the image quality IMO, and the ghosting effect it creates is annoying.


#285  Edited By 04dcarraher
Member since 2004 • 23832 Posts

El tormentos........

1. You're ignoring the fact that DX11.X shares the same multithreading limitations as DX11......

2. That's a guess on the devs' part, since they haven't even touched it, and even if it is 7%, that blows "won't do anything" out of the water. Also, the PS4 does not have the same limitations as the X1 when talking to the GPU in Cars, and with the GPU being the main limiting factor in graphics performance, it wouldn't matter if the X1 had all eight cores available, since the game is more GPU-bound than CPU-bound.

3. When the X1 was being finalized, MS was working on the early stages of DX12, and features like draw bundles were incorporated into DX11.X. So yes and no, the X1 was designed to use a better API, i.e. DX12.

DX12 is not console optimization........ Even with a console-level API, the 360 still used only one thread on the CPU to handle GPU work, while from 2009 on DX11 had the ability to use deferred workloads. The X1 is basically using a modified version of DX11, which still has some of its inherent limitations. Console optimization is the work of adjusting the code, assets, and usage to fit that particular system's abilities. An API is the set of protocols that allows the software and hardware to interact more closely. DX12 is a modernization of the hardware and software interaction, which is not "console optimization". Don't confuse low overhead and low-level coding with console optimization.

4. GCN was designed around AMD's upcoming ideas for their Mantle API, and most modern APIs use the same feature abilities, including DX12, which means the X1 will be able to support most (not all) of the features from DX12.

5. Async is not fully available on the X1 yet. The hardware being there while the API denies the ability to use it is the reason why DX12 is needed. Wrong again, el tormentos: PS4 and X1 are not on even fields with Cars, since the PS4's GPU is much stronger than the X1's GPU.... and there's the fact that the PS4 is not using the same limited technique as the X1, since the PS4's command list build uses four cores in parallel, which bypasses DX11-type deferred multithreading (sketched below).

Once games start using the modern API features, you will see both consoles perform better than they do now, with most games, especially multiplats, currently following DX11 limits. You denying any positive gains from DX12 is just plain dumb, since those gains will translate to the PS4 as well, which will continue the PS4's hardware dominance.
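To illustrate the parallel command-list build mentioned in point 5, here is a minimal DX12-style sketch (device/PSO setup and synchronization details omitted; an illustration, not production code): each thread records into its own command list against its own allocator, and the lists are submitted together in one call, with no single-thread playback step.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC qDesc = {}; // defaults to a DIRECT queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qDesc, IID_PPV_ARGS(&queue));

    const int kThreads = 4; // cf. the PS4 building command lists on 4 cores
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(kThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
    std::vector<std::thread> workers;

    for (int i = 0; i < kThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each worker records into its own list: no shared context.
        workers.emplace_back([&, i] {
            // ... draw calls would be recorded on lists[i] here ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // All lists are handed to the queue in a single submission.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```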


#286 scatteh316
Member since 2004 • 10273 Posts

Jesus Christ at this thread! Let me add my input as a hermit who doesn't own either console.

1. Xbone was never designed with DX12 in mind. Designing a console is not a 5-minute job; it takes YEARS, and when Microsoft was in the prototyping stages for the Xbone, DX12 was not even a glimmer in their eyes.

2. DX12 will not add 50% more CPU and 20% more GPU power unless the developers are completely useless at coding (HIGHLY unlikely).

3. Console developers have always coded to the metal and have always coded as efficiently as possible; that has been true for every console ever released and has not changed with the release of the PS4 or Xbone.

4. Xbone will get API upgrades that are ever more efficient, and developers will always find new tricks to squeeze more from the machine.

5. PS4 will get API upgrades that are ever more efficient, and developers will always find new tricks to squeeze more from the machine.

6. Software alone will not make up for such a large gap in hardware performance. Make the gap smaller? Yes! Get rid of the gap entirely? Never.

The PS4 is just faster than the Xbone, pure and simple, and while software may help to reduce the gap, that gap will ALWAYS be there.

If sheer power is what you crave, then build a PC!!


#287 tormentos
Member since 2003 • 33784 Posts

@04dcarraher said:

El tormentos........

1. You're ignoring the fact that DX11.X shares the same multithreading limitations as DX11......

2. That's a guess on the devs' part, since they haven't even touched it, and even if it is 7%, that blows "won't do anything" out of the water. Also, the PS4 does not have the same limitations as the X1 when talking to the GPU in Cars, and with the GPU being the main limiting factor in graphics performance, it wouldn't matter if the X1 had all eight cores available, since the game is more GPU-bound than CPU-bound.

3. When the X1 was being finalized, MS was working on the early stages of DX12, and features like draw bundles were incorporated into DX11.X. So yes and no, the X1 was designed to use a better API, i.e. DX12.

DX12 is not console optimization........ Even with a console-level API, the 360 still used only one thread on the CPU to handle GPU work, while from 2009 on DX11 had the ability to use deferred workloads. The X1 is basically using a modified version of DX11, which still has some of its inherent limitations. Console optimization is the work of adjusting the code, assets, and usage to fit that particular system's abilities. An API is the set of protocols that allows the software and hardware to interact more closely. DX12 is a modernization of the hardware and software interaction, which is not "console optimization". Don't confuse low overhead and low-level coding with console optimization.

4. GCN was designed around AMD's upcoming ideas for their Mantle API, and most modern APIs use the same feature abilities, including DX12, which means the X1 will be able to support most (not all) of the features from DX12.

5. Async is not fully available on the X1 yet. The hardware being there while the API denies the ability to use it is the reason why DX12 is needed. Wrong again, el tormentos: PS4 and X1 are not on even fields with Cars, since the PS4's GPU is much stronger than the X1's GPU.... and there's the fact that the PS4 is not using the same limited technique as the X1, since the PS4's command list build uses four cores in parallel, which bypasses DX11-type deferred multithreading.

Once games start using the modern API features, you will see both consoles perform better than they do now, with most games, especially multiplats, currently following DX11 limits. You denying any positive gains from DX12 is just plain dumb, since those gains will translate to the PS4 as well, which will continue the PS4's hardware dominance.

1. The problem with your argument is simple: you think DX12's multithreaded commands will actually do something meaningful for the Xbox One, and they will not. DX12 is a combination of low-overhead features in one API; since the PC lacked all those features, from bundles to lower CPU overhead to higher draw calls and better multithreaded commands, all of them together make one big upgrade when they hit PC.

On the Xbox One this isn't the case, which is why all the threads about so-called big gains on the Xbox One have turned into dust, including the Project Cars one, where 30 to 40% was claimed.

@04dcarraher said:
@ronvalencia said:

That is with the 6 cores the PS4 can use; now the Xbox One is using 7 cores.

It's closer to 6.5 cores for XBO than 7 cores.

Deferred-thread serialization into the immediate thread is a CPU bottleneck.

bu....bu....butttt it's using 7 cores! and X1 is already using DX12! It's not going to do anything!

You and Trollvalencia were in that thread, on the very first page, already riding the 30 to 40% claims and making fun of me... lol

@ronvalencia said:

@tormentos:

The question AND topic are for frame rate issues for XBO and DX12 improvements.

PC (with AMD GPU) improvements are more than 40 percent, i.e. just switching to Windows 10 gains about a 40 percent FPS increase while STILL running DX11.

AMD hasn't released a driver specific to PCARS.

What Trollvalencia told me... hahaha: the topic and frames are issues for XBO DX12.

So what happened in that thread?

@tormentos said:
@ronvalencia said:

@tormentos:

The question AND topic are for frame rate issues for XBO and DX12 improvements.

PC (with AMD GPU) improvements are more than 40 percent, i.e. just switching to Windows 10 gains about a 40 percent FPS increase while STILL running DX11.

AMD hasn't released a driver specific to PCARS.

Hey, Lemmingvalencia, did you read that? Like I said, it was for PC, and what will improve with DX12 on the Xbox One? 7 freaking %... Hahahaaaaaaaaaaaa

Man, 7%, what is that, 2 frames? Because the game is not 60, so it's not 7% of 60 frames, but even if it was, what would that be?

Look how he says it's on PC where the lack of multithreading ability hurts them most, not on the Xbox One, so I have been right all fu**ing along... hahahaha

OWNED, all of you... damn, I feel great... hahahaaaaaaaaaaaaa

Argument from Ronvalencia about Slightly Mad Studios not knowing anything about DX12 coming in 5..4...3...2.....

Followed by a Frostbite quote about helping and a Metro developer quote.. lol

Hahahahaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.. All included, your sorry ass got owned...

It was epic seeing you damage-control the whole 30 to 40% fiasco when in reality 7% was the real benefit. Worse, the PS4 version would also see improvement if the same code were applied to it, which basically leaves you in the same fu**ing spot, with the Xbox One greatly behind.

You can say all the shit you want; fact is, every thread claiming a gain for the Xbox One from DX12 has turned into dust. We're still waiting for the Fable gains.. lol

2. Wait, are you stealing Ronvalencia's arguments now, or is he your alt account? Because he says the same shit: they don't know because they haven't touched it. HYPOCRITES. When they were claiming a 30 to 40% improvement you were eating it; both of you biased fools ate the whole 30 to 40%, and you even began to make fun of me... The thread is there, you blind hypocrite...

http://www.gamespot.com/forums/system-wars-314159282/project-cars-dev-dx12-can-boost-xbox-one-version-3-32085552/?page=2

Hahaha, please link me to where SMS state they didn't have the same problem on PS4 as on Xbox One. In fact both the 7% gain from optimizing the code and the 7% they claim DX12 would deliver bring gains to the PS4 too, so yeah, they were pretty even I'd say, and hell, the Xbox One even uses the 7th core.

Your excuses are a joke.

3-The hardware components of the Xbox One are DX11 era; GCN launched in 2011, dude. GTFO with the whole "when MS finished the Xbox One they were finalizing DX12". DX12 is the Xbox One's API with a few things missing. You don't get it because you are too emotionally invested already. Fact is DX12 = Mantle, and Mantle = console-like optimization on PC; from AMD onward, many others have recognized it as exactly that. Only fools like you believe DX12 is something totally new and revolutionary. Yeah, that is why AMD, which is a hardware company, beat MS to the launch line by 2 years..lol

The plan for DX12 is for it to exist as a console-like API, which means PC developers will have the option to gain more performance by developing games close to the metal.

http://www.techtimes.com/articles/4621/20140320/microsoft-announced-dx12-for-pc-mobile-and-xbox-one-details-and-everything-you-need-to-know.htm

Coding to the metal and lower CPU overhead: these are features of consoles and they always have been, my blind and in-denial friend.

On consoles, you can draw maybe 10,000 or 20,000 chunks of geometry in a frame, and you can do that at 30-60fps. On a PC, you can't typically draw more than 2-3,000 without getting into trouble with performance, and that's quite surprising - the PC can actually show you only a tenth of the performance if you need a separate batch for each draw call.

It's funny,' says AMD's worldwide developer relations manager of its GPU division, Richard Huddy. 'We often have at least ten times as much horsepower as an Xbox 360 or a PS3 in a high-end graphics card, yet it's very clear that the games don't look ten times as good. To a significant extent, that's because, one way or another, for good reasons and bad - mostly good, DirectX is getting in the way.' Huddy says that one of the most common requests he gets from game developers is: 'Make the API go away.'

http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

My argument is irrefutable, dude, you can't beat it. Denying what DX12 is, is a total joke: it is console-like optimization with some new features, but most of the low-level ones that increase frame rates are CPU based and were already on consoles.

Look how he names the Xbox 360 and says CONSOLES can do 10k or 20k draws when PC could not even do 3k without getting into trouble.
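To make those draw-call numbers concrete: instancing is one of the standard ways engines stay inside a per-frame call budget. A hedged sketch (illustrative function; the per-instance data binding is assumed to happen elsewhere):

```cpp
#include <d3d12.h>

// One instanced call below submits 10,000 rocks. Issuing 10,000 separate
// draws would pay the per-call CPU cost 10,000 times, which is exactly
// the budget problem Huddy is describing.
void DrawRockField(ID3D12GraphicsCommandList* cmd, UINT rockVertexCount)
{
    // Per-instance transforms are assumed to live in a buffer bound
    // earlier in the frame (binding code omitted for brevity).
    cmd->DrawInstanced(rockVertexCount, /*InstanceCount=*/10000, 0, 0);
}
```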

4-GCN supports Mantle, which AnandTech claims is the Xbox One's (or PS4's) API brought to PC.

What’s not being said, but what becomes increasingly hinted at as we read through AMD’s material, is not just that Mantle is a low level API, but rather Mantle is the low level API. As in it’s either a direct copy or a very close derivative of the Xbox One’s low level graphics API. All of the pieces are there; AMD will tell you from the start that Mantle is designed to leverage the optimization work done for games on the next generation consoles, and furthermore Mantle can even use the Direct3D High Level Shader Language (HLSL), the high level shader language Xbox One shaders will be coded against in the first place.

Let’s be very clear here: AMD will not discuss the matter let alone confirm it, so this is speculation on our part. But it’s speculation that we believe is well grounded. Based on what we know thus far, we believe Mantle is the Xbox One’s low level API brought to the PC.

http://www.anandtech.com/show/7371/understanding-amds-mantle-a-lowlevel-graphics-api-for-gcn

This was written before DX12 was even confirmed to exist.

The only one who can't follow the pointers here is you; it is clear what Mantle is and what DX12 also is.

DX12 does have some new things the consoles lack, but not all of them are useful, probably 1 or 2. For example, you can now use two different GPUs and take advantage of both, but that is something that will not help the Xbox One as it doesn't have 2 GPUs.

5-Async shaders are available on Xbox One already, and Tomb Raider is the first game to use them this December. You don't fu**ing build an async game in 2 months: the so-called DX12 hits the Xbox One in October, yet by December Tomb Raider will already use async shaders. That doesn't make any freaking sense whatsoever. The problem with you fools is that you believe that because no game uses it, it is not there. 99% of PS4 games out there don't support async shaders; does that mean the PS4 didn't have them? No, it just means Sony was ahead of MS in implementing it, because MS wasn't banking on that; they had a different TV vision and were not worried about async shaders. Sony, on the other hand, was, and could not stop talking about async compute and shaders; BF4, a launch title, has it, and so does Infamous, which released 4 months after launch. The implementation was there because Sony was pushing it.

By your and Trollvalencia's logic the Xbox One gets DX12 in October and in 1 month they will already have implemented it in Tomb Raider, when in October that game is probably going gold.

DX12 is on Xbox One now, and the fact that no developer has used async shaders doesn't mean it is not there; on PS4 almost no one has used it either, and it has been there for developers since before launch.

You people need to open your eyes and stop being so naive.
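For anyone wondering what "async compute" actually means at the API level, a minimal hedged D3D12-style sketch (illustrative; the queues, fence, and a compute list recorded with D3D12_COMMAND_LIST_TYPE_COMPUTE are assumed to exist already):

```cpp
#include <d3d12.h>

// A separate COMPUTE queue feeds the GPU's ACEs so compute work (e.g. the
// volumetric lighting discussed above) can overlap graphics work.
void KickAsyncCompute(ID3D12CommandQueue* computeQueue,
                      ID3D12CommandList* computeWork,
                      ID3D12Fence* fence, UINT64 fenceValue,
                      ID3D12CommandQueue* gfxQueue)
{
    // Compute runs on its own hardware queue, independent of graphics.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence, fenceValue);

    // Graphics only stalls at the point it actually consumes the results.
    gfxQueue->Wait(fence, fenceValue);
}
```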

#288  Edited By 04dcarraher
Member since 2004 • 23832 Posts
@tormentos said:

1-The problem with your argument is simple: you think DX12's multithreaded command submission will actually do something meaningful for the Xbox One, and it will not. DX12 is a combination of low-overhead features rolled into one API; since PC lacked all of those features, from bundles to lower CPU overhead to higher draw calls to better multithreaded command submission, together they make one big upgrade when they hit PC.

On Xbox One this isn't the case, which is why every thread about so-called big gains on Xbox One has turned into dust, including the Project Cars one where 30 to 40% was claimed.

@04dcarraher said:
@ronvalencia said:

It's closer to 6.5 cores for XBO than 7 cores.

Deferred-thread serialization into the immediate thread is a CPU bottleneck.

bu....bu....butttt its using 7 cores! and X1 is already using DX12! Its not going to do anything!

You and Trollvalencia were on the very first page of that thread, already riding the 30 to 40% claims and making fun of me...lol

The problem is you have no idea what you're talking about... and are ignorantly bashing and downplaying. lol, again saying DX12 won't do anything meaningful. O el tormentos, you couldn't be more incorrect ......

Removing single-threaded deferred GPU communication and incorporating async features isn't going to improve performance? Come on, you can't be this dumb and hypocritical ....... You praise PS4 accomplishments with games using async and not using the limited deferred-based workloads, yet you deny any positive additions coming to the X1 from finally being able to use the same features? Saying DX12 is a fix for a PC API missing all the new modern features, low overhead and true multithreading, fine and dandy; however, you ignore that X1's API is missing some DX12 features and true multithreading as well....

Why are you lumping hype, PR claims, facts, and educated guesses based on facts all together as lemming fanboy hype? Also, again, your Project Cars example is flawed and incorrect...... stop using it.

What's funny is that we never suggested the 20, 30 or 40% claims; we are providing facts and educated reasoning showing that you're wrong about "no improvements".

Once you start supplying facts, not half-truths and BS with blind ignorant bashing, then we will stop making fun of you and correcting why you're wrong.

Just say these words: DX12 will help the X1 fix its issues and will increase performance to a degree, but will not be a game changer in directly competing against the PS4.

Then all will be fine.

#289 tormentos
Member since 2003 • 33784 Posts

@scatteh316 said:

Jesus Christ at this thread! Let me add my input as a hermit who doesn't own either console.

1. Xbone was never designed with DX12 in mind. Designing a console is not a 5 minute job, it takes YEARS, and when Microsoft were in the prototyping stages for Xbone, DX12 was not even a glimmer in their eyes.

2. DX12 will not add 50% more CPU and 20% more GPU power unless the developers are completely useless at coding ( HIGHLY unlikely )

3. Console developers have always coded to the metal and as efficiently as possible; that has been true for every console ever released and has not changed with the release of PS4 or Xbone.

4. Xbone will get API upgrades that are ever more efficient, and developers will always find new tricks to squeeze more from the machine.

5. PS4 will get API upgrades that are ever more efficient, and developers will always find new tricks to squeeze more from the machine.

6. Software alone will not make up for such a large gap in hardware performance. Make the gap smaller? Yes! Get rid of the gap entirely? Never.

PS4 is just faster than Xbone, pure and simple, and while software may help to reduce the gap, that gap will ALWAYS be there.

If sheer power is what you crave then build a PC!!

One of the most on-the-spot posts I have seen here in a long time...

#290 tormentos
Member since 2003 • 33784 Posts

@04dcarraher said:

The problem is you have no idea what you're talking about... and are ignorantly bashing and downplaying. lol, again saying DX12 won't do anything meaningful. O el tormentos, you couldn't be more incorrect ......

Removing single-threaded deferred GPU communication and incorporating async features isn't going to improve performance? Come on, you can't be this dumb and hypocritical ....... You praise PS4 accomplishments with games using async and not using the limited deferred-based workloads, yet you deny any positive additions coming to the X1 from finally being able to use the same features? Saying DX12 is a fix for a PC API missing all the new modern features, low overhead and true multithreading, fine and dandy; however, you ignore that X1's API is missing some DX12 features and true multithreading as well....

Why are you lumping hype, claims, facts, and educated guesses based on facts all together as lemming fanboy hype? Also, again, your Project Cars example is flawed and incorrect......

What's funny is that we never suggested the 20, 30 or 40% claims; we are providing facts and educated reasoning showing that you're wrong about "no improvements".

Once you start supplying facts, not half-truths and BS with blind ignorant bashing, then we will stop making fun of you and correcting why you're wrong.

Just say these words: DX12 will help the X1 fix its issues and will increase performance to a degree, but will not be a game changer in directly competing against the PS4.

Then all will be fine.

I posted links backing me up; all you have is your opinion..

#291  Edited By 04dcarraher
Member since 2004 • 23832 Posts
@tormentos said:

I posted links backing me up; all you have is your opinion..

right...... most of your links say one thing, but you twist what they're saying to fit your argument..... just like the i3 vs i5 examples, or saying async shaders were exposed in March 2014 while ignoring they were not complete nor in use......

#292  Edited By 04dcarraher
Member since 2004 • 23832 Posts
@tormentos said:

On consoles, you can draw maybe 10,000 or 20,000 chunks of geometry in a frame, and you can do that at 30-60fps. On a PC, you can't typically draw more than 2-3,000 without getting into trouble with performance, and that's quite surprising - the PC can actually show you only a tenth of the performance if you need a separate batch for each draw call.

It's funny,' says AMD's worldwide developer relations manager of its GPU division, Richard Huddy. 'We often have at least ten times as much horsepower as an Xbox 360 or a PS3 in a high-end graphics card, yet it's very clear that the games don't look ten times as good. To a significant extent, that's because, one way or another, for good reasons and bad - mostly good, DirectX is getting in the way.' Huddy says that one of the most common requests he gets from game developers is: 'Make the API go away.'

http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

Look how he names the Xbox 360 and says CONSOLES can do 10k or 20k draws when PC could not even do 3k without getting into trouble.

4-GCN supports Mantle, which AnandTech claims is the Xbox One's (or PS4's) API brought to PC.

That interview was recanted and corrected by AMD......

Most games in 2011 were still on a DirectX 9 base, which has a set draw call limit (about 6,000 max) and is solely single-threaded.

Huddy's claim falls short; he is trying to pass the buck for either a poor hardware implementation on AMD's front that does not fit within the API spec (and as a result has to bend over backwards in its driver implementation), or he is trying to cover up bad driver code in general and wave the blame at someone else.

"Huddy added that part of what gets in the way isn't just DirectX or any other API, it can also be AMD's driver. 'It's very hard to build a driver that's very fast,' he said. 'There are two difficulties: the software and the hardware. But philosophically it's possible.'"

To this day AMD's DX11 base driver still falls short. While in Win10 with DX12 they're actually ahead of the curve, since their work on Mantle mirrors many of the same features/methods as DX12.

In 2011 going into 2012, AMD was getting ready to release their GCN architecture, which supported the upcoming Mantle API, and they were working on Mantle to some degree in that time frame.

The thing is that the 360 may do 10-20k chunks of geometry per frame; however, even with DirectX 9, a PC with a stronger CPU could overcome the overhead and process frames faster than the console, cancelling out any real advantage the 360 had over a comparable PC CPU. That is where many over-hype console optimization: when devs claim 2x performance vs comparable PC hardware, it's based on hardware that is on par with the console. By late 2006 PCs already had quad-core CPUs that were multiple times faster, and GPUs 3x faster, which provided performance and graphics above consoles by a considerable degree.

And many don't see the other side of the console-optimization coin: making compromises in a closed, resource-limited system.

#293 tormentos
Member since 2003 • 33784 Posts

@04dcarraher said:
@tormentos said:

I posted links backing me up; all you have is your opinion..

right...... most of your links say one thing, but you twist what they're saying to fit your argument..... just like the i3 vs i5 examples, or saying async shaders were exposed in March 2014 while ignoring they were not complete nor in use......

Bullshit, all my links are specific.

It is you who can't get your head out of your ass. Async compute was exposed in March 2014.

There's a big hole in the Xbox One documentation, with no additional "What's New" notes posted between August 2013 and February 2014. Whether they were simply omitted or do not exist at all isn't clear. However, early 2014 is a crucial period for Microsoft, as it attempts to correct its wonky launch strategy and to address the GPU differential with PlayStation 4 as best as it can. Straight away we see notes indicating that developers now have more control over ESRAM resource management - apparently a bottleneck for many launch titles.

Graphics: Throughout the coming months there are plenty of updates corresponding to the low-level D3D monolithic runtime. Hardware video encoding/decoding is added in March, along with asynchronous GPU compute support. By May, support for the user-mode driver is completely removed in favour of the mono-driver, explaining (in part at least) the marked improvement in Xbox One GPU performance in shipping titles from Q2 2014 onwards.

http://www.eurogamer.net/articles/digitalfoundry-2015-evolution-of-xbox-one-as-told-by-sdk-leak

Async compute was exposed in March 2014; the problem was that no one was using it.

Rise of the Tomb Raider uses async compute to render breathtaking volumetric lighting on Xbox One

http://gearnuke.com/rise-of-the-tomb-raider-uses-async-compute-to-render-breathtaking-volumetric-lighting-on-xbox-one/

How is Tomb Raider using it if you claim it is not exposed?

My i3 vs i5 example is on the spot: the reason the i5 performs better is that it is a stronger, faster CPU, which means the same GPU worked better with an i5 than with an i3. In the world of computers that is known as a CPU bottleneck, you blind lemming; that CPU wasn't enough to feed that entry-level GPU....

Either you are too dumb to understand what a CPU bottleneck is or you are playing dumb, which I think is the case here.

Either way, I continue to post links and you continue to look like a blind lemming.

@04dcarraher said:

That interview was recanted and corrected by AMD......

Most games in 2011 were still on a DirectX 9 base, which has a set draw call limit (about 6,000 max) and is solely single-threaded.

Huddy's claim falls short; he is trying to pass the buck for either a poor hardware implementation on AMD's front that does not fit within the API spec (and as a result has to bend over backwards in its driver implementation), or he is trying to cover up bad driver code in general and wave the blame at someone else.

"Huddy added that part of what gets in the way isn't just DirectX or any other API, it can also be AMD's driver. 'It's very hard to build a driver that's very fast,' he said. 'There are two difficulties: the software and the hardware. But philosophically it's possible.'"

To this day AMD's DX11 base driver still falls short. While in Win10 with DX12 they're actually ahead of the curve, since their work on Mantle mirrors many of the same features/methods as DX12.

In 2011 going into 2012, AMD was getting ready to release their GCN architecture, which supported the upcoming Mantle API, and they were working on Mantle to some degree in that time frame.

The thing is that the 360 may do 10-20k chunks of geometry per frame; however, even with DirectX 9, a PC with a stronger CPU could overcome the overhead and process frames faster than the console, cancelling out any real advantage the 360 had over a comparable PC CPU. That is where many over-hype console optimization: when devs claim 2x performance vs comparable PC hardware, it's based on hardware that is on par with the console. By late 2006 PCs already had quad-core CPUs that were multiple times faster, and GPUs 3x faster, which provided performance and graphics above consoles by a considerable degree.

Drop your sad excuse. DX10 released in 2006, man; how the fu** would DX9 be the most used in 2011? Sure, if you counted games from 2002....

The D3D10 runtimes ship as part of Windows Vista, and has been in all (I think) builds I've used over the last year or so. As of December 2005 the SDK started to contain the D3D10 CTP's so anyone can write Direct3D 10 applications.

Last week the first D3D10 hardware was released - the GeForce 8800 series.

https://social.msdn.microsoft.com/Forums/en-US/25b2b390-2fbe-40f1-a0dd-a3057baf6999/directx-10-release-date?forum=direct3d

Since 2005. Drop the sad excuse, boy.

DX9 on Xbox 360 is not the same as on PC; this is the damn thing you don't want to understand, buffoon. DX9 on Xbox 360 was vanilla, with no bloatware and minimal overhead. Why the fu** do you think Huddy says it can do 10k or 20k when PC can't do 3k without getting into trouble?

You are spinning too much and I keep killing you with links...lol

#294 04dcarraher
Member since 2004 • 23832 Posts

lol, grasping at straws there, eh?

The March 2014 SDK just moved async compute out of preview mode..... Rise of the Tomb Raider is the first game to use the X1's ACEs to any real degree....

The Xbox is not using the multithreaded async submission that is the main point of DX12; it's just using the compute command processors with the ACEs to handle GPGPU-based work to calculate the volumetric lighting after the shadows are rendered. They are calling it async compute since they're using the ACEs, the "asynchronous compute engines" in the GPU, to compute those aspects of the game. But the main point is that the X1 is not using multiple CPU cores to talk to the GPU directly at the same time, which limits its potential quite a bit.

Your i3 vs i5 example is flawed and incorrect..... you ignorant cow (I'm a lemming? what a joke). Using PCARS, whose PC performance is single-thread dependent and which issues excessive draw calls on one thread, is not the way to show a game being CPU demanding, since it is improperly coded....... Decreasing and increasing clock rates on the same CPU is the correct way to determine whether a game is demanding and coded correctly, not artificially cutting a CPU's per-thread performance in half when the game is single-thread dependent...... 2.5GHz to 4.5GHz on an i7 was only a 13% difference in fps.

lol, really? DirectX 9 with Shader Model 3 was the primary base used for the vast majority of games all the way into 2012, especially multiplats. DX10 was a flop; only a small list of PC games were ever coded for it, and it was still single-threaded.

The Xbox 360 uses Shader Model 3 rendering, which was the standard in DX9.0c, meaning the PC version used DX9 unless coded for DX10 or DX11. BF3 was one of the first multiplat games that phased out DX9 totally ....... The vast majority of games all the way into 2013 offered a DX9/DX11 option....... Between 2005 and 2013, DX9 was the baseline API for almost all multiplats.......

DirectX 9 has a limit of about 6k draw calls; DX10 introduced Shader Model 4 and things like geometry batching but was still single-threaded. He was clearly talking about the older DXs like DX9, since DX11 MT can handle around 2 million draw calls per second through Nvidia's drivers. DX11 MT whoops the hell out of the 360's API, since the 360's also used a single thread....
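Taking that 2.5GHz-to-4.5GHz figure at face value, you can back out roughly how much of the frame time actually scales with CPU clock. A back-of-envelope sketch (my own arithmetic, not from any benchmark):

```cpp
// Rough model: frame time = c (scales with CPU clock) + g (everything
// else). A 2.5 -> 4.5 GHz jump giving only ~13% more fps implies c is
// a minority of the frame.
#include <cstdio>

int main()
{
    const double clockRatio = 4.5 / 2.5;  // 1.8x faster CPU clock
    const double fpsRatio   = 1.13;       // reported ~13% fps gain

    // Solve c/clockRatio + (1 - c) = 1/fpsRatio for c (frame time at
    // 2.5 GHz normalized to 1).
    const double c = (1.0 - 1.0 / fpsRatio) / (1.0 - 1.0 / clockRatio);

    std::printf("clock-sensitive share of frame time: %.0f%%\n", c * 100.0);
    // prints roughly 26%, i.e. most of the frame didn't scale with clock
    return 0;
}
```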

#295  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos:

Where are the Xbox One next-gen graphics with MSAA 4X?...

The PS4 game with MSAA 4X looks incredible; the one on Xbox One looks like crap...lol

Your argument is based on artwork subjectivity. MSAA is just an edge AA that reads depth (geometry) data, reads color data and writes color data.

@tormentos:

When MSAA has a damn huge hit to performance, not only are you comparing 2 GPUs, one with a shared 176GB/s and one with 240GB/s to itself, but you are comparing 2 GPUs with a 10 CU difference. Hell, the 7950 has a 384-bit bus; how the fu** is the PS4 supposed to match that? Does the full 7870 match that? How the fu** will the PS4?

hahahahahaha, the NVIDIA GeForce GTX 680 is not an AMD Radeon HD GCN SKU. There are differences between Radeon HD ROPS and GeForce Kepler/Maxwell ROPS.

NVIDIA Kepler/Maxwell has better color compression/decompression for its ROPS than AMD GCN 1.0/1.1. AMD GCN 1.2 "Tonga" changes this situation.

Again, MSAA is just an edge AA that reads depth (geometry) data, reads color data and writes color data.

Again, you are comparing apples to oranges.

@tormentos:

When MSAA has a damn huge hit to performance, not only are you comparing 2 GPUs, one with a shared 176GB/s and one with 240GB/s to itself, but you are comparing 2 GPUs with a 10 CU difference. Hell, the 7950 has a 384-bit bus; how the fu** is the PS4 supposed to match that? Does the full 7870 match that? How the fu** will the PS4?

What you again ignored is that the 7950 has a GDDR5 efficiency of about 71 percent, i.e. 71 percent of 240 GB/s is 170.4 GB/s practical memory bandwidth.

If you claim the PS4 has 172 GB/s practical memory bandwidth, the PS4's 32 ROPS should post MSAA 4X results just as fast as the Radeon HD 7950's 32 ROPS with 170 GB/s practical memory bandwidth. The PS4's results are not showing this.

@tormentos:

10 CU difference

Not a major problem when console games are budgeted to console-level ALU/CU power, i.e. IF CU-bound frame rates are at 30 fps or 60 fps, the MSAA co-processors should be able to apply their edge AA at similar frame rates IF there's sufficient memory bandwidth.

@tormentos:

So stop using crappy-ass arguments to defend the Xbox One, which can't even beat a 7770, let alone get close to a 7950, just because somehow a pathetic-looking game has MSAA 4X. The PS4 does supersampling in Lego The Hobbit; can the Xbox One do that? Render higher than 1080p? The PS4 renders 1920x1280 supersampled to 1080p; should I claim the PS4 supersamples like a 7950 or a stronger GPU?

In several examples, both PS4 and XBO can't even beat 7770 results.

Rendering more pixels (as in higher resolution) loads the CUs, i.e. the programmers for Lego The Hobbit used the PS4's CU/ALU advantage and avoided MSAA 4X issues. You are shifting the argument from MSAA to CU-bound issues.

Ideally, if both XBO and PS4 are targeting 1920x1080p at 30 fps with the same art/shader assets, the programmers for PS4 should use the 1.4X extra CU power for SSAA. Note that 1.4X can only add 1.4X higher resolution over XBO.

@tormentos:

The memory operates at 172GB/s; the rest you are adding yourself. Unless you quote Sony or a developer stating 130GB/s is the most the PS4 can get, you don't have a point. Worse, what makes you think that didn't change? The PS4's memory controllers can change; it is a console, not a PC, so maybe they have already overcome the CPU eating bandwidth disproportionally.

Sony's 130 GB/s figure for the PS4 reveals normal AMD GDDR5 memory controller behavior. A non-fanboy would apply this efficiency number across multiple AMD SKUs.
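For a rough sense of why MSAA 4X keeps coming back to memory bandwidth in this argument, a crude upper-bound estimate (my own arithmetic; real GPUs compress color/depth, cache heavily, and add overdraw on top, so treat it as a ballpark, not a measurement):

```cpp
#include <cstdio>

int main()
{
    const double pixels   = 1920.0 * 1080.0; // 1080p target
    const int    samples  = 4;               // MSAA 4X
    const int    bytesCol = 4;               // RGBA8 per sample
    const int    bytesZ   = 4;               // 32-bit depth per sample
    const double fps      = 60.0;

    // One uncompressed write plus one read of the multisampled
    // color+depth surfaces per frame, before any overdraw.
    double perFrame = pixels * samples * (bytesCol + bytesZ) * 2.0;
    std::printf("~%.1f GB/s just for the MSAA surfaces at 60 fps\n",
                perFrame * fps / 1e9); // prints ~8.0 GB/s
    return 0;
}
```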


#296  Edited By ronvalencia
Member since 2008 • 29612 Posts

Let me add my input as hermit who doesn't own either console.

@scatteh316 said:

Jesus Christ at this thread! Let me add my input as a hermit who doesn't own either console.

1. Xbone was never designed with DX12 in mind. Designing a console is not a 5 minute job, it takes YEARS, and when Microsoft were in the prototyping stages for Xbone, DX12 was not even a glimmer in their eyes.

2. DX12 will not add 50% more CPU and 20% more GPU power unless the developers are completely useless at coding ( HIGHLY unlikely )

3. Console developers have always coded to the metal and as efficiently as possible; that has been true for every console ever released and has not changed with the release of PS4 or Xbone.

4. Xbone will get API upgrades that are ever more efficient, and developers will always find new tricks to squeeze more from the machine.

5. PS4 will get API upgrades that are ever more efficient, and developers will always find new tricks to squeeze more from the machine.

6. Software alone will not make up for such a large gap in hardware performance. Make the gap smaller? Yes! Get rid of the gap entirely? Never.

PS4 is just faster than Xbone, pure and simple, and while software may help to reduce the gap, that gap will ALWAYS be there.

If sheer power is what you crave then build a PC!!

1. As a member of GCN 1.1, AMD Bonaire supports baseline DirectX12 Feature Level 12_0, i.e. the minimum hardware requirements for being "DirectX12", not just DirectX12 API compatibility.

2.

That's about a 20 percent extra performance boost on top of an already low CPU overhead console API.

3. The idea is false for XBO's DX11.X and its deferred context MT limitations.

True metal programming would enable the programmer to define the MT model, just like AMD's programmers did for the Mantle driver.

True metal programming would enable the programmer to define async features, just like AMD's programmers did for the Mantle driver.

Your assertion has failed on both counts.

IF XBO's DX11.X had the entire AMD Mantle feature set, BradW would not be complaining about XBO's half-baked APIs.

#297 Xisiuizado
Member since 2014 • 592 Posts

@Shewgenja said:

So, power matters again, now?

Yes, and no. While I'd like my X1 to have a power bump, I don't think it's a big deal. The numbers quoted aren't possible anyway.

#298 Xisiuizado
Member since 2014 • 592 Posts

@blackace said:

The Cloud Tech was just the beginning. lol!! Where's my popcorn. This is going to get good. We just need El Tormo and GrenandeLicker now and this thread is ready to go. Let the Cows meltdowns and salty tears begin.

Lol. I read your post thinking that guy is watching your avatar. :D

#299 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

@tormentos:

Your argument is based on artwork subjectivity. MSAA is just an edge AA that reads depth (geometry) data, reads color data and writes color data.

hahahahahaha, the NVIDIA GeForce GTX 680 is not an AMD Radeon HD GCN SKU. There are differences between Radeon HD ROPS and GeForce Kepler/Maxwell ROPS.

NVIDIA Kepler/Maxwell has better color compression/decompression for its ROPS than AMD GCN 1.0/1.1. AMD GCN 1.2 "Tonga" changes this situation.

Again, MSAA is just an edge AA that reads depth (geometry) data, reads color data and writes color data.

Again, you are comparing apples to oranges.

What you again ignored is that the 7950 has a GDDR5 efficiency of about 71 percent, i.e. 71 percent of 240 GB/s is 170.4 GB/s practical memory bandwidth.

If you claim the PS4 has 172 GB/s practical memory bandwidth, the PS4's 32 ROPS should post MSAA 4X results just as fast as the Radeon HD 7950's 32 ROPS with 170 GB/s practical memory bandwidth. The PS4's results are not showing this.

Not a major problem when console games are budgeted to console-level ALU/CU power, i.e. IF CU-bound frame rates are at 30 fps or 60 fps, the MSAA co-processors should be able to apply their edge AA at similar frame rates IF there's sufficient memory bandwidth.

In several examples, both PS4 and XBO can't even beat 7770 results.

Rendering more pixels (as in higher resolution) loads the CUs, i.e. the programmers for Lego The Hobbit used the PS4's CU/ALU advantage and avoided MSAA 4X issues. You are shifting the argument from MSAA to CU-bound issues.

Ideally, if both XBO and PS4 are targeting 1920x1080p at 30 fps with the same art/shader assets, the programmers for PS4 should use the 1.4X extra CU power for SSAA. Note that 1.4X can only add 1.4X higher resolution over XBO.

Sony's 130 GB/s figure for the PS4 reveals normal AMD GDDR5 memory controller behavior. A non-fanboy would apply this efficiency number across multiple AMD SKUs.

1-Bullshit, it is not based on artwork. You use that freaking excuse every time a game you like gets owned by another. FH2 simply doesn't have the effects and visual quality of The Order, period. It is not art, it is visual quality, to the point where The Order looks like CG.

2-Hahahaa, what a stupid-ass excuse. Who the fu** cares? For starters, who claimed the 680 GTX is AMD, you fool? This is another stupid argument you are creating to distract from the fact that I proved MSAA 4X has a damn hit to performance.

So your apples-to-oranges argument is null; the point is the cost MSAA 4X has, which also applies to GCN, fool.

But but but the 680 GTX isn't AMD, but but but the 7970 is...lol. What's that, almost a 30 FPS hit from no AA to MSAA 4X?

So how, again, is the PS4 supposed to match the 7950? You are pathetic...hahahahaa

3-Hahahaaaaaaaaaaaaaaaa. See, the problem is that you make some fantastically stupid arguments in order to defend the shit-ass Xbox One. Hahahahahaa

Here are the problems with that stupid argument.

1-The PS4 having 172GB/s doesn't mean it is solely for the GPU, so taking away the 20GB/s for the CPU, the PS4 would have 152GB/s, which is about what the 7850 and 7870 have. Again, your bandwidth calculations are a joke, since those 172GB/s are SYSTEM bandwidth, not GPU alone; your comparison is off.

2-The 7950 has a wider bus, 384-bit vs 256-bit; I am sure it achieves more than 170GB/s despite your fu**ed-up calculations.

3-Even if the PS4 had 500GB/s, it still would not handle MSAA 4X as well as the 7950, because it is not only lower clocked, it also has 10 fewer CUs. Bandwidth without power means shit; I have told you this 100 times and you ignore it. You could have a 7770 with 400GB/s on a 384-bit bus and it still would not come close, performance-wise, to a 7950, or handle MSAA 4X like the 7950. Power in GCN comes from CUs, not solely from bandwidth or bus width.

You are trying to imply that since the 7950 effectively has 170GB/s (so you say) and I showed a link stating the PS4 operates at 172GB/s, the PS4 should match the 7950 in anything, which is a total and complete joke. I don't even know how anyone can consider you a hermit with such pathetic arguments.

Fact is a 7950 with 240GB/s on a 384-bit bus >>>>>>>>>>>>>> a 7850 with 240GB/s on a 384-bit bus; the number of compute units decides who wins, not the bus or the bandwidth.

Now claim my example doesn't exist. Fact is it does: the 7770 on PC owns the Xbox One in several games, and it has not only a 128-bit bus but also just 72GB/s...lol

@ronvalencia said:

Not a major problem when console games are budgeted to console-level ALU/CU power, i.e. IF CU-bound frame rates are at 30 fps or 60 fps, the MSAA co-processors should be able to apply their edge AA at similar frame rates IF there's sufficient memory bandwidth.

Hahahaahaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa....

This one is so good I had to quote it again: Ronvalencia claiming a 10 CU difference doesn't matter because they are consoles.

Dude, if the PS4 had 10 CUs more than the Xbox One, the ownage would be even bigger, what the fu**, man. And before you even dare claim CPU bullshit holding back the GPU: that is total crap when the PS4 can use a great amount of that extra power for compute, simulations, physics and even AI, basically freeing a hell of a lot of CPU time for feeding the GPU.

You just claimed that a 10 CU difference is not a major problem; you just made one of the most biased and blind arguments ever..

7970>7950>7870>7850>7790>7770>7750

This ^^ is the GCN line, and the one with more CUs performs better; that is a FACT.

So 10 CUs, we are talking 7850 vs 7950, and you claim that is not a major problem?

4-Yes, in games that are butchered and made like total crap, like Alien Isolation and Dishonored. A well-coded game on PS4 will surpass the 7770 without problems; it is the Xbox One that really has problems beating it.

From Ryse to Dead Rising 3, games made first for the Xbox One, the 7770 still beats the Xbox One's results, in both at a higher resolution too.

NO, I just used an example of something that requires more power to prove my point: the weaker console can't do it, and MSAA 4X has a performance cost.

6-Total bullshit. AMD's test doesn't show the CPU using that bandwidth and how it affects it; that was a slide from the pre-launch era. You are a hypocrite: the Xbox One can improve to the moon and fix all its freaking problems by software (which hasn't happened), but somehow the PS4 can't fix a problem with the CPU using bandwidth disproportionally...hahaha ...

Nice to see you ignore my 20GB/s connection from Onion, which is 10/10 GB/s and which doesn't use the 176GB/s in any way...

Yeah, I guess that doesn't count...lol

@ronvalencia said:

Let me add my input as hermit who doesn't own either console.

1. As a member of GCN 1.1, AMD Bonaire supports baseline DirectX12's Feature 12_0 i.e. the minimum hardware requirements for being "DirectX12" not just DirectX12 API compatibility.

2.

That's about 20 percent extra performance boost from already low CPU overhead API console.

3. The idea is false for XBO's DX11.X and it's deferred context MT limitations.

A true metal programming would enable the programmer to define the MT model just like AMD's programmers for Mantle driver.

A true metal programming would enable the programmer define Async features just like AMD's programmers for Mantle driver.

Your assertions has failed at both counts.

IF XBO's DX11.X has the entire AMD Mantle features, BradW would not be complaining about XBO's half baked APIs

1-You are not a HERMIT. All you do here, all day long, every single day, for years now, is defend the Xbox. At this point I have no other option but to think you are an MS shill or you work for MS; there is no way in hell a true hermit would waste so much effort defending the crappy Xbox One hardware. You are insane, and you pull some very barbaric and stupid arguments in order to defend the Xbox One, like you just did now, trying to imply a 10 CU difference is not a major problem.

A 6 CU difference has been enough to produce gaps as big as 30 vs 60 fps and 720p vs 1080p, and somehow you want to pretend 10 CUs at a higher clock speed are not a major problem?

GTFO. If you showed this level of bias toward all hardware I wouldn't consider you a lemming; you truly hate Sony..hahahaaha

2-I love this one: 20% extra power, using a PS4 chart. Oh wait, the PS4 has 18 CUs, not 12, so you are not getting the same 20% gain when the PS4 has 50% more CUs than the Xbox One. And before you come with your damage-control math:

12 + 50% = 18 CU.

So yeah, you are probably getting 8% with luck. But why 8%, you say? Oh, it's simple: the XBO isn't modified for compute shaders as well as the PS4 is, so you could be taking 2 hits instead of just the 50% fewer CUs alone.
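Plugging the public CU and clock figures into the standard GCN formula (64 shader lanes per CU, 2 FLOPs per lane per clock via fused multiply-add) shows why the 50% CU gap lands near 40% in raw throughput. Worked arithmetic, not a benchmark:

```cpp
#include <cstdio>

int main()
{
    const double lanes = 64, flopsPerClock = 2;
    const double ps4 = 18 * lanes * flopsPerClock * 800e6;  // 18 CU @ 800 MHz
    const double xbo = 12 * lanes * flopsPerClock * 853e6;  // 12 CU @ 853 MHz

    std::printf("PS4: %.2f TFLOPS, XBO: %.2f TFLOPS (%.0f%% more)\n",
                ps4 / 1e12, xbo / 1e12, (ps4 / xbo - 1) * 100);
    // prints roughly 1.84 vs 1.31 TFLOPS, ~41% more for the PS4,
    // since the XBO's higher clock claws back a little of the CU gap
    return 0;
}
```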

3-Yeah, that is one thing. DX12 on PC is not just better MT; DX12 has low-level access to hardware, lower CPU overhead, a cleaner API. It is not just better MT, which is the point with the Xbox One and PS4 as well: they are consoles, so most of DX12's low overhead was already inside both consoles before DX11 was even released, let alone DX12.

Consoles have always had low CPU overhead, to-the-metal coding and all that; better MT is one of the few things they are getting, and as Slightly Mad Studios claimed, 7% is what it will bring to their game, while on PC it is 40%. You can see the huge disparity and understand why the Xbox One is not a PC and doesn't have all the pitfalls DX11 has on PC.

Not only that, workarounds for those problems have existed for more than a year now.

There is no DX11.X on Xbox One any more, it is DX12, and Tomb Raider uses async compute like the PS4. You don't implement that in 1 day; Tomb Raider comes out in less than 2 months and was built using DX12, so yeah, Tomb Raider is using async. The Xbox One's DX12 has been complete for some time now, and you are the only fool in denial. If async compute wasn't exposed, how the fu** would Tomb Raider use it on Xbox One?

Rise of the Tomb Raider uses async compute to render breathtaking volumetric lighting on Xbox One

http://gearnuke.com/rise-of-the-tomb-raider-uses-async-compute-to-render-breathtaking-volumetric-lighting-on-xbox-one/

You people just don't use common sense. Just because developers are not using something doesn't mean it is not there, and that old-ass chart you have is wrong: async is exposed and in use, and Tomb Raider, which comes in less than 2 months, uses it for volumetric lighting...lol

The time for "buts" is over: but but it's not exposed, but but MT isn't exposed..lol

#300  Edited By tormentos
Member since 2003 • 33784 Posts

@04dcarraher said:

lol, grasping at straws there, eh?

The March 2014 SDK just moved async compute out of preview mode..... Rise of the Tomb Raider is the first game to use the X1's ACEs to any real degree....

The Xbox is not using the multithreaded async submission that is the main point of DX12; it's just using the compute command processors with the ACEs to handle GPGPU-based work to calculate the volumetric lighting after the shadows are rendered. They are calling it async compute since they're using the ACEs, the "asynchronous compute engines" in the GPU, to compute those aspects of the game. But the main point is that the X1 is not using multiple CPU cores to talk to the GPU directly at the same time, which limits its potential quite a bit.

Your i3 vs i5 example is flawed and incorrect..... you ignorant cow (I'm a lemming? what a joke). Using PCARS, whose PC performance is single-thread dependent and which issues excessive draw calls on one thread, is not the way to show a game being CPU demanding, since it is improperly coded....... Decreasing and increasing clock rates on the same CPU is the correct way to determine whether a game is demanding and coded correctly, not artificially cutting a CPU's per-thread performance in half when the game is single-thread dependent...... 2.5GHz to 4.5GHz on an i7 was only a 13% difference in fps.

lol, really? DirectX 9 with Shader Model 3 was the primary base used for the vast majority of games all the way into 2012, especially multiplats. DX10 was a flop; only a small list of PC games were ever coded for it, and it was still single-threaded.

The Xbox 360 uses Shader Model 3 rendering, which was the standard in DX9.0c, meaning the PC version used DX9 unless coded for DX10 or DX11. BF3 was one of the first multiplat games that phased out DX9 totally ....... The vast majority of games all the way into 2013 offered a DX9/DX11 option....... Between 2005 and 2013, DX9 was the baseline API for almost all multiplats.......

DirectX 9 has a limit of about 6k draw calls; DX10 introduced Shader Model 4 and things like geometry batching but was still single-threaded. He was clearly talking about the older DXs like DX9, since DX11 MT can handle around 2 million draw calls per second through Nvidia's drivers. DX11 MT whoops the hell out of the 360's API, since the 360's also used a single thread....

No you are the one grasping.

The fact that no game uses async means shit; 99% of PS4 games don't use it and on PC it's the same. So I am right, it was exposed even if no one used it; that is another 2 cents. Claiming it is not there when it is, is simply damage control.

MT is not the major point of DX12; in fact it brings just a 7% gain to Project Cars when on PC DX12 brings 40%. That tells you that MT accounts for just a small part of the whole package: since the other low-level features are already on Xbox One, the one missing piece brings 7%, which is something, but small.

That is what Infamous does, and it is an async shader game, dude, what the fu**...lol

Yeah, on Xbox 360, which used that. DX10 had been in the SDK since 2005 and released in 2006; there is no way that most games built in 2011 were using DX9. It's another butthurt damage control of yours. Funny enough, DX9 on Xbox 360 did more draws than DX9 on PC, a shitload more: by Huddy's argument, 10k to 20k, when DX9 on PC could not do 3k without getting into trouble.

That alone shows how huge the gain was on Xbox 360 compared to PC. At 10k we are talking 3+ times more draws on Xbox 360; at 15k we are talking about 5 times more draws; at 20k, close to 7 times more draws than DX9 on PC. That is freaking huge and shows how optimization and lower CPU overhead were delivering in strides on Xbox 360 compared to PC, and on PC there were stronger, faster CPUs than the 360's.

You are an idiot; this is simple....

It is not improperly coded, you blind fool. It uses a lot of draw calls on PC, which the i5 can deliver better than the i3. Your assertion of bad coding would apply if the i3 and i5 were getting the same bad performance across the board. The game just uses a lot of draw calls, which are a CPU ISSUE; in other words the fu**ing i3 can't deliver enough draws for 50 FPS on that 750 Ti, and the i5 can because it is faster and stronger. That is called CPU bound, you butthurt lemming..

There's plenty of scope for scalability in Project Cars on PC, but while Nvidia's stalwart GTX 750 Ti makes a good fist of matching PS4 visuals, a quad-core processor is required to maintain frame-rate in vehicle-heavy races. It's rare that we become CPU-bound with an entry-level enthusiast GPU like the 750 Ti, but the results here speak for themselves.

http://www.eurogamer.net/articles/digitalfoundry-2015-project-cars-face-off

GTFO, it is CPU bound and even DF admits it. The conditions and reasons for it being CPU bound are beside the point; the fact is that with an i5 you can get 16 FPS more than with an i3, which is more than OK for most games out there. That is considered CPU overhead, you fool. I advise you to freaking learn what CPU bound means: regardless of you thinking the game is badly coded (it is not), the 750 Ti got more frames from a stronger CPU, which means the weaker one was holding back the GPU. That is considered an effective CPU-bound scenario, and I just proved it with another link while you keep using your biased-ass opinion.

The CPU can't feed the GPU fast enough? Yeah, that is a CPU-bound scenario, and draw calls are a CPU issue, so the CPU is the problem. Downplaying it because draw calls are the cause makes no sense: draw calls aren't issued by the GPU, they are issued by the CPU.
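The test DF effectively ran can be sketched in a few lines: time the CPU side of a frame against the whole frame, and if the CPU side fills it, a faster CPU raises fps. The two hooks below are hypothetical stand-ins for the engine's real work:

```cpp
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

void BuildAndSubmitFrame()  { /* sim + draw-call submission (stub) */ }
void WaitForGpuAndPresent() { /* block until the GPU finishes (stub) */ }

int main()
{
    auto t0 = Clock::now();
    BuildAndSubmitFrame();               // CPU-side cost of the frame
    auto t1 = Clock::now();
    WaitForGpuAndPresent();              // remainder is GPU-side cost
    auto t2 = Clock::now();

    double cpuMs   = std::chrono::duration<double, std::milli>(t1 - t0).count();
    double frameMs = std::chrono::duration<double, std::milli>(t2 - t0).count();

    // If the CPU side fills the frame, a faster CPU (i5 vs i3) raises fps
    // and the GPU was never the limit: the CPU-bound case.
    std::printf("CPU %.2f ms of %.2f ms -> %s-bound\n", cpuMs, frameMs,
                cpuMs > 0.9 * frameMs ? "CPU" : "GPU");
    return 0;
}
```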

You are talking about 360 games that were ported to PC or started on 360; on PC, most probably by 2009, DX10 is what was already in use, let alone by 2011.

http://www.hardocp.com/article/2008/04/08/crysis_dx9_vs_dx10/7#.Vf7Wj9mqqko

Even freaking Crysis, a late-2007 game, had DX10 support in 2008, so again, excuses.

The point here is simple: draw calls have never been a problem on consoles, only on PC. And it is funny, because what increases those draw calls in DX12 is using more than 1 core to feed the GPU, yet on consoles that wasn't an issue even when they didn't use more than 1 core to feed the GPU.