PS4 and Xbox One GDC 2014 Presentation : ROPS

#51 Edited by ronvalencia (15573 posts) -
@tormentos said:

@ronvalencia said:

Nothing specific on 1920x1080p. All it states is that color-ROPS bandwidth is less than memory bandwidth. You haven't factored in Z-ROPS, which is 4X the color-ROPS count.

Unlike TC's post, Rebellion's statement was specific for 1920x1080p native output.

The R9-290's 64 ROPS at 947MHz would still be "ROPS bound" with RGBA8 (integer) against its 320 GB/s memory bandwidth, e.g. the math: 947MHz x 64 x 4 bytes = 242 GB/s.
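That math can be sketched directly (a minimal sketch; RGBA8 at 4 bytes/pixel as in the post, GB here is decimal):

```python
# Peak color-ROP write bandwidth = ROP count x clock x bytes per pixel.
def color_rop_bandwidth_gbs(rops: int, clock_mhz: float, bytes_per_pixel: int) -> float:
    """Peak color-ROP fill bandwidth in GB/s (decimal, 1 GB = 1e9 bytes)."""
    return rops * clock_mhz * 1e6 * bytes_per_pixel / 1e9

# R9 290: 64 ROPS at 947 MHz writing RGBA8
r9_290 = color_rop_bandwidth_gbs(64, 947, 4)
print(round(r9_290, 1))   # ~242.4 GB/s, under the card's 320 GB/s memory bandwidth
```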

The big flaw with their math is the hardware color decompression/compression factor. There's more to it than the simple math, i.e. the GPU has to fit compute, TMU, Z-ROP, color-ROP, etc. workloads into the finite memory bandwidth.

JIT decompression and compression hardware will throw off any simple memory bandwidth vs ROPS math. From http://www.bit-tech.net/hardware/graphics/2007/05/16/r600_ati_radeon_hd_2900_xt/8

If you notice, AMD's ROPS unit has multiple read and write ports. TC's post focused on the Blend section, i.e. the color-ROPS. AMD increased the ratio between Z-ROPS (depth/stencil ROPS) and color-ROPS (Blend ROPS) from 2:1 to 4:1.

-------------

Again, I can gimp my R9-290 to 69 GB/s memory bandwidth and still deliver 1920x1080p, e.g. Tomb Raider 2013 at 35.5 fps on Ultimate settings (with TressFX). I can increase my fps with shadow resolution set to normal and texture quality set to high.


ROP bound, period. A developer says it, move on.. lol

I love how you want to give credit only to arguments that serve you best, even selectively quoting sources, so the part you like is true, like ROPs not being a problem like DF claims, but then completely going into denial on things like Bonaire being inside the Xbox One, which DF also claims.

You can't have it both ways: either DF is credible for both claims or it isn't for either.

@Solid_Max13 said:

@ronvalencia: you clearly understand tech, but the issue is what many devs have stated and how they utilize the tech. In this case 1080p is not viable right now on the Xbone and won't be for some time, until they can utilize the hardware properly. Tomb Raider was a last-gen game and isn't hard to get to 1080p, but the newer games are where you'll see the drop.

The developer he quotes, Rebellion, says ESRAM is too small and they have to do tricks and use tiling. Tricks mean changing things: lowering assets, turning off effects or doing them at half resolution like in Tomb Raider's case, and/or decreasing frames or reducing resolution.

My argument is backed up by game performance; so far his arguments are backed up by nothing. I have been quoting benchmarks here from AnandTech for months, telling Ronvalencia what the difference would be, and he refuses to admit it. He claims the 7770 didn't represent the Xbox One's cache, ESRAM, bandwidth and bus, so in his eyes all that would change the world. In reality it didn't, and the Xbox One acted even worse than a 7770, which can actually hit 1080p in quite a lot of games at certain quality settings.

Hell, the gap I used to post here from the 7850 to the 7770 was even smaller than the gap between the Xbox One and PS4. Games like Tomb Raider running at the same quality on the 7770 don't run as much as 30 FPS slower while having lower-resolution effects, 900p cut scenes, and worse textures, like the Xbox One does vs the PS4. The real-life gap is what talks.

Sorry, Rebellion's statement is specific to the 1920x1080p rendering issue, while TC's source information is just "ROPS is lower than memory bandwidth".

TC asserted the claim that 16 ROPS at 853MHz is the main gimping factor for 1920x1080p rendering.

The PC's 7770 SKU will stay within its 72 GB/s limitation, with zero option for an ESRAM boost.

Does Tomb Raider DE on Xbox One utilize Rebellion's specific criteria for their 1920x1080p results?

DF also has a down-clocked 7870 XT at 600MHz = PS4 and a down-clocked 7850 at 600MHz = X1. Read http://www.eurogamer.net/articles/digitalfoundry-can-xbox-one-multi-platform-games-compete-with-ps4

Unlike DF's 7870 XT at 600MHz vs 7850 at 600MHz, the prototype 7850 maintains 1.32 TFLOPS = 12 CU at an 860MHz clock speed. DF has approximated X1's ESRAM (with tiling tricks) with PC SKUs.
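The 1.32 TFLOPS figure follows from GCN's shader math; a quick sketch (assuming 64 lanes per CU and 2 ops per lane per clock, which is standard for GCN):

```python
# GCN single-precision throughput: CUs x 64 lanes x 2 ops (FMA) x clock.
def gcn_tflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

print(round(gcn_tflops(12, 860), 2))   # 12 CU at 860 MHz -> ~1.32 TFLOPS
print(round(gcn_tflops(16, 600), 2))   # DF's down-clocked 7850 (16 CU at 600 MHz) -> ~1.23 TFLOPS
```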

--------

Note that DDRx/GDDRx has overheads, i.e. refresh cycles for data integrity. Any paper-spec memory bandwidth math will not hold true.

From http://gamingbolt.com/oddworld-inhabitants-dev-on-ps4s-8gb-gddr5-ram-fact-that-memory-operates-at-172gbs-is-amazing

Oddworld claims 172 GB/s for PS4's memory bandwidth, while TC's information claims 176 GB/s (a number derived from pure paper spec).

Since GDDR5 is not a super-low-latency memory, the 176 GB/s claim for PS4 (or any DDR-type memory) is a joke. No chip vendor uses GDDR5 as its register storage memory type. GDDR5 will not reach paper-spec memory bandwidth.
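The paper-spec figure itself is just bus width times data rate; a quick sketch (assuming PS4's commonly cited 256-bit bus and 5.5 Gbps effective GDDR5 data rate):

```python
# Paper-spec memory bandwidth = bus width x effective data rate per pin.
BUS_WIDTH_BITS = 256
DATA_RATE_GBPS = 5.5          # effective Gbps per pin for the GDDR5 config

paper_gbs = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8   # bits -> bytes
print(paper_gbs)                                  # 176.0 GB/s
# Oddworld's quoted 172 GB/s is ~97.7% of the paper figure.
print(round(172 / paper_gbs * 100, 1))
```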

#52 Posted by scatteh316 (5022 posts) -

@ronvalencia said:

@scatteh316 said:

@navyguy21: If you know your hardware it's quite easy to see the implications.

ESRAM can read and write at the same time but has to miss a cycle in every 8, which is lost performance. The ROPs were built into the daughter die on Xbox 360, making it ideal and perfect for the EDRAM to be used as a frame buffer. It's not the same situation now on Xbox One.

When it comes to running at native 1080p, it's clear to everyone that understands the rendering pipeline that Xbox One is always going to struggle.

And ronvalencia doesn't know anything, he just pastes the same old crap about his Rebellion developers and his silly 290 experiments.

It's you who doesn't know anything. Unlike your source, Rebellion has specifically addressed 1920x1080p. Rebellion developers > you.

You're not factoring in that Xbox 360 is limited to 8 color ROPS at 500MHz (4 Gigapixels/s) and that the connection between the GPU and EDRAM is limited to 32 GB/s.

Unlike Xbox 360, Xbox One's ROPS can write to both DDR3 and ESRAM, and it has 16 ROPS at a higher 853MHz.

From the Hot Chips diagram, Xbox One's peak BW is 204 GB/s for its ESRAM.

My source is a MULTI PLATFORM developer, you bellend, who knows a damn sight more about the inner workings of PS4 and Xbone than you ever will.

#53 Edited by ronvalencia (15573 posts) -

@btk2k2 said:

@ronvalencia said:

It's you who doesn't know anything. Unlike your source, Rebellion has specifically addressed 1920x1080p. Rebellion developers > you.

You're not factoring in that Xbox 360 is limited to 8 color ROPS at 500MHz (4 Gigapixels/s) and that the connection between the GPU and EDRAM is limited to 32 GB/s.

Unlike Xbox 360, Xbox One's ROPS can write to both DDR3 and ESRAM, and it has 16 ROPS at a higher 853MHz.

From the Hot Chips diagram, Xbox One's peak BW is 204 GB/s for its ESRAM.

There are 3 issues with the Xbox One that make it difficult to achieve 1080p and they are all linked together.

1) Low bandwidth main memory pool.

2) Low capacity high bandwidth memory pool.

3) 16 ROPS.

In some cases the developer can fit the render target into the ESRAM and make use of its high bandwidth, in that scenario the limiting factor for 1080p is the number of ROPS as you are ROP bound.

In other cases the developer cannot fit the render target into ESRAM so they have to use the DDR3, in that scenario the limiting factor for 1080p is the bandwidth as you become bandwidth bound.

Your benchmark is flawed for all the reasons scatteh316 stated, and for at least 2 additional reasons: first, you are using a game that is not really pushing the envelope in terms of fill-rate requirements; second, you are using a built-in game benchmark, and it is widely known that both AMD and Nvidia optimise the hell out of in-game benchmarks so that their hardware reviews better.

There will be ways to get 1080p on the Xbox One, as is self-evident from the fact that some games already work at 1080p. The problem is that it takes a fair few sacrifices in other aspects of the image quality to achieve that because of the 3 limitations above: if your render target is too big you are bandwidth-starved, and if it is the right size you are ROP-starved. The latter is the least bad, but it is not ideal.

If you need to resort to tiling tricks so that some of the render target is in the ESRAM and some is in the DDR3, managing that in such a way that the low-bandwidth RAM will not cause problems is not going to be easy, and unless MS can create tools to automate this I do not see multiplatform developers going through the effort when they can just use 900p or 720p and be done with it.

1. Not in dispute.

2. Not in dispute. Read Rebellion's statement on this issue.

3. 16 ROPS at 853 Mhz ~= 17 ROPS at 800Mhz effective.

In some cases the developer can fit the render target into the ESRAM and make use of its high bandwidth, in that scenario the limiting factor for 1080p is the number of ROPS as you are ROP bound.

You just claimed real games don't do anything besides color ROPS (color fill rates). With typical Cow logic, we can bin the GPU's compute (via UAV), TMUs, Z-ROPS, and MSAA. In terms of unit count, the TMUs, Z-ROPS and MSAA processors exceed the color ROPS.

Killzone Shadow Fall's target render size is 800 MB.

For X1

RGBA8's 54 GB/s / 60 fps = 0.9 GB per frame color-ROPS bandwidth budget.

RGBA16F's 109GB/s / 60 fps= 1.8 GB per frame color-ROPS bandwidth budget.

-------------

For PS4

RGBA8's 102 GB/s / 60 fps = 1.7 GB per frame color-ROPS bandwidth budget.

RGBA16F's 176 GB/s / 60 fps= 2.93 GB per frame color-ROPS bandwidth budget.
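A minimal sketch of the per-frame color-ROP bandwidth budgets above (assumed figures from this thread: X1 = 16 ROPS at 853 MHz, PS4 = 32 ROPS at 800 MHz; RGBA8 = 4 bytes/pixel, RGBA16F = 8):

```python
# Peak color fill bandwidth = ROPs x clock x bytes/pixel, then divided
# by the 60 fps target to get a per-frame budget.
def fill_gbs(rops, clock_mhz, bytes_per_pixel):
    """Peak color fill bandwidth in GB/s (decimal)."""
    return rops * clock_mhz * 1e6 * bytes_per_pixel / 1e9

def gb_per_frame(gbs, fps=60):
    """Bandwidth budget per frame at a given fps target."""
    return gbs / fps

x1_rgba8   = fill_gbs(16, 853, 4)   # ~54.6 GB/s  -> ~0.9 GB per frame
x1_rgba16f = fill_gbs(16, 853, 8)   # ~109.2 GB/s -> ~1.8 GB per frame
ps4_rgba8  = fill_gbs(32, 800, 4)   # ~102.4 GB/s -> ~1.7 GB per frame
# PS4's RGBA16F fill (~204.8 GB/s) exceeds the 176 GB/s paper memory
# bandwidth, which is why the post caps it at 176 GB/s.
print(round(gb_per_frame(x1_rgba8), 2))    # ~0.91
print(round(gb_per_frame(176.0), 2))       # ~2.93
```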

#54 Edited by ronvalencia (15573 posts) -

@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@navyguy21: If you know your hardware it's quite easy to see the implications.

ESRAM can read and write at the same time but has to miss a cycle in every 8, which is lost performance. The ROPs were built into the daughter die on Xbox 360, making it ideal and perfect for the EDRAM to be used as a frame buffer. It's not the same situation now on Xbox One.

When it comes to running at native 1080p, it's clear to everyone that understands the rendering pipeline that Xbox One is always going to struggle.

And ronvalencia doesn't know anything, he just pastes the same old crap about his Rebellion developers and his silly 290 experiments.

It's you who doesn't know anything. Unlike your source, Rebellion has specifically addressed 1920x1080p. Rebellion developers > you.

You're not factoring in that Xbox 360 is limited to 8 color ROPS at 500MHz (4 Gigapixels/s) and that the connection between the GPU and EDRAM is limited to 32 GB/s.

Unlike Xbox 360, Xbox One's ROPS can write to both DDR3 and ESRAM, and it has 16 ROPS at a higher 853MHz.

From the Hot Chips diagram, Xbox One's peak BW is 204 GB/s for its ESRAM.

My source is a MULTI PLATFORM developer, you bellend, who knows a damn sight more about the inner workings of PS4 and Xbone than you ever will.

My source is a MULTI PLATFORM developer (with AMD Gaming Evolved + Mantle), you bellend, who knows a damn sight more about the inner workings of PS4 and Xbone than you ever will.

The so-called inner workings of PS4 and Xbox One DID NOT deliver superior gaming results over my old 7950, you fool. The "inner workings" of PS4 and Xbox One are a yawn since the PC has AMD's Mantle option.

Unlike your source, Rebellion has specifically addressed the 1920x1080p issue, while you asserted a claim on top of your source.

#55 Posted by StormyJoe (6539 posts) -

@scatteh316 said:

Simply put, Xbox One is severely hampered by ROP performance and will struggle to deliver native 1080p games due to fill rate and bandwidth constraints.

PS4 is also somewhat bandwidth-starved but has a big advantage over Xbox One when it comes to ROP performance, making 1080p easily achievable on PS4 compared to Xbox.

http://www.bilder-upload.eu/upload/179726-1396172332.jpg

http://www.bilder-upload.eu/upload/a14170-1396172466.jpg

Long story short, Xbox One might improve over the coming months, but native 1080p will NOT be commonplace on Xbox due to bandwidth and ROP limitations.

Link

Edit: Chrome won't let me add the images properly :/

I bet you accounts that you are wrong.

#56 Posted by tormentos (20183 posts) -

@ronvalencia said:

Sorry, Rebellion's statement is specific to the 1920x1080p rendering issue, while TC's source information is just "ROPS is lower than memory bandwidth".

TC asserted the claim that 16 ROPS at 853MHz is the main gimping factor for 1920x1080p rendering.

The PC's 7770 SKU will stay within its 72 GB/s limitation, with zero option for an ESRAM boost.

Does Tomb Raider DE on Xbox One utilize Rebellion's specific criteria for their 1920x1080p results?

DF also has a down-clocked 7870 XT at 600MHz = PS4 and a down-clocked 7850 at 600MHz = X1. Read http://www.eurogamer.net/articles/digitalfoundry-can-xbox-one-multi-platform-games-compete-with-ps4

Unlike DF's 7870 XT at 600MHz vs 7850 at 600MHz, the prototype 7850 maintains 1.32 TFLOPS = 12 CU at an 860MHz clock speed. DF has approximated X1's ESRAM (with tiling tricks) with PC SKUs.

--------

Note that DDRx/GDDRx has overheads, i.e. refresh cycles for data integrity. Any paper-spec memory bandwidth math will not hold true.

From http://gamingbolt.com/oddworld-inhabitants-dev-on-ps4s-8gb-gddr5-ram-fact-that-memory-operates-at-172gbs-is-amazing

Oddworld claims 172 GB/s for PS4's memory bandwidth, while TC's information claims 176 GB/s (a number derived from pure paper spec).

Since GDDR5 is not a super-low-latency memory, the 176 GB/s claim for PS4 (or any DDR-type memory) is a joke. No chip vendor uses GDDR5 as its register storage memory type. GDDR5 will not reach paper-spec memory bandwidth.

Yes, they are very specific: ESRAM is too small, and in order to do 1080p they need to do tricks and tiling. That means cutting corners, we all know it; it is you who doesn't want to admit it because, well, you are too hard-headed.

Tomb Raider shattered all my benchmarks; the gap is way wider than the charts I used to post from AnandTech. So yeah, I was RIGHT and you weren't. ESRAM, like I claimed, like some developers claim, like leaks claim, and even like Sony's own Cerny claims, is a pain and adds complexity to the hardware; it is not a magic wand that will magically make the crappy GPU inside the Xbox One perform like something much stronger.

So the Xbox One has several things against it:

It has a 6 CU handicap vs the PS4, for more than a 500 GFLOPS difference.

It has a troublesome and complex memory system.

It has DX, which has never been better than Sony's tools for getting the most out of the hardware.

It doesn't have true HSA, it has something like it, and it doesn't have hUMA either.

Remember how people used to claim that the Xbox One had a CPU advantage because GDDR5 wasn't good for the CPU? How did that turn out? Tests even show the PS4 having the faster CPU even while being clocked 100MHz slower.

What people like you don't take into account is that the Xbox One has 3 OSes. Running a guest OS on Hyper-V has a cost in CPU and memory as well, from 9% to 12% CPU overhead; things like this add up a lot.

I don't know if they used Rebellion's criteria and neither do you. All I know is that the game is 1080p when games like AC are 900p on Xbox One, and games like Thief are 900p as well, so you take a guess whether they were using Rebellion's ways or not. I just hope that Rebellion won't have to do like Kojima did and eat their words; I remember how Kojima claimed the difference in MGS5 wasn't great.

The difference in power between the PS4 and Xbox One is small and nothing to worry about, Hideo Kojima has told 4Gamer.

Kojima was at E3 with Metal Gear Solid 5, revealing the game for Xbox One and PS4.

"The difference is small, and I don't really need to worry about it," he said, suggesting versions for Xbox One and PS4 won't be dramatically different.

http://www.videogamer.com/xboxone/metal_gear_solid_5_the_phantom_pain/news/ps4_and_xbox_one_power_difference_is_minimal_says_kojima.html

Remember this ^^^ because I do. Kojima saying the difference was small...

So is 720p with no dynamic clouds vs 1080p with dynamic clouds a small difference to you?

I just hope Rebellion doesn't have to eat their words like Kojima did, because it will be a sad day for you here.

Oh, and who can forget the legendary test done by Richard "MS lover" Leadbetter, where he stated that the Xbox One GPU resembles a 7790, but they picked a 7850 for the Xbox One test and a 7870 XT for the PS4 test, all with the intended purpose of helping the Xbox One's case.

And what was the result? They tested Tomb Raider and the difference was like 12 to 13 frames per second at the same quality in everything.

The PS4 commands up to a 30 FPS advantage over the Xbox One version, and on average is 20 FPS faster; that is up to a 100% gap in frames.

But wait, the Xbox One version also has lower-quality effects, effects at half the resolution, lower-quality textures in some places, and 900p cut scenes. Now this is not small in any way; it is way, way bigger than the gap DF showed for Tomb Raider.

DF could have used a 7790 to represent the Xbox One and downclocked it to 1.31 TF, and used a 7850 and overclocked it to 1.84 TF. It would not have been perfect, but it would have been way better than what they did, because regardless of the downgrade in speed, 16 CU at 600MHz will work better than 12 CU at 857MHz. They chose a GPU that had 4 more CUs than the Xbox One, and one with 6 more CUs than the PS4 as well.

Not only that, the PS4 also received the worse part of the deal, because downclocking the 7870 XT to 600MHz is a freaking huge drop. The 7870 XT is almost 1GHz, so going down to 600MHz means almost a 40% drop, while the 7850 at 860MHz loses just 260MHz from the downgrade versus the 7870 XT's almost 400MHz drop.

The test was a joke, and the actual Tomb Raider game showed that quite easily.
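For what it's worth, the CU-count vs clock trade-off being argued here can be sketched with the standard GCN FLOPS formula (64 lanes per CU, 2 ops per lane per clock; the 14-CU figure is the retail 7790's spec):

```python
# GCN single-precision throughput and the clock needed to hit a target.
def gcn_tflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

def clock_for_tflops(cus, tflops):
    """Clock in MHz needed for a given CU count to reach a TFLOPS target."""
    return tflops * 1e12 / (cus * 64 * 2 * 1e6)

print(round(clock_for_tflops(14, 1.31)))   # a 14-CU 7790 needs ~731 MHz for X1's 1.31 TF
print(round(gcn_tflops(16, 600), 2))       # DF's down-clocked 7850 stand-in: ~1.23 TF
print(round(gcn_tflops(12, 857), 2))       # the actual 12 CU at 857 MHz config: ~1.32 TF
```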

#57 Posted by Netherscourge (16354 posts) -

This is getting so old.

Yes, we know - the PS4 is more powerful and can handle prettier graphics than the XB1.

And the #1 reason for this is Microsoft's poor choice of DDR3-RAM over GDDR5-RAM. 32MB of eSRAM does not make up for that fact and never will.

This was established LAST YEAR. We don't need anymore clarification. It's set in stone and will not change this generation. It is a hardware limitation and it's soldered onto the XB1 motherboard. So no, you can't "upgrade" it.

MS - Drop the price $150 so it's actually selling at what the hardware is worth and will actually compete with the PS4.

#58 Posted by StormyJoe (6539 posts) -

@I_can_haz said:

XBone 720p is a weak POS confirmed.

"b...bu....bu....but teh dirextX 12 boost!!!1!" LOL TLHBO

How about this: I say, by 2016, there will be very few, if any, 1080p PS4 games that aren't 1080p on Xbox One.

Disagree? Let's bet accounts. If I am wrong, I will retire this account. If you are wrong, you retire yours.

Deal?

#59 Edited by Oemenia (10252 posts) -

I don't know if it's just me, but both consoles have really mediocre GPUs. The 360 and even the PS3 had GPUs that were not yet on the market, with the Xbox also being an upgraded version of the high-end GPU at the time.

#60 Edited by scatteh316 (5022 posts) -

@ronvalencia said:

@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@navyguy21: If you know your hardware it's quite easy to see the implications.

ESRAM can read and write at the same time but has to miss a cycle in every 8, which is lost performance. The ROPs were built into the daughter die on Xbox 360, making it ideal and perfect for the EDRAM to be used as a frame buffer. It's not the same situation now on Xbox One.

When it comes to running at native 1080p, it's clear to everyone that understands the rendering pipeline that Xbox One is always going to struggle.

And ronvalencia doesn't know anything, he just pastes the same old crap about his Rebellion developers and his silly 290 experiments.

It's you who doesn't know anything. Unlike your source, Rebellion has specifically addressed 1920x1080p. Rebellion developers > you.

You're not factoring in that Xbox 360 is limited to 8 color ROPS at 500MHz (4 Gigapixels/s) and that the connection between the GPU and EDRAM is limited to 32 GB/s.

Unlike Xbox 360, Xbox One's ROPS can write to both DDR3 and ESRAM, and it has 16 ROPS at a higher 853MHz.

From the Hot Chips diagram, Xbox One's peak BW is 204 GB/s for its ESRAM.

My source is a MULTI PLATFORM developer, you bellend, who knows a damn sight more about the inner workings of PS4 and Xbone than you ever will.

My source is a MULTI PLATFORM developer (with AMD Gaming Evolved + Mantle), you bellend, who knows a damn sight more about the inner workings of PS4 and Xbone than you ever will.

The so-called inner workings of PS4 and Xbox One DID NOT deliver superior gaming results over my old 7950, you fool. The "inner workings" of PS4 and Xbox One are a yawn since the PC has AMD's Mantle option.

Unlike your source, Rebellion has specifically addressed the 1920x1080p issue, while you asserted a claim on top of your source.

Fuck sake, will you stop comparing these console GPUs to your PC graphics cards.

They're not in the same fucking environment, not running the same code, and on top of that your 7950 doesn't lose ~20GB/s of fucking bandwidth to the CPU. Have you factored that into your tests?? NO!!... Apples to oranges.... Let's compare the Adreno 330 GPU in my phone while we're at it....

Xbone needs TRICKS to get a 1080p frame buffer with render targets into ESRAM... that is universally known, as Xbox 360 had to do the same thing...... PS4 does not need to do this.

You gimped your 290 to 69GB/s and used that as a comparison, but again, did you factor in the ~20GB/s that your CPU uses independently? That same 290 producing 69GB/s would lose another 20GB/s to the Jaguar CPU, so in a console that 69GB/s would end up being ~49GB/s...

Your tests are very flawed...

#61 Posted by AM-Gamer (5270 posts) -

@navyguy21: Do you honestly think the Xbox one is more powerful? Is this really up for debate?

#62 Posted by Heil68 (47174 posts) -

Well when you create the worlds most powerful video game console in the history of video games, everything else just falls short.

#63 Edited by ronvalencia (15573 posts) -

@tormentos said:

Yes, they are very specific: ESRAM is too small, and in order to do 1080p they need to do tricks and tiling. That means cutting corners, we all know it; it is you who doesn't want to admit it because, well, you are too hard-headed.

Tomb Raider shattered all my benchmarks; the gap is way wider than the charts I used to post from AnandTech. So yeah, I was RIGHT and you weren't. ESRAM, like I claimed, like some developers claim, like leaks claim, and even like Sony's own Cerny claims, is a pain and adds complexity to the hardware; it is not a magic wand that will magically make the crappy GPU inside the Xbox One perform like something much stronger.

So the Xbox One has several things against it:

It has a 6 CU handicap vs the PS4, for more than a 500 GFLOPS difference.

It has a troublesome and complex memory system.

It has DX, which has never been better than Sony's tools for getting the most out of the hardware.

It doesn't have true HSA, it has something like it, and it doesn't have hUMA either.

Remember how people used to claim that the Xbox One had a CPU advantage because GDDR5 wasn't good for the CPU? How did that turn out? Tests even show the PS4 having the faster CPU even while being clocked 100MHz slower.

What people like you don't take into account is that the Xbox One has 3 OSes. Running a guest OS on Hyper-V has a cost in CPU and memory as well, from 9% to 12% CPU overhead; things like this add up a lot.

I don't know if they used Rebellion's criteria and neither do you. All I know is that the game is 1080p when games like AC are 900p on Xbox One, and games like Thief are 900p as well, so you take a guess whether they were using Rebellion's ways or not. I just hope that Rebellion won't have to do like Kojima did and eat their words; I remember how Kojima claimed the difference in MGS5 wasn't great.

The difference in power between the PS4 and Xbox One is small and nothing to worry about, Hideo Kojima has told 4Gamer.

Kojima was at E3 with Metal Gear Solid 5, revealing the game for Xbox One and PS4.

"The difference is small, and I don't really need to worry about it," he said, suggesting versions for Xbox One and PS4 won't be dramatically different.

http://www.videogamer.com/xboxone/metal_gear_solid_5_the_phantom_pain/news/ps4_and_xbox_one_power_difference_is_minimal_says_kojima.html

Remember this ^^^ because I do. Kojima saying the difference was small...

So is 720p with no dynamic clouds vs 1080p with dynamic clouds a small difference to you?

I just hope Rebellion doesn't have to eat their words like Kojima did, because it will be a sad day for you here.

Oh, and who can forget the legendary test done by Richard "MS lover" Leadbetter, where he stated that the Xbox One GPU resembles a 7790, but they picked a 7850 for the Xbox One test and a 7870 XT for the PS4 test, all with the intended purpose of helping the Xbox One's case.

And what was the result? They tested Tomb Raider and the difference was like 12 to 13 frames per second at the same quality in everything.

The PS4 commands up to a 30 FPS advantage over the Xbox One version, and on average is 20 FPS faster; that is up to a 100% gap in frames.

But wait, the Xbox One version also has lower-quality effects, effects at half the resolution, lower-quality textures in some places, and 900p cut scenes. Now this is not small in any way; it is way, way bigger than the gap DF showed for Tomb Raider.

DF could have used a 7790 to represent the Xbox One and downclocked it to 1.31 TF, and used a 7850 and overclocked it to 1.84 TF. It would not have been perfect, but it would have been way better than what they did, because regardless of the downgrade in speed, 16 CU at 600MHz will work better than 12 CU at 857MHz. They chose a GPU that had 4 more CUs than the Xbox One, and one with 6 more CUs than the PS4 as well.

Not only that, the PS4 also received the worse part of the deal, because downclocking the 7870 XT to 600MHz is a freaking huge drop. The 7870 XT is almost 1GHz, so going down to 600MHz means almost a 40% drop, while the 7850 at 860MHz loses just 260MHz from the downgrade versus the 7870 XT's almost 400MHz drop.

The test was a joke, and the actual Tomb Raider game showed that quite easily.

The 7790 would not reflect X1's GPU at its upper limit (with tiling tricks), since the 7790 doesn't have X1's higher memory bandwidth for its memory bandwidth consumers, e.g. color ROPS, MSAA, Z-ROPS, TMUs, compute, etc.

Real-life games have read-and-write ROPS workloads, not pure write operations. Microsoft's fill-rate example combines Z-ROP and color-ROP.

"For example, consider a typical game scenario where the render target is 32bpp [bits per pixel] and blending is disabled, and the depth/stencil surface is 32bpp with Z [depth] enabled. That amounts to 12 bytes of bandwidth needed per pixel drawn (eight bytes write, four bytes read). At our peak fill-rate of 13.65GPixels/s that adds up to 164GB/s of real bandwidth that is needed, which pretty much saturates our ESRAM bandwidth."

"render target is 32bpp [bits per pixel]" = color ROPS.

"depth/stencil surface is 32bpp with Z [depth]" = Z-ROPS, aka depth/stencil ROPS. The letter "Z" denotes depth data, i.e. it's a 3D workload.

If you think color ROP is the only game in town, you might as well ask Sony to remove depth/stencil ROP support.

We are not even factoring in the TMUs' workloads on top of the ROPS' memory consumption.
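Microsoft's worked example checks out arithmetically; a minimal sketch (assuming 16 ROPS at 853 MHz for the 13.65 GPixel/s peak):

```python
# 32bpp color write + 32bpp depth (write plus read for the Z test)
# = 12 bytes per pixel drawn, at the peak fill rate.
ROPS, CLOCK_MHZ = 16, 853
peak_gpixels = ROPS * CLOCK_MHZ * 1e6 / 1e9          # ~13.65 GPixels/s

bytes_per_pixel = 4 + 4 + 4   # color write + Z write + Z read ("eight bytes write, four bytes read")
real_bandwidth_gbs = peak_gpixels * bytes_per_pixel  # ~163.8 GB/s, MS rounds to 164

print(round(peak_gpixels, 2), round(real_bandwidth_gbs, 1))
```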

------------------

Unlike Kojima, Rebellion already claimed specific results, i.e.

1. PS4 = 1920x1080p, 60 fps target with drops to 50-40 fps.

2. X1 = 1920x1080p, 60 fps target with drops slightly below 50-40 fps.

3. Rebellion already stated the specific criteria for their claim, while Kojima was vague, i.e. Kojima didn't factor in ESRAM's best-practice programming requirements.

4. Rebellion is a known AMD Gaming Evolved + Mantle developer, i.e. more experienced with Radeon HD GPUs, and they have the willingness to do the hard programming work, i.e. employ the "tiling tricks" for X1. Also, X1's new SDK helps with Rebellion's claim. You are using older game examples that are outside of Rebellion's criteria for their claim.

#64 Edited by ronvalencia (15573 posts) -

@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@ronvalencia said:

@scatteh316 said:

@navyguy21: If you know your hardware it's quite easy to see the implications.

ESRAM can read and write at the same time but has to miss a cycle in every 8, which is lost performance. The ROPs were built into the daughter die on Xbox 360, making it ideal and perfect for the EDRAM to be used as a frame buffer. It's not the same situation now on Xbox One.

When it comes to running at native 1080p, it's clear to everyone that understands the rendering pipeline that Xbox One is always going to struggle.

And ronvalencia doesn't know anything, he just pastes the same old crap about his Rebellion developers and his silly 290 experiments.

It's you who doesn't know anything. Unlike your source, Rebellion has specifically addressed 1920x1080p. Rebellion developers > you.

You're not factoring in that Xbox 360 is limited to 8 color ROPS at 500MHz (4 Gigapixels/s) and that the connection between the GPU and EDRAM is limited to 32 GB/s.

Unlike Xbox 360, Xbox One's ROPS can write to both DDR3 and ESRAM, and it has 16 ROPS at a higher 853MHz.

From the Hot Chips diagram, Xbox One's peak BW is 204 GB/s for its ESRAM.

My source is a MULTI PLATFORM developer, you bellend, who knows a damn sight more about the inner workings of PS4 and Xbone than you ever will.

My source is a MULTI PLATFORM developer (with AMD Gaming Evolved + Mantle), you bellend, who knows a damn sight more about the inner workings of PS4 and Xbone than you ever will.

The so-called inner workings of PS4 and Xbox One DID NOT deliver superior gaming results over my old 7950, you fool. The "inner workings" of PS4 and Xbox One are a yawn since the PC has AMD's Mantle option.

Unlike your source, Rebellion has specifically addressed the 1920x1080p issue, while you asserted a claim on top of your source.

Fuck sake, will you stop comparing these console GPUs to your PC graphics cards.

They're not in the same fucking environment, not running the same code, and on top of that your 7950 doesn't lose ~20GB/s of fucking bandwidth to the CPU. Have you factored that into your tests?? NO!!... Apples to oranges.... Let's compare the Adreno 330 GPU in my phone while we're at it....

Xbone needs TRICKS to get a 1080p frame buffer with render targets into ESRAM... that is universally known, as Xbox 360 had to do the same thing...... PS4 does not need to do this.

You gimped your 290 to 69GB/s and used that as a comparison, but again, did you factor in the ~20GB/s that your CPU uses independently? That same 290 producing 69GB/s would lose another 20GB/s to the Jaguar CPU, so in a console that 69GB/s would end up being ~49GB/s...

Your tests are very flawed...

If my tests are flawed, then your ROPS info with max paper memory bandwidth is also flawed.

1. You did not factor in the direct connection between the CPU (via FCL) and the GPU (via IOMMU) on the consoles. One of the main points of AMD's Fusion is to reduce memory bandwidth usage by routing some data transfers via AMD's Fusion Link (or a similar link in the consoles). PS4's Onion/Onion+ has dual 10 GB/s links, i.e. for both non-coherent and coherent transfers. X1 has about a 30 GB/s coherent link to the GPU.

During direct CPU-to-GPU transfers (e.g. command list transfers), or while the CPU is working in its cache (e.g. command list generation), the GPU can have its max memory bandwidth (e.g. for TMU, Z-ROPS, ROPS, MSAA).

2. You did not factor in the PC CPU/northbridge's PCI-E 3.0 16X reads and writes to the PC GPU card, i.e. a combined ~32 GB/s rate (16 GB/s each way). This data transfer hits the GPU's crossbar switch and memory controllers. The PC's PCI-E lanes act like the CPU's front-side bus for CPU-to-GPU interactions.

Your AMD Fusion knowledge is very flawed. The AMD GCN-based consoles are not old-school IGPs.
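The ~16 GB/s-each-way PCI-E figure can be sketched from the spec numbers (a sketch: 8 GT/s per lane with 128b/130b encoding, 16 lanes):

```python
# PCI-E 3.0 per-direction bandwidth: transfer rate x lanes x line-code
# efficiency, converted from gigatransfers (bits) to bytes.
GT_PER_S = 8.0
LANES = 16
ENCODING = 128 / 130          # 128b/130b line-code efficiency

per_direction_gbs = GT_PER_S * LANES * ENCODING / 8   # ~15.75 GB/s
both_ways_gbs = 2 * per_direction_gbs                 # ~31.5 GB/s combined
print(round(per_direction_gbs, 2), round(both_ways_gbs, 1))
```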

#65 Posted by scatteh316 (5022 posts) -

@ronvalencia:

If you think the ROP numbers are flawed, then I suggest you tell the DEVELOPER that posted them that he is wrong and you are right.

And all that bullshit you posted below it is completely irrelevant to my points. Why did you not address my queries? Instead you just came out with complete crap and tried to change the subject to other areas of the machines' subsystems.

#66 Edited by ReadingRainbow4 (16203 posts) -

@-RocBoys9489- said:

wtf is going on; buy a PC ya twats

Damn That's a pretty large oversight, it just keeps getting worse and worse for the bone the more that's discovered.

#67 Posted by tormentos (20183 posts) -

@ronvalencia said:

The 7790 would not reflect X1's GPU at its upper limit (with tiling tricks), since the 7790 doesn't have X1's higher memory bandwidth for its bandwidth consumers, e.g. color ROPs, MSAA, Z-ROPs, TMUs, compute, etc.

Real-life games have read-and-write ROP operations, not pure writes. Microsoft's fill-rate example combines Z-ROP and color-ROP.

"For example, consider a typical game scenario where the render target is 32bpp [bits per pixel] and blending is disabled, and the depth/stencil surface is 32bpp with Z [depth] enabled. That amounts to 12 bytes of bandwidth needed per pixel drawn (eight bytes write, four bytes read). At our peak fill-rate of 13.65GPixels/s that adds up to 164GB/s of real bandwidth that is needed, which pretty much saturates our ESRAM bandwidth."

"render target is 32bpp [bits per pixel]" = color ROPs.

"depth/stencil surface is 32bpp with Z [depth]" = Z-ROPs, aka the depth/stencil ROP. The letter "Z" denotes depth data, i.e. it's a 3D workload.

If you think the color ROP is the only game in town, you might as well ask Sony to remove depth/stencil ROP support.

We are not even factoring in the TMUs' workloads on top of the ROPs' memory consumption.
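Microsoft's quoted fill-rate figure can be checked against the numbers in the quote itself (a 4-byte color write and a 4-byte depth write per pixel, plus a 4-byte depth read, at a 13.65 GPixels/s peak fill rate):

```python
# Reproduce the arithmetic in Microsoft's fill-rate example:
# color write (4 B) + depth write (4 B) = 8 bytes written per pixel,
# depth read (4 B) = 4 bytes read, i.e. 12 bytes of traffic per pixel.
BYTES_PER_PIXEL = 8 + 4            # writes + reads, from the quote
PEAK_FILL_GPIXELS = 13.65          # quoted peak fill rate, GPixels/s

bandwidth_gbs = PEAK_FILL_GPIXELS * BYTES_PER_PIXEL
print(bandwidth_gbs)  # 163.8, i.e. the "164GB/s" figure in the quote
```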

------------------

Unlike Kojima, Rebellion already claimed specific results, i.e.

1. PS4 = 1920x1080p, 60 fps target with drops to 50-40 fps.

2. X1 = 1920x1080p, 60 fps target, slightly slower than the PS4's 50-40 fps drops.

3. Rebellion already stated the specific criteria for their claim, while Kojima was vague, i.e. Kojima didn't factor in ESRAM's best-practice programming requirements.

4. Rebellion is a known AMD Gaming Evolved + Mantle developer, i.e. more experienced with Radeon HD GPUs and willing to do the hard programming work, i.e. employ the "tiling tricks" for X1. Also, X1's new SDK helps Rebellion's claim. You are using older game examples that fall outside the criteria for Rebellion's claim.

Yeah, let's ignore that the Xbox One doesn't reflect the 7790's full power, which is even worse.

WTF, man, stop being so in denial and such an ass-kisser for MS. The 7790 doesn't have 150GB/s from ESRAM, but the Xbox One doesn't have 2GB of 106GB/s bandwidth; it has 32MB of small, fast memory. Oh wait, it doesn't have 1.79TF, it doesn't have a 1GHz clock or 14 CUs. Stop: the Xbox One GPU is gimped vs the 7790. The PS4 can have 200GB/s and still have less power than a 7870; that is a FACT.

So the 7790 doesn't reflect the Xbox One's bandwidth? Yeah, so what; the Xbox One doesn't reflect the 7790's power either, so a 7790 with 106GB/s will outperform the Xbox One. Period.

So you are grasping at straws when you try to hide behind something as trivial as ESRAM; you have been saying that for months and you were OWNED.

Tomb Raider is 1080p on both platforms: 30 FPS on Xbox One vs up to 60 FPS on PS4, on average 20 frames faster, with higher-quality effects on PS4 as well as 1080p cut scenes. Your ESRAM theories bombed, and until Rebellion shows something impressive running equally on both platforms you are still owned. Kojima said the same, and the difference was a 100% higher resolution on PS4 plus effects not found in the Xbox One version; that isn't small in any way.

Hell, you want to know something? I know the Xbox One doesn't have a ROP limitation; in fact the 7790 has 16 ROPs and can hit 1080p quite well in many games, and even the 7770 does it better than the Xbox One.

http://www.anandtech.com/bench/product/777?vs=776

Even if at some quality settings it falls to 20 FPS, the 7770 can hit 1080p, so the ROPs aren't the problem. ESRAM is, has been since launch, and will be until the end of the generation, because it is too small; the Xbox One has to sacrifice something to hit 1080p.

The hypocrisy in your argument is simple: while you want people to see that the 7790 lacks some things the Xbox One has, you want people to ignore that the Xbox One doesn't have the power or speed of a full 7790 regardless of how much bandwidth it has. Bandwidth is a problem only when your GPU becomes severely starved by it, which isn't the case with the 7790. The Xbox One has more bandwidth than the 7770, which has roughly equivalent power, and the Xbox One should actually do better than the 7770 because it has 2 more CUs; even though both are close in flops, the extra CUs work better than the 7770's higher clock.

DF should have used the 7790 and downclocked it, and overclocked the 7850, but that didn't fit their purpose of making the Xbox One look better than it would at launch.

http://www.videogamer.com/ps4/metal_gear_solid_ground_zeroes/news/metal_gear_solid_5_ground_zeroes_looks_slightly_better_on_ps4_than_xbox_one_kojima.html

Yeah, Kojima later said the PS4 version looks slightly better.

2- Rebellion can make the game identical; it means little without seeing the quality of the game's effects. BF4 is not the same as Titanfall graphics-wise; I am sure the PS4 could run it at 1080p with no problem while the Xbox One chokes at little higher than 720p.

3- You don't know that; please link me to Kojima saying that they didn't use ESRAM effectively. You are assuming, and your assumptions have been totally wrong before.

4- Yeah, funny how Rebellion is now the be-all and end-all of developers... lol. I saw the game demoed on PC by Rebellion and it is not even impressive, so spare me the "Rebellion are gods and no other developer is using ESRAM well"; you sound like a blind lemming.

The fact still stands: the PS4 is stronger, it always will be, and the Xbox One will never catch up. Keep waiting.

#68 Edited by ronvalencia (15573 posts) -

@tormentos said:

@ronvalencia said:

The 7790 would not reflect X1's GPU at its upper limit (with tiling tricks), since the 7790 doesn't have X1's higher memory bandwidth for its bandwidth consumers, e.g. color ROPs, MSAA, Z-ROPs, TMUs, compute, etc.

Real-life games have read-and-write ROP operations, not pure writes. Microsoft's fill-rate example combines Z-ROP and color-ROP.

"For example, consider a typical game scenario where the render target is 32bpp [bits per pixel] and blending is disabled, and the depth/stencil surface is 32bpp with Z [depth] enabled. That amounts to 12 bytes of bandwidth needed per pixel drawn (eight bytes write, four bytes read). At our peak fill-rate of 13.65GPixels/s that adds up to 164GB/s of real bandwidth that is needed, which pretty much saturates our ESRAM bandwidth."

"render target is 32bpp [bits per pixel]" = color ROPs.

"depth/stencil surface is 32bpp with Z [depth]" = Z-ROPs, aka the depth/stencil ROP. The letter "Z" denotes depth data, i.e. it's a 3D workload.

If you think the color ROP is the only game in town, you might as well ask Sony to remove depth/stencil ROP support.

We are not even factoring in the TMUs' workloads on top of the ROPs' memory consumption.

------------------

Unlike Kojima, Rebellion already claimed specific results, i.e.

1. PS4 = 1920x1080p, 60 fps target with drops to 50-40 fps.

2. X1 = 1920x1080p, 60 fps target, slightly slower than the PS4's 50-40 fps drops.

3. Rebellion already stated the specific criteria for their claim, while Kojima was vague, i.e. Kojima didn't factor in ESRAM's best-practice programming requirements.

4. Rebellion is a known AMD Gaming Evolved + Mantle developer, i.e. more experienced with Radeon HD GPUs and willing to do the hard programming work, i.e. employ the "tiling tricks" for X1. Also, X1's new SDK helps Rebellion's claim. You are using older game examples that fall outside the criteria for Rebellion's claim.

Yeah, let's ignore that the Xbox One doesn't reflect the 7790's full power, which is even worse.

WTF, man, stop being so in denial and such an ass-kisser for MS. The 7790 doesn't have 150GB/s from ESRAM, but the Xbox One doesn't have 2GB of 106GB/s bandwidth; it has 32MB of small, fast memory. Oh wait, it doesn't have 1.79TF, it doesn't have a 1GHz clock or 14 CUs. Stop: the Xbox One GPU is gimped vs the 7790. The PS4 can have 200GB/s and still have less power than a 7870; that is a FACT.

So the 7790 doesn't reflect the Xbox One's bandwidth? Yeah, so what; the Xbox One doesn't reflect the 7790's power either, so a 7790 with 106GB/s will outperform the Xbox One. Period.

So you are grasping at straws when you try to hide behind something as trivial as ESRAM; you have been saying that for months and you were OWNED.

Tomb Raider is 1080p on both platforms: 30 FPS on Xbox One vs up to 60 FPS on PS4, on average 20 frames faster, with higher-quality effects on PS4 as well as 1080p cut scenes. Your ESRAM theories bombed, and until Rebellion shows something impressive running equally on both platforms you are still owned. Kojima said the same, and the difference was a 100% higher resolution on PS4 plus effects not found in the Xbox One version; that isn't small in any way.

Hell, you want to know something? I know the Xbox One doesn't have a ROP limitation; in fact the 7790 has 16 ROPs and can hit 1080p quite well in many games, and even the 7770 does it better than the Xbox One.

http://www.anandtech.com/bench/product/777?vs=776

Even if at some quality settings it falls to 20 FPS, the 7770 can hit 1080p, so the ROPs aren't the problem. ESRAM is, has been since launch, and will be until the end of the generation, because it is too small; the Xbox One has to sacrifice something to hit 1080p.

The hypocrisy in your argument is simple: while you want people to see that the 7790 lacks some things the Xbox One has, you want people to ignore that the Xbox One doesn't have the power or speed of a full 7790 regardless of how much bandwidth it has. Bandwidth is a problem only when your GPU becomes severely starved by it, which isn't the case with the 7790. The Xbox One has more bandwidth than the 7770, which has roughly equivalent power, and the Xbox One should actually do better than the 7770 because it has 2 more CUs; even though both are close in flops, the extra CUs work better than the 7770's higher clock.

DF should have used the 7790 and downclocked it, and overclocked the 7850, but that didn't fit their purpose of making the Xbox One look better than it would at launch.

http://www.videogamer.com/ps4/metal_gear_solid_ground_zeroes/news/metal_gear_solid_5_ground_zeroes_looks_slightly_better_on_ps4_than_xbox_one_kojima.html

Yeah, Kojima later said the PS4 version looks slightly better.

2- Rebellion can make the game identical; it means little without seeing the quality of the game's effects. BF4 is not the same as Titanfall graphics-wise; I am sure the PS4 could run it at 1080p with no problem while the Xbox One chokes at little higher than 720p.

3- You don't know that; please link me to Kojima saying that they didn't use ESRAM effectively. You are assuming, and your assumptions have been totally wrong before.

4- Yeah, funny how Rebellion is now the be-all and end-all of developers... lol. I saw the game demoed on PC by Rebellion and it is not even impressive, so spare me the "Rebellion are gods and no other developer is using ESRAM well"; you sound like a blind lemming.

The fact still stands: the PS4 is stronger, it always will be, and the Xbox One will never catch up. Keep waiting.

2. Again, you didn't read Rebellion's statement.

Again,

For PS4: 1920x1080 with a 60 fps target and frame rate drops to 50-40 fps.

For Xbox One: 1920x1080 with a 60 fps target, with the Xbox One being slightly slower than the PS4, i.e. slightly slower than the PS4's 50-40 fps drops.

My 8870M can render the same effects as my R9-290/R9-290X, but there's a large frame rate difference.

PS4's ability to run Titanfall at 1920x1080 is not the issue.

3. Rebellion already explained why most Xbox One games are less than 1920x1080p, and it's more than just the ESRAM issue. You're not factoring in the timeline difference between MGS5 and Sniper Elite 3.

Your "Fact" is LOL, since Rebellion already addressed this POV with the Xbox One being "slightly slower" than the PS4. Again, you are not reading Rebellion's statements.

So, you question Rebellion's credibility based on their last-gen console artwork issues? Didn't you know comparative artwork discussions are inherently subjective? PC hardware benchmark practice = quantitative numbers with a common program base, NOT some Bachelor of Arts-type discussion.

Rebellion has written its own 3D engine, which is more than the majority of game developers do, i.e. most license from 3D engine vendors and bias their R&D budgets towards artwork.

#69 Edited by btk2k2 (440 posts) -

@ronvalencia said:

2. Again, you didn't read Rebellion's statement.

Again,

For PS4: 1920x1080 with a 60 fps target and frame rate drops to 50-40 fps.

For Xbox One: 1920x1080 with a 60 fps target, with the Xbox One being slightly slower than the PS4, i.e. slightly slower than the PS4's 50-40 fps drops.

My 8870M can render the same effects as my R9-290/R9-290X, but there's a large frame rate difference.

PS4's ability to run Titanfall at 1920x1080 is not the issue.

3. Rebellion already explained why most Xbox One games are less than 1920x1080p, and it's more than just the ESRAM issue. You're not factoring in the timeline difference between MGS5 and Sniper Elite 3.

Your "Fact" is LOL, since Rebellion already addressed this POV with the Xbox One being "slightly slower" than the PS4. Again, you are not reading Rebellion's statements.

So, you question Rebellion's credibility based on their last-gen console artwork issues? Didn't you know comparative artwork discussions are inherently subjective? PC hardware benchmark practice = quantitative numbers with a common program base, NOT some Bachelor of Arts-type discussion.

Rebellion has written its own 3D engine, which is more than the majority of game developers do, i.e. most license from 3D engine vendors and bias their R&D budgets towards artwork.

I remember that prior to release these dev statements were used by you and other lems to show that the Xbox One is not that far behind the PS4. The issue is that people who know about technology and GPUs are not moved by appeals to authority when we know full well what the specification differences imply. Now here you are trotting out another developer statement in another appeal to authority, and it does not work on me, because I know what the specs imply, I know that the statement is incomplete/vague, and I also know what has happened with past appeals to authority.

It is possible that Rebellion have managed to minimise the differences, but there could be a multitude of reasons for it: perhaps the game does not have a very aggressive IQ target, so it is easier on the hardware, making 60FPS easier to achieve; perhaps the PS4 version has more special effects or enhanced AA over the Xbox One version on top of the frame rate boost; maybe what Rebellion mean by 'slightly slower' is more PR than an actual reflection of reality; or could it be a combination of these? Until we have the game and can compare it, we do not know.

Also, I have no idea why you jumped on Tormentos's comment when he said that the game did not look that great on PC; he did not even state whether it was due to the artwork or to some other aspect of IQ that is not subjective.

#70 Edited by tormentos (20183 posts) -

@btk2k2 said:

I remember that prior to release these dev statements were used by you and other lems to show that the Xbox One is not that far behind the PS4. The issue is that people who know about technology and GPUs are not moved by appeals to authority when we know full well what the specification differences imply. Now here you are trotting out another developer statement in another appeal to authority, and it does not work on me, because I know what the specs imply, I know that the statement is incomplete/vague, and I also know what has happened with past appeals to authority.

It is possible that Rebellion have managed to minimise the differences, but there could be a multitude of reasons for it: perhaps the game does not have a very aggressive IQ target, so it is easier on the hardware, making 60FPS easier to achieve; perhaps the PS4 version has more special effects or enhanced AA over the Xbox One version on top of the frame rate boost; maybe what Rebellion mean by 'slightly slower' is more PR than an actual reflection of reality; or could it be a combination of these? Until we have the game and can compare it, we do not know.

Also, I have no idea why you jumped on Tormentos's comment when he said that the game did not look that great on PC; he did not even state whether it was due to the artwork or to some other aspect of IQ that is not subjective.

The best part of his argument is how he wants to picture that, because the Xbox One GPU has more bandwidth, it would automatically perform like a stronger GPU, in this case the 7790, which has no problem hitting 1080p under better conditions than the Xbox One.

So for him it doesn't matter that the Xbox One GPU isn't 1.79 TF like the 7790, or that the Xbox One has 12 working CUs when the 7790 has 14, or that the Xbox One GPU is clocked at 853MHz while the 7790 runs at 1GHz; no, all that doesn't matter, the only thing that matters is that ESRAM gives the Xbox One more bandwidth.

Having more bandwidth on a weak GPU is like having a five-lane race track while your car is a tuned-down Hyundai Accent, not even the normal version, just a tuned-down one; so instead of reaching 110mph you get 90mph, but you have a five-lane race track, so maybe you can catch up with that Hyundai Elantra that has a stronger motor, better wheels and more racing customisation.

#71 Edited by ronvalencia (15573 posts) -
@scatteh316 said:

@ronvalencia:

If you think the ROP numbers are flawed, then I suggest you tell the DEVELOPER that posted them that he is wrong and you are right.

And all that bullshit you posted below it is completely irrelevant to my points. Why did you not address my queries? Instead you just came out with complete crap and tried to change the subject to other areas of the machines' subsystems.

The BS is from your post.

On the PC, you haven't factored in PCI-E's bandwidth consumption into the GPU card's VRAM.

The consoles have other means to bypass memory bandwidth consumption issues for the CPU, i.e. the direct link between the CPU (via an FCL-type link) and the GPU (via the IOMMU).

An example of GDDR5's overheads, from https://www.skhynix.com/products/graphics/view.jsp?info.ramKind=26&info.serialNo=H5GQ2H24AFR

Programmable CAS latency: 5 to 20 tCK

Programmable WRITE latency: 1 to 7 tCK

32ms auto refresh (16k cycles).

Any overheads will result in less than paper-spec numbers.

There's a reason why register-level storage uses the fastest known memory technology, and it's not DDRx/XDRx-based memory.
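As an illustration of how such overheads shave real bandwidth below the paper spec: the 32 ms / 16k-cycle auto-refresh interval is from the datasheet above, but the per-refresh busy time (tRFC) used below is an assumed, hypothetical value, since the post does not quote one.

```python
# Illustrative only: refresh overhead vs. paper bandwidth.
# The 32 ms / 16k-cycle refresh interval comes from the SK Hynix
# datasheet linked above; the per-refresh busy time (tRFC) is an
# ASSUMED value here, not a datasheet figure.

REFRESH_WINDOW_S = 32e-3      # all rows refreshed within 32 ms
REFRESH_CYCLES = 16 * 1024    # 16k auto-refresh commands per window
TRFC_S = 110e-9               # assumed ~110 ns busy time per refresh

busy = REFRESH_CYCLES * TRFC_S          # time the bus is unavailable
overhead = busy / REFRESH_WINDOW_S      # fraction lost to refresh
effective = 320.0 * (1.0 - overhead)    # e.g. a 320 GB/s paper spec
print(f"{overhead:.1%} lost -> {effective:.0f} GB/s effective")
```

With these assumed numbers roughly 5-6% of the paper bandwidth disappears to refresh alone, before CAS/WRITE latency, bank conflicts or read/write turnaround are counted.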

#72 Edited by ronvalencia (15573 posts) -
@btk2k2 said:

@ronvalencia said:

2. Again, you didn't read Rebellion's statement.

Again,

For PS4: 1920x1080 with a 60 fps target and frame rate drops to 50-40 fps.

For Xbox One: 1920x1080 with a 60 fps target, with the Xbox One being slightly slower than the PS4, i.e. slightly slower than the PS4's 50-40 fps drops.

My 8870M can render the same effects as my R9-290/R9-290X, but there's a large frame rate difference.

PS4's ability to run Titanfall at 1920x1080 is not the issue.

3. Rebellion already explained why most Xbox One games are less than 1920x1080p, and it's more than just the ESRAM issue. You're not factoring in the timeline difference between MGS5 and Sniper Elite 3.

Your "Fact" is LOL, since Rebellion already addressed this POV with the Xbox One being "slightly slower" than the PS4. Again, you are not reading Rebellion's statements.

So, you question Rebellion's credibility based on their last-gen console artwork issues? Didn't you know comparative artwork discussions are inherently subjective? PC hardware benchmark practice = quantitative numbers with a common program base, NOT some Bachelor of Arts-type discussion.

Rebellion has written its own 3D engine, which is more than the majority of game developers do, i.e. most license from 3D engine vendors and bias their R&D budgets towards artwork.

I remember that prior to release these dev statements were used by you and other lems to show that the Xbox One is not that far behind the PS4. The issue is that people who know about technology and GPUs are not moved by appeals to authority when we know full well what the specification differences imply. Now here you are trotting out another developer statement in another appeal to authority, and it does not work on me, because I know what the specs imply, I know that the statement is incomplete/vague, and I also know what has happened with past appeals to authority.

It is possible that Rebellion have managed to minimise the differences, but there could be a multitude of reasons for it: perhaps the game does not have a very aggressive IQ target, so it is easier on the hardware, making 60FPS easier to achieve; perhaps the PS4 version has more special effects or enhanced AA over the Xbox One version on top of the frame rate boost; maybe what Rebellion mean by 'slightly slower' is more PR than an actual reflection of reality; or could it be a combination of these? Until we have the game and can compare it, we do not know.

Also, I have no idea why you jumped on Tormentos's comment when he said that the game did not look that great on PC; he did not even state whether it was due to the artwork or to some other aspect of IQ that is not subjective.

Unlike the other devs, Rebellion has stated specific criteria for their claims, which goes beyond the paper specs. The other game with stated tiling support is Doom 4.

Your "game does not have a very aggressive IQ target" statement didn't factor in Rebellion's stated frame rate drops, i.e. "50-40" fps for PS4 with the Xbox One being "slightly slower" than the PS4. "Slightly slower" will be defined when they ship the product.

If Rebellion increases the IQ, you'll get more frame rate drops on either console.

Last-gen games are influenced by last-gen console hardware.

I can force SSAA on most PC games, but their artwork would remain the same. This is why the PC has 3rd-party texture mod packs.

#73 Posted by blackace (21405 posts) -

@scatteh316: Don't believe everything you read.

#74 Posted by scatteh316 (5022 posts) -

@blackace said:

@scatteh316: Don't believe everything you read.

I pick and choose what to believe based on common sense... If I deem it to be possible bullshit, I ignore it.

#75 Posted by RitsukoBlue (28 posts) -

The bottom line is... whoever designed the Xbox One (whoever picked the parts), Microsoft should have that person dragged into the middle of a street and shot in both kneecaps!!

#76 Edited by btk2k2 (440 posts) -

@ronvalencia said:
@btk2k2 said:

@ronvalencia said:

2. Again, you didn't read Rebellion's statement.

Again,

For PS4: 1920x1080 with a 60 fps target and frame rate drops to 50-40 fps.

For Xbox One: 1920x1080 with a 60 fps target, with the Xbox One being slightly slower than the PS4, i.e. slightly slower than the PS4's 50-40 fps drops.

My 8870M can render the same effects as my R9-290/R9-290X, but there's a large frame rate difference.

PS4's ability to run Titanfall at 1920x1080 is not the issue.

3. Rebellion already explained why most Xbox One games are less than 1920x1080p, and it's more than just the ESRAM issue. You're not factoring in the timeline difference between MGS5 and Sniper Elite 3.

Your "Fact" is LOL, since Rebellion already addressed this POV with the Xbox One being "slightly slower" than the PS4. Again, you are not reading Rebellion's statements.

So, you question Rebellion's credibility based on their last-gen console artwork issues? Didn't you know comparative artwork discussions are inherently subjective? PC hardware benchmark practice = quantitative numbers with a common program base, NOT some Bachelor of Arts-type discussion.

Rebellion has written its own 3D engine, which is more than the majority of game developers do, i.e. most license from 3D engine vendors and bias their R&D budgets towards artwork.

I remember that prior to release these dev statements were used by you and other lems to show that the Xbox One is not that far behind the PS4. The issue is that people who know about technology and GPUs are not moved by appeals to authority when we know full well what the specification differences imply. Now here you are trotting out another developer statement in another appeal to authority, and it does not work on me, because I know what the specs imply, I know that the statement is incomplete/vague, and I also know what has happened with past appeals to authority.

It is possible that Rebellion have managed to minimise the differences, but there could be a multitude of reasons for it: perhaps the game does not have a very aggressive IQ target, so it is easier on the hardware, making 60FPS easier to achieve; perhaps the PS4 version has more special effects or enhanced AA over the Xbox One version on top of the frame rate boost; maybe what Rebellion mean by 'slightly slower' is more PR than an actual reflection of reality; or could it be a combination of these? Until we have the game and can compare it, we do not know.

Also, I have no idea why you jumped on Tormentos's comment when he said that the game did not look that great on PC; he did not even state whether it was due to the artwork or to some other aspect of IQ that is not subjective.

Unlike the other devs, Rebellion has stated specific criteria for their claims, which goes beyond the paper specs. The other game with stated tiling support is Doom 4.

Your "game does not have a very aggressive IQ target" statement didn't factor in Rebellion's stated frame rate drops, i.e. "50-40" fps for PS4 with the Xbox One being "slightly slower" than the PS4. "Slightly slower" will be defined when they ship the product.

If Rebellion increases the IQ, you'll get more frame rate drops on either console.

Last-gen games are influenced by last-gen console hardware.

I can force SSAA on most PC games, but their artwork would remain the same. This is why the PC has 3rd-party texture mod packs.

What specific statement? "Tiling tricks"? That is as vague as vague can be, and all it alluded to was that 1080p could be achieved, even if the render target was > 32MB, if they were to manage the ESRAM in a clever way.

You do realise that you can tank frame rates with things other than graphical settings: complicated physics modelling will tank FPS and show no real IQ benefit, although it would make the world more believable. Then there is optimisation, where they can turn down some of the very expensive graphical effects with no apparent loss in IQ so they can turn up a few cheaper ones, improving overall IQ and frame rates at the same time.

If you were to make a game using ray tracing, the sacrifices you would have to make in other areas of IQ to get the frame rate playable would more than outweigh the gain ray tracing has over other lighting models, resulting in a net IQ loss for any given frame rate.

I never talked about last-gen games, so why do you bring them up?

What does this have to do with my statement or Tormentos's statement? Tormentos said the PC version does not look great; was he talking about a subjective opinion of the artwork, or an objective opinion on the use of dynamic lighting, AA or some other aspect of IQ? I have no idea, because he has not stated either way.

@ronvalencia said:
@scatteh316 said:

@ronvalencia:

If you think the ROP numbers are flawed, then I suggest you tell the DEVELOPER that posted them that he is wrong and you are right.

And all that bullshit you posted below it is completely irrelevant to my points. Why did you not address my queries? Instead you just came out with complete crap and tried to change the subject to other areas of the machines' subsystems.

The BS is from your post.

On the PC, you haven't factored in PCI-E's bandwidth consumption into the GPU card's VRAM.

The consoles have other means to bypass memory bandwidth consumption issues for the CPU, i.e. the direct link between the CPU (via an FCL-type link) and the GPU (via the IOMMU).

An example of GDDR5's overheads, from https://www.skhynix.com/products/graphics/view.jsp?info.ramKind=26&info.serialNo=H5GQ2H24AFR

Programmable CAS latency: 5 to 20 tCK

Programmable WRITE latency: 1 to 7 tCK

32ms auto refresh (16k cycles).

Any overheads will result in less than paper-spec numbers.

There's a reason why register-level storage uses the fastest known memory technology, and it's not DDRx/XDRx-based memory.

Ron, this seems to contradict your claim that your PC benchmarks of Tomb Raider are valid for showing that the Xbox One will not be affected by the low bandwidth or the low ROP count at 1080p.

Secondly, all he stated was that the ROP performance figures came from a developer at a developer conference, so it is a trustworthy source. It has nothing at all to do with the other stuff you mentioned in your previous post, and it has nothing at all to do with what you are talking about here.

Could you ever admit that you were mistaken? Incorrect? Wrong? Is that possible for you, or do you have some sort of condition where, if someone says you are wrong, you have to argue with them until they give up? The thing is, though, in the next thread discussing hardware you start using what they said, because you know it is correct, but since it is a new thread you do not have to admit you were wrong. It just seems strange to see you running in circles all the time, changing the subject or resorting to logical fallacies on a regular basis.

#77 Posted by tormentos (20183 posts) -

@blackace said:

@scatteh316: Don't believe everything you read.

Especially if it comes from you, who makes bold wrong claims... lol

@ronvalencia said:

Unlike the other devs, Rebellion has stated specific criteria for their claims, which goes beyond the paper specs. The other game with stated tiling support is Doom 4.

Your "game does not have a very aggressive IQ target" statement didn't factor in Rebellion's stated frame rate drops, i.e. "50-40" fps for PS4 with the Xbox One being "slightly slower" than the PS4. "Slightly slower" will be defined when they ship the product.

If Rebellion increases the IQ, you'll get more frame rate drops on either console.

Last-gen games are influenced by last-gen console hardware.

I can force SSAA on most PC games, but their artwork would remain the same. This is why the PC has 3rd-party texture mod packs.

Rebellion's claims weren't even that specific. Did they claim both games would look exactly the same while running at those frame rates? Did they say the IQ was the same, and all the effects the same, without any sacrifices?

So the claim is 1080p, 50 to 40 fps on PS4, slightly lower on Xbox One; you know that means nothing, right?

You can have a game run at 1080p at almost the same frame rates with the image quality being different, right?

For example, the 7770 can run Crysis 3 at 1080p on low detail with textures on medium at a 48FPS average, but if you set the settings to high and textures to high, the 7770 runs at a 25 FPS average. So you see, having 1080p on both versions at close frame rates means nothing; the PS4 version could be mid-to-high settings while the Xbox One version is low-to-mid settings.

Once again, if Rebellion fails, you will eat those quotes with some rich crow... lol

#78 Posted by Solid_Max13 (3548 posts) -

@btk2k2: unfortunately he won't; he's going to come back, say your post is BS and make more Rebellion claims with links, even though you and tormentos pretty much asked for and brought up legit stats

#79 Edited by ronvalencia (15573 posts) -

@btk2k2 said:
@ronvalencia said:

Unlike the other devs, Rebellion has stated specific criteria for their claims which is beyond the paper specs. The other game with stated tiling support is Doom 4.

Your "game does not have a very aggressive IQ target" statement didn't factor in Rebellion's stated frame rate drops i.e "50-40" fps for PS4 and Xbox One being "slightly slower" than PS4. "Slightly slower" will be defined when they ship the product.

If Rebellion increases the IQ, you'll get more frame rate drops on either console.

Last-gen games are influenced by last-gen console hardware.

I can force SSAA on most PC games, but their artwork would remain the same. This is why the PC has 3rd party texture mod packs.

What specific statement? "Tiling tricks"? That is as vague as vague can be, and all it alluded to was that 1080p could be achieved, even if the render targets totalled more than 32MB, by managing the ESRAM in a clever way.
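(For reference, a rough back-of-the-envelope check of why a full set of 1080p render targets can overflow 32MB of ESRAM. The target layouts here are illustrative assumptions, not any shipped engine's actual setup:)

```python
# Rough render-target sizing at 1920x1080 (uncompressed, no MSAA).
# These layouts are illustrative assumptions, not any shipped engine's.
WIDTH, HEIGHT = 1920, 1080
ESRAM_MB = 32

def target_mb(bytes_per_pixel):
    """Size of one full-screen target in MB."""
    return WIDTH * HEIGHT * bytes_per_pixel / (1024 * 1024)

color = target_mb(4)         # RGBA8 back buffer
depth = target_mb(4)         # 32-bit depth/stencil
gbuffer = 3 * target_mb(4)   # three extra RGBA8 G-buffer targets

print(f"color + depth:           {color + depth:.1f} MB (fits in {ESRAM_MB} MB)")
print(f"color + depth + gbuffer: {color + depth + gbuffer:.1f} MB (overflows, hence tiling)")
```

A plain forward-rendered frame fits comfortably; a deferred-style set of targets does not, which is exactly the case where tiling the targets through ESRAM comes up.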

You do realise that you can tank frame rates with things other than graphical settings; complicated physics modelling will tank FPS and show no real IQ benefit, although it would make the world more believable. Then there is optimisation, where they can turn down some of the very expensive graphical effects with no apparent loss in IQ, so they can turn up a few cheaper ones to improve overall IQ and, at the same time, improve frame rates.

If you were to make a game using ray tracing, the sacrifices you would have to make in other areas of IQ to get the frame rate playable would more than outweigh the gain ray tracing has over other lighting models, resulting in a net IQ loss at any given frame rate.

I never talked about last gen games so why do you bring it up?

What does this have to do with my statement or Tormentos's statement? Tormentos said the PC version does not look great; was he talking about a subjective opinion of the artwork, or an objective opinion on the use of dynamic lighting, AA or some other aspect of IQ? I have no idea, because he has not stated either way.

--------------------------------

Ron, this seems to contradict your claims that your PC benchmarks of Tomb Raider are valid when using them to show that the Xbox One will not be affected by the low bandwidth or the low ROP count at 1080p.

Secondly all he stated was that the ROP performance figures came from a developer at a developer conference so it is a trustworthy source. It has nothing at all to do with the other stuff you mentioned in your previous post and it has nothing at all to do with what you are talking about here.

Could you ever admit that you were mistaken? Incorrect? Wrong? Is that possible for you, or do you have some sort of condition where, if someone says you are wrong, you have to argue with them until they give up? The thing is, though, in the next thread discussing hardware you start using what they said, because you know it is correct, but since it is a new thread you do not have to admit you were wrong. It just seems strange to see you running in circles all the time, changing the subject or resorting to logical fallacies on a regular basis.

Have you realised that 32 ROPS > 16 ROPS doesn't automatically equal 2X the frame rate?

"You do realise that you can tank frame rates with other things that graphical settings, complicated physics modelling will tank FPS and show no real IQ benefit although it would make the world more believable"

My point with 16 ROPS (~17 ROPS effective, scaled to 800Mhz) vs 32 ROPS is that color ROPS are NOT the only GPU bandwidth consumer. Your pure color-ROPS argument removes the GPU's TMUs, Z/depth-ROPS and MSAA hardware from the picture. Do you enjoy pure color fill rate benchmarks without depth/Z?

For R7-265 vs R7-260, read http://www.anandtech.com/show/7754/the-amd-radeon-r7-265-r7-260-review-feat-sapphire-asus/8

My Tomb Raider 2013 benchmarks with 69 GB/s memory bandwidth (which gimps the R9-290X's 64 ROPS) still render at 1920x1080p at ~35fps. That demonstrates a native 1920x1080p render.

The difference between Rebellion (Sniper Elite 2/3, AMD Gaming Evolved titles on the PC) and Avalanche Studios (Just Cause 2, an NVIDIA 'The Way It's Meant To Be Played' title on the PC):

1. Rebellion has specifically addressed 1920x1080p rendering argument.

2. Avalanche only stated the ROPS count X being less than memory bandwidth for certain situations, and nothing specific on 1920x1080p.

Scatteh316 added 1920x1080p on Avalanche's statements.

Rebellion claimed it can use ESRAM's higher memory bandwidth for 1920x1080p, which flies in the face of the "16 ROPS is not enough for 1920x1080p" argument. Rebellion didn't state that 16 ROPS is a big issue for ESRAM's higher memory bandwidth and 1920x1080p rendering.

Rebellion's statement > Scatteh316's interpretation.
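For what it's worth, the raw peak numbers both sides keep arguing over reduce to simple arithmetic (ROP counts and clocks as publicly reported; this says nothing about blending, reads, or real-world efficiency):

```python
# Peak color fill rate = ROPS x core clock, in pixels per second;
# multiplying by 4 bytes gives the bandwidth an RGBA8 write stream
# would demand at that peak. Clock figures as publicly reported.
def fill_gpix(rops, mhz):
    return rops * mhz * 1e6 / 1e9          # GPixels/s

def rgba8_gbs(rops, mhz):
    return fill_gpix(rops, mhz) * 4        # GB/s for 4-byte pixels

x1 = (16, 853)    # Xbox One: 16 ROPS @ 853MHz
ps4 = (32, 800)   # PS4: 32 ROPS @ 800MHz

print(f"X1:  {fill_gpix(*x1):.2f} GPix/s -> {rgba8_gbs(*x1):.1f} GB/s of RGBA8 writes")
print(f"PS4: {fill_gpix(*ps4):.2f} GPix/s -> {rgba8_gbs(*ps4):.1f} GB/s of RGBA8 writes")
```

The X1 figure is the same 13.65 GPixels/s peak Microsoft cites in its own fill-rate example.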

#80 Edited by btk2k2 (440 posts) -

@ronvalencia said:

Have you realised that 32 ROPS > 16 ROPS doesn't automatically equal 2X the frame rate?

For R7-265 vs R7-260, read http://www.anandtech.com/show/7754/the-amd-radeon-r7-265-r7-260-review-feat-sapphire-asus/8

My Tomb Raider 2013 benchmarks with 69 GB/s memory bandwidth (which gimps the R9-290X's 64 ROPS) still render at 1920x1080p at ~35fps.

When did I ever say that? Go on, find one instance in my entire post history where I have made that statement.

It shows that, at a certain performance level, more ROPs and more bandwidth are a larger performance differentiator than shaders, TMUs and triangle setup rates are.

The Xbox One does not just have low ROP and low bandwidth numbers, the whole GPU is low end (from a gaming perspective).

#81 Posted by SecretPolice (23551 posts) -

ROP's can now RIP since once again, MS reset the table this gen. :P

#82 Edited by Infinity8378 (183 posts) -

@ronvalencia said:

@tormentos said:

Yes they are very specific: ESRAM is too small, and in order to do 1080p they need to do tricks and tiling. That means cutting corners, we all know it; it is you who doesn't want to admit it because, well, you are too hard headed.

Tomb Raider shattered all my benchmarks; the gap is way wider than the charts I used to post from Anandtech, so yeah, I was RIGHT and you weren't. ESRAM, like I claimed, like some developers claimed, like leaks claimed, and even like Sony's own Cerny claimed, is a pain and adds complexity to the hardware. It is not a magic wand that will magically make the crappy GPU inside the Xbox One perform like something much stronger.

So the Xbox One has several things against it:

A 6 CU handicap vs the PS4, for more than a 500 Gflops difference.

It has a troublesome and complex memory system.

It has DX, which has never been better than Sony's tools for getting the most out of the hardware.

It doesn't have true HSA, it has something like it, and it doesn't have hUMA either.

Remember how people used to claim that the Xbox One had a CPU advantage because GDDR5 wasn't good for CPUs? How did that turn out? Tests even show the PS4 having the faster CPU while being clocked 100mhz slower.

What people like you don't take into account is that the Xbox One has 3 OSes. Running a guest OS on Hyper-V has a cost in CPU and memory as well, from 9% to 12% CPU overhead; things like this add up a lot.

I don't know if they used Rebellion's criteria, and neither do you. All I know is that the game is 1080p when games like AC are 900p on Xbox One, and games like Thief are 900p as well, so take a guess whether or not they were using Rebellion's methods. I just hope Rebellion doesn't have to do what Kojima did and eat their words; I remember how Kojima claimed the difference in MGS5 wasn't great.

The difference in power between the PS4 and Xbox One is small and nothing to worry about, Hideo Kojima has told 4Gamer.

Kojima was at E3 with Metal Gear Solid 5, revealing the game for Xbox One and PS4.

"The difference is small, and I don't really need to worry about it," he said, suggesting versions for Xbox One and PS4 won't be dramatically different.

http://www.videogamer.com/xboxone/metal_gear_solid_5_the_phantom_pain/news/ps4_and_xbox_one_power_difference_is_minimal_says_kojima.html

Remember this ^^^ because I do. Kojima saying the difference was small...

So is 720p with no dynamic clouds vs 1080p with dynamic clouds a small difference to you?

I just hope Rebellion doesn't have to eat their words like Kojima did, because it will be a sad day for you here.

Oh, and who can forget the legendary test done by MS lover Richard Leadbetter, where he stated that the Xbox One GPU resembles a 7790, but they picked a 7850 for the Xbox One test and a 7870 XT for the PS4 test, all with the intended purpose of helping the Xbox One's case.

And what was the result? They tested Tomb Raider and the difference was like 12 to 13 frames per second at the same quality in everything.

The PS4 commands up to a 30 FPS advantage over the Xbox One version, and on average is 20 FPS faster; that is up to a 100% gap in frames.

But wait, the Xbox One version also has lower quality effects, effects at half resolution, lower quality textures in some places, and 900p cut scenes. Now, this is not small in any way; it is way, way waaaaaaaaayyyyyyyyy bigger than the gap DF showed for Tomb Raider.

DF could have used a 7790 to represent the Xbox One, downclocked to 1.31TF, and used a 7850 overclocked to 1.84TF for the PS4. It would not have been perfect, but it would have been way better than what they did, because regardless of the downgrade in speed, 16 CUs at 600mhz will work better than 12 CUs at 857mhz. They chose a GPU that had 4 more CUs than the Xbox One, and one with 6 more CUs than the PS4 as well.

Not only that, the PS4 also received the worse part of the deal, because downclocking the 7870 XT to 600mhz is a freaking huge drop: the 7870 XT runs at almost 1ghz, so going down to 600mhz means almost a 40% drop, while the 7850 at 860mhz loses just 260mhz from its downclock versus the 7870 XT's almost 400mhz drop.

The test was a joke and the actual Tomb Raider game showed that quite easily.

A 7790 would not reflect X1's GPU at its upper limit (with tiling tricks), since the 7790 doesn't have X1's higher memory bandwidth for its memory bandwidth consumers, e.g. color ROPS, MSAA, Z-ROPS, TMUs, compute, etc.

Real-life games have ROPS reads and writes, not pure write operations. Microsoft's fill-rate example combines Z-ROP and color-ROP.

"For example, consider a typical game scenario where the render target is 32bpp [bits per pixel] and blending is disabled, and the depth/stencil surface is 32bpp with Z [depth] enabled. That amounts to 12 bytes of bandwidth needed per pixel drawn (eight bytes write, four bytes read). At our peak fill-rate of 13.65GPixels/s that adds up to 164GB/s of real bandwidth that is needed which pretty much saturates our ESRAM bandwidth."

"render target is 32bpp [bits per pixel]" = color ROPS.

"depth/stencil surface is 32bpp with Z [depth]" = Z-ROPS, aka the depth/stencil ROP. The letter "Z" denotes depth data, i.e. it's a 3D workload.

If you think color ROP is the only game in town, you might as well ask Sony to remove depth/stencil ROP support.

We are not even factoring in TMU workloads on top of the ROPS' memory bandwidth consumption.
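Microsoft's worked example quoted above reduces to one line of arithmetic (a sketch using only the figures in the quote):

```python
# Reproduce the quoted fill-rate example: a 32bpp color write plus a
# 32bpp depth/stencil read-modify-write is 12 bytes per pixel drawn.
peak_fill_gpix = 13.65               # GPixels/s (16 ROPS @ ~853MHz)
bytes_per_pixel = 4 + 4 + 4          # color write + depth write + depth read

bandwidth_gbs = peak_fill_gpix * bytes_per_pixel
print(f"{bandwidth_gbs:.1f} GB/s needed at peak fill rate")   # ~164 GB/s
```

That lands on the ~164 GB/s figure Microsoft says "pretty much saturates" ESRAM, which is the point: the depth traffic, not just color writes, is what eats the bandwidth.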

------------------

Unlike Kojima, Rebellion already claimed specific results i.e.

1. PS4 = 1920 x 1080p, 60 fps target with 50-40 fps drop.

2. X1 = 1920 x 1080p, 60 fps target with slightly slower than 50-40 fps drop.

3. Rebellion already stated the specific criteria for their claim, while Kojima was vague, i.e. Kojima didn't factor in ESRAM's best-practice programming requirements.

4. Rebellion is a known AMD Gaming Evolved+Mantle developer, e.g. more experience with Radeon HD GPUs, and they have the willingness to do the hard programming work, i.e. employ the "tiling tricks" for X1. Also, X1's new SDK helps with Rebellion's claim. You are using older game examples that are outside of Rebellion's criteria for their claim.

A render target of 32bpp is woefully inefficient; using spherical coordinates instead of wasteful full RGBA saves you tons. Spherical unit vectors can be used to represent depth/stencil and render targets, and the dot product of them is simple; from that you can find the cross product, which is geometrically defined. That leaves you with ~8-bit u, 8-bit v on the target and 8-bit u, 8-bit v on the stencil. For optimization, only take the forward-facing angles, fractionally batch by 1/2, and compress 4 to 1; that makes it 7-bit by 7-bit by 6-bit by 6-bit holding an 8-bit value. That gives you fast EDRAM speeds for solving the dot/cross product, takes advantage of the every-90-degrees axis symmetry in the dot product, and consumes only ~4MB out of 32, with enough room for whatever else (20MB). The compute units on XBone, or the compressed TEV textures/CUs on Wii U, can handle the booleans; this is all TMUs or ROPs are.

The problem with PS4 is they didn't include large enough shared pools of EDRAM, which is the fastest form of memory in the world. They broke it up like a hydra with too many heads, and it's going to be hard to get them all to work together. So given the 560-1000GB/s speed of EDRAM, you could at max calculate the dot product the necessary 2-3 times every 180/512 degrees instead of the usual 180/2048: 248 billion times per second, ~19 billion times, or 19 GTexels per second rounded, with 16*9 GTexels/second + 19 GTexels/second. The total decompression takes 2*4.194304/550 = 0.766% of the GPU; that means it takes ~5.14e-10 s to decompress 4 to 1, less than the time of loading header files and the additional file size on PS4 (~5.62e-12 s header access time; ~0.009 ms change in total file access time). Compression means you have to access the files individually, which is what smaller pools of RAM excel at.

So a proper render target would be 16bpp and 16bpp, which renders only in front of the screen and cuts that in half to reach 7 bits per 90-degree spherical render targets. x = sin(u)cos(v), y = sin(u)sin(v), z = cos(u) allows two variables to define a vector in space. If u and v are 7 bits, and you use hardware for any small amount of rendering behind the camera and EDRAM to quickly solve rendering half the front of the screen at all times, you save time. 4-to-1 decompression turns 8 bits into 4x8 bits in RAM and allows you to per-pixel render more effectively.
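(Editor's aside: the one recoverable idea in the post above, packing a unit vector into two small spherical angles, can be sketched as below. The 8-bit widths and the sample normal are assumptions for illustration, not Infinity8378's exact scheme, and the quantization error shown is the cost such packing trades for its 2-byte footprint:)

```python
import math

# Pack a unit vector into two 8-bit spherical angles and unpack it,
# to show both the 2-byte footprint and the quantization error such a
# scheme trades for it. Byte widths and the sample normal are
# assumptions for illustration.
def encode(x, y, z):
    u = math.acos(max(-1.0, min(1.0, z)))     # polar angle in [0, pi]
    v = math.atan2(y, x) % (2 * math.pi)      # azimuth in [0, 2*pi)
    return round(u / math.pi * 255), round(v / (2 * math.pi) * 255)

def decode(ub, vb):
    u = ub / 255 * math.pi
    v = vb / 255 * 2 * math.pi
    # x = sin(u)cos(v), y = sin(u)sin(v), z = cos(u)
    return (math.sin(u) * math.cos(v),
            math.sin(u) * math.sin(v),
            math.cos(u))

n = (0.267, 0.535, 0.802)                     # roughly unit-length normal
err = math.dist(n, decode(*encode(*n)))
print(f"stored in 2 bytes, round-trip error: {err:.4f}")
```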

#84 Posted by Infinity8378 (183 posts) -

@ronvalencia said:

*snip: same tormentos quote, ronvalencia reply and spherical-coordinates argument as in the previous post*

Oh yeah, so the pixel fillrate for Wii U would be 24 GTexels/s (1/4) [4MB] or 40.4 GTexels/s (1/2) [8MB],

and for Xbox One 48.3 [4MB] or 56.2 [8MB].

#85 Posted by Silent-Assasin7 (1496 posts) -

Consolites are talking about ROPs? Lmao. How many ROPs does it take to get a good game on the PS4?