Nothing specific on 1920x1080p. All it states is that color-ROPS throughput is less than memory bandwidth. You haven't factored in the Z-ROPS, which are 4X the color-ROPS count.
Unlike TC's post, Rebellion's statement was specific to 1920x1080p native output.
R9-290's 64 ROPS at 947MHz would still be "ROPS bound" with RGBA8 (integer) against 320 GB/s memory bandwidth, e.g. the math: 947MHz x 64 x 4 bytes = 242 GB/s.
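The fill-rate math above can be sketched as a few lines of Python. This is paper-spec arithmetic only, as noted below; it ignores Z-ROPs, color compression, and every other consumer of memory bandwidth:

```python
def rop_write_bandwidth_gbs(clock_mhz: float, rops: int, bytes_per_pixel: int = 4) -> float:
    """Peak color-ROP write bandwidth in GB/s for an RGBA8 (4 bytes/pixel) target."""
    return clock_mhz * 1e6 * rops * bytes_per_pixel / 1e9

# R9-290: 64 color-ROPS at 947 MHz writing 4-byte RGBA8 pixels
r9_290 = rop_write_bandwidth_gbs(947, 64)
print(f"R9-290 color-ROP write demand: {r9_290:.0f} GB/s vs 320 GB/s memory")  # ~242 GB/s
```

So the peak color-ROP write rate (~242 GB/s) sits under the card's 320 GB/s memory bandwidth, which is the "ROPS bound" point being made.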
The big flaw with their math is the hardware color decompression/compression factor. There's more to it than the simple math, i.e. the GPU has to fit compute, TMU, Z-ROP, color-ROP, etc. workloads into the finite memory bandwidth.
JIT decompression and compression hardware will throw off any simple memory bandwidth vs ROPS math. From http://www.bit-tech.net/hardware/graphics/2007/05/16/r600_ati_radeon_hd_2900_xt/8
If you notice, AMD's ROPS unit has multiple read and write ports. TC's post focused on the blend section, i.e. the color-ROPS. AMD increased the ratio of Z-ROPS (depth/stencil ROPS) to color-ROPS (blend ROPS) from 2:1 to 4:1.
Again, I can gimp my R9-290 down to 69 GB/s memory bandwidth and still deliver 1920x1080p, e.g. Tomb Raider 2013 at 35.5 fps on Ultimate settings (with TressFX). I can increase my fps with shadow resolution set to normal and texture quality set to high.
ROP bound, period. A developer says it, move on.. lol
I love how you only give credit to the arguments that serve you best, even selectively quoting sources, so the part you like is true, like ROPS not being a problem as DF claims, but then you go into complete denial on things like Bonaire being inside the Xbox One, which DF also claims.
You can't have it both ways: either DF is good for both claims or it isn't good for any.
@ronvalencia: you clearly understand tech, but the issue is what many devs have stated and how they utilize the tech. In this case 1080p is not viable right now on the Xbone and won't be for some time, until they can utilize the hardware properly. Tomb Raider was a last-gen game and isn't hard to get to 1080p, but the newer games are where you'll see the drop.
The developer he quotes, Rebellion, says the ESRAM is too small and they have to do tricks and use tiling. Tricks means changing things: lowering assets, turning off effects or doing them at half resolution like in Tomb Raider's case, and/or decreasing frame rate or reducing resolution.
My argument is backed up by game performance; so far his arguments are backed up by nothing. I have been quoting benchmarks here from Anandtech for months, telling Ronvalencia what the difference would be, and he refuses to admit it. He claims the 7770 didn't represent the Xbox One's cache, ESRAM, bandwidth and bus, so in his eyes all that would change the world. In reality it didn't, and the Xbox One acted even worse than a 7770, which can actually hit 1080p in quite a lot of games at certain quality settings.
Hell, the gap I used to post here, from the 7850 to the 7770, was even smaller than the gap between the Xbox One and PS4. Games like Tomb Raider running at the same quality on the 7770 don't run as much as 30 FPS slower while having lower-resolution effects, 900p cut scenes and worse textures, like the Xbox One does vs the PS4. The real-life gap is what talks.
Sorry, Rebellion's statement is specific to the 1920x1080p rendering issue, while TC's source information just says "ROPS throughput is lower than memory bandwidth".
TC asserted that 16 ROPS at 853MHz is the main gimp factor for 1920x1080p rendering.
The PC 7770 SKU will stay within its 72 GB/s limitation, with zero option for an ESRAM boost.
Does Tomb Raider DE on Xbox One utilize Rebellion's specific criteria for its 1920x1080p results?
DF also has a down-clocked 7870 XT at 600MHz = PS4 and a down-clocked 7850 at 600MHz = X1. Read http://www.eurogamer.net/articles/digitalfoundry-can-xbox-one-multi-platform-games-compete-with-ps4
Unlike DF's 7870 XT at 600MHz vs 7850 at 600MHz, the prototype 7850 maintains 1.32 TFLOPS = 12 CUs at 860MHz clock speed. DF has approximated X1's ESRAM (with tiling tricks) with PC SKUs.
Note that DDRx/GDDRx has overheads, e.g. data-integrity refresh cycles, so any paper-spec memory bandwidth math will not hold true.
Oddworld claims 172 GB/s for PS4's memory bandwidth, while TC's information claims 176 GB/s (a number derived from pure paper spec).
Since GDDR5 is not a super-low-latency memory, the 176 GB/s claim for PS4 (or for any DDR-type memory) is a joke. No chip vendor uses GDDR5 as its register storage memory type. GDDR5 will not reach its paper-spec memory bandwidth.
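For reference, here is where the 176 GB/s paper-spec number comes from, and how it compares to Oddworld's quoted 172 GB/s. This assumes the publicly stated PS4 config of a 256-bit bus at 5.5 Gbps per pin:

```python
# Paper-spec GDDR5 bandwidth: bus width (bits) x data rate per pin / 8 bits per byte
bus_bits = 256          # PS4's GDDR5 bus width
gbps_per_pin = 5.5      # 5500 MT/s per pin
paper_gbs = bus_bits * gbps_per_pin / 8   # = 176.0 GB/s paper spec

oddworld_gbs = 172.0    # figure quoted above
efficiency = oddworld_gbs / paper_gbs
print(f"paper: {paper_gbs} GB/s, Oddworld's figure is {efficiency:.1%} of paper spec")
```

Even Oddworld's 172 GB/s would be nearly 98% of the theoretical peak; real-world effective bandwidth under mixed read/write traffic lands lower still once refresh and bus-turnaround overheads are paid.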