Have Navi and Ryzen 3000 got your hopes up for next-gen consoles?

#51 Edited by ronvalencia (28230 posts) -

@rdnav2 said:
@ronvalencia said:

https://forum.beyond3d.com/threads/next-generation-hardware-speculation-with-a-technical-spin-post-e3-2019.61245/page-8

Alright, I broke out the measuring tape and worked out some things about Navi and Anaconda die sizes on 7nm.

5700:

GDDR6 PHY controller: 4.5 mm² × 8

Dual CU: 3.37 mm² × 20

4-ROP cluster: 0.55 mm² × 16

L1 + L2 + ACEs + geometry processor + empty buffer space, etc.: 139 mm²

Now Anaconda:

A rougher estimate, using the 12 × 14 mm GDDR6 chips next to the SOC as a size reference:

370–390 mm².

It's a bit bigger than the 1X SOC for sure.

If we use the figure of 380 mm²:

75 mm² for the CPU

45 mm² for 10 GDDR6 controllers

8.8 mm² for ROPs

140 mm² for buses, caches, ACEs, geometry processors and so on. I might be overestimating this part, as the 5700 seems to have a lot of "empty" area.

We have ~110 mm² left for CUs + RT hardware. There is enough there for ~30 dual CUs plus RT extensions.

Conclusion:

The Anaconda SOC is around the minimum size you need to fit the maximum Navi GPU and Zen2 cores.

I expect Anaconda to have a minimum of 48 CUs if the secret sauce is extra heavy, or 60 CUs if the sauce is light.

370 mm² to 390 mm² estimated.
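A quick back-of-envelope check of the quoted area budget (a minimal sketch; all figures are the forum poster's estimates, not official numbers):

```python
# Area budget for the rumored "Anaconda" SOC, using the poster's per-block estimates (mm^2).
total_die = 380.0
cpu       = 75.0
gddr6_phy = 10 * 4.5        # ten GDDR6 PHY controllers at 4.5 mm^2 each
rops      = 16 * 0.55       # sixteen 4-ROP clusters at 0.55 mm^2 each
uncore    = 140.0           # buses, caches, ACEs, geometry processors, etc.

left_for_cus = total_die - cpu - gddr6_phy - rops - uncore
dual_cu_area = 3.37
print(left_for_cus)                   # ~111 mm^2 left for CUs + RT hardware
print(left_for_cus / dual_cu_area)    # ~33 dual CUs before RT extensions eat into it
```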

60 CUs would be crazy.

Anything above 48 CUs would be gravy.

I'm hoping for at least 52 CUs if the "sauce is light".

If AMD/MS doesn't increase the Shader Engine count in line with the compute unit and memory controller count increases, it will run into the same bottleneck problem as Vega II.

NVIDIA's six-GPC designs across the 11 to 17 TFLOPS compute range are not dead weight.

NVIDIA's GPC is roughly equivalent to AMD's Shader Engine (SE).

#52 Edited by ronvalencia (28230 posts) -

@rdnav2 said:

Power consumption doesn’t look like a problem

https://youtu.be/7_GvOe1_UKs

With the X1X, Microsoft employs automatic per-chip VRM under-volting based on each APU's silicon quality.

https://www.tested.com/tech/825498-xbox-one-x-and-hovis-method/

One of the more interesting aspects of the Xbox One X's technical design is the Hovis Method, a hardware design process that customizes the amount of power needed for each individual One X console.

The processors put into video game consoles are subject to the same imperfections as desktop and laptop processors. However, Microsoft, Sony, and Nintendo typically make their consoles to a single, one size fits all specification and the hardware isn't as flexible as a desktop computer. One of these specifications is power consumption. Measure the power draw of any original Xbox One console and you'll get the same value. The Xbox One X bucks this trend though, and Microsoft's implementation of the Hovis Method means that different One X consoles can consume significantly different amounts of power. The variance in power draw also has implications for how the cooling system performs.

Rather than have a single power profile for all consoles manufactured, which would result in some generating excess heat by taking more power than components require, each Scorpio Engine processor has a custom power profile programmed onto the motherboard it's paired with in the factory. This process is referred to as the Hovis Method, named after Xbox engineer Bill Hovis. This means that Microsoft is able to net better yields of chips as opposed to a standard building process. Every system is highly power efficient, and the processors that require just a little more extra juice are now usable. For the consumer, this means that any two Xbox One X consoles quite literally aren't the same. Yes, they will all of course hit the same clock frequencies, data speeds, and everything else needed to run games identically. Some consoles however will draw more power than others, including my own.

I put my Xbox One X Scorpio Edition through a battery of tests, pushing the console to its limits, and recorded its power draw as well as noise produced. First let's talk power consumption. One of the peak power draw scenarios I observed was playing Gears of War 4 at 4K. (The most taxing section of the game I observed was Act 3 Chapter 1, which has a lot of foliage and volumetric lighting.) This AAA first party shooter had the console pulling power as high as 189 W. This is 14 W more than what Digital Foundry recorded, and 15 W more than GameSpot when playing the same game. In fact, my power draw for Gears 4 was regularly higher than Digital Foundry's 175 W. Running the game in 1080p mode I recorded a peak of 153 W, higher than GameSpot's 144 W and significantly higher than the 128 W from Digital Foundry. Even when idling on the dashboard my console seems to draw more power; 52 W compared to Digital Foundry's 50 W. I don't know the specifics of Digital Foundry's, GameSpot's, or anyone else's testing methods.

AMD's RX 5700 XT doesn't have this feature.

MS's approach with the X1X enabled its GPU design to reach 6 TFLOPS as an alternative to the desktop RX 580, while the X1X consumes less power.

X1X GPU's base clock speed is 93 percent of RX-580's base clock speed.

The RX 5700 XT nearly duplicates the RX 580's TDP behavior.

(93 percent × 1605 MHz) × 1.15 (TSMC 7nm+ improvement) × 44 CUs yields ~9.7 TFLOPS.

(93 percent × 1605 MHz) × 1.15 (TSMC 7nm+ improvement) × 48 CUs yields ~10.5 TFLOPS.

RX 5700 XT has 9.75 TFLOPS.
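For reference, a minimal sketch of how those TFLOPS figures fall out of the assumptions above (93 percent of the 5700 XT's 1605 MHz base clock plus a 15 percent 7nm+ uplift are the poster's assumptions):

```python
# Peak FP32 throughput: CUs x 64 stream processors x 2 FLOPs/clock (FMA) x clock.
def navi_tflops(cu_count, clock_mhz):
    return cu_count * 64 * 2 * clock_mhz / 1e6

clock = 0.93 * 1605 * 1.15                # ~1717 MHz assumed effective clock
print(round(navi_tflops(44, clock), 2))   # ~9.67 TFLOPS
print(round(navi_tflops(48, clock), 2))   # ~10.55 TFLOPS
```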

#53 Edited by ronvalencia (28230 posts) -

@KBFloYd said:

1080TI?

lol you wish

shit will be a 2060 super. some crap like that.

The CPU will be underclocked to save heat.

4K 30fps at medium settings, or 1800p checkerboarded to 4K at 30fps, medium/high settings.

1400p checkerboarded to 4K at 60fps, high settings.

X1X has "Variable Rate Shading" like feature. https://www.eurogamer.net/articles/digitalfoundry-2017-the-scorpio-engine-in-depth

Andrew Goossen tells us that the GPU supports extensions that allow depth and ID buffers to be efficiently rendered at full native resolution, while colour buffers can be rendered at half resolution with full pixel shader efficiency. Based on conversations last year with Mark Cerny, there is some commonality in approach here with some of the aspects of PlayStation 4 Pro's design, but we can expect some variation in customisations - despite both working with AMD, we're reliably informed that neither Sony or Microsoft are at all aware of each other's designs before they are publicly unveiled.

The depth buffer is the render buffer that carries the geometry, so geometry stays at full native resolution.

Microsoft's "Variable Rate Shading" approach is better than Sony's hardware checkerboard resolve on PS4 Pro.

NVIDIA Turing also has "Variable Rate Shading" feature https://developer.nvidia.com/vrworks/graphics/variablerateshading

"Variable Rate Shading" (VRS) would be one of the factors that enabled X1X to jump 2X pixel resolution over PS4 Pro (increase TFLOPS and increase resolution scaled from PS4) and VRS trick is hard to pixel count by normal Digital Foundry methods due to geometry edges are rendered at native resolution.

Next year's "RDNA 2" includes both "Variable Rate Shading" and hardware accelerated DXR.

The X1X hardware may not directly support checkerboarding like the PS4 Pro's hardware checkerboard resolve feature, hence the X1X may emulate checkerboard resolve in software.

https://www.guru3d.com/articles_pages/palit_geforce_rtx_2060_super_jetstream_review,13.html

The RX 5700 XT rivals the GTX 1080 Ti in an NVIDIA-sponsored game, i.e. Metro Exodus.

The RX 5700 XT is still weak under heavy geometry loads, but it's better than the last GCN-based Vega II.

From http://benchmark.finalfantasyxv.com/result/ at standard quality, 1440p (standard quality has a lighter geometry load).

-----

[Embedded video]

In terms of fake 4K getting close to native 4K, Navi's 1800p + Radeon Image Sharpening beats NVIDIA's 1440p + DLSS.

#54 Edited by ronvalencia (28230 posts) -

@Grey_Eyed_Elf said:

Why are you people measuring die size as if it matters most?...

X1X managed to get higher than RX 480 CU counts because the RX 480 was a 150w TDP card and all they got was 4 CU's more so, but the 5700 XT is a 225w TDP card... Higher CU than a XT?... Yeah not happening.

Based on TDP of Navi the next generation of consoles are looking at 5700 Pro with dedicated ray tracing cores.

52-60CU's?... GTFO, those cards when released on PC will be 250-300w GPU's running at 94c. Its just not happening on a console.

Navi is hot and runs higher TDP than polaris with the same CU counts

  • 5700 - 36 CU - 180w TDP - 84c with a better blower cooler
  • 480 - 36 CU - 150w TDP - 79c with a worse blower cooler

A 52-60CU Navi chip on a APU with additional ray tracing cores design to be cooled with one fan and run at 150w in a plastic box?... You guys are retarded.

  • 40 CU with 4 CU's disabled = Worst case
  • 44 CU with 4 CU's disabled = Most likely
  • 48 CU with 4 CU's disabled = best case

There is a reason why Microsoft didn't talk about teraflops during their reveal event because their chips will not be higher than XT levels, it might have the same CU count but would have to lower clocks to get those CU's which would lower the TFLOP count to 9-10.

X1X got higher CU's than a RX 480 which was low powered GPU and the X1X runs at lower clocks just to get those extra CU's which is why they are both 6TFLOP, the XT is not a low powered GPU at all its TDP is closer to a 1080 Ti than it is a 1060.

Just look at the difference 4 extra CU's does to Navi in terms of TDP 5700 vs XT:

60CUs?...

You guys are drunk.

https://www.techpowerup.com/review/amd-radeon-rx-5700/29.html

At 4K, the RX 5700's performance per watt lands somewhere between the RTX 2060 and the RTX 2070 Super.

Temperature values depend on the cooling solution, so arguing from temperatures is flawed.

----------

https://www.tomshardware.com/reviews/amd-radeon-rx-580-review,5020-6.html

The RX-580 (~6.1 TFLOPS) actually draws about 209 watts in typical gaming, exceeding AMD's 185-watt TDP claim.

In terms of average power consumption, the RX-580 is similar to the RX 5700 XT's 219 watts.

https://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-9.html

RX 480's real power consumption:

Running Metro Last Light, the RX 480 (~5.8 TFLOPS) draws 164 watts, which exceeds AMD's 150-watt TDP claim.

The X1X GPU consumes about 155 watts for 6 TFLOPS plus a 2 MB render cache and four extra GDDR5 memory modules, hence the extra R&D time let the X1X GPU rival or beat (at 4K) the desktop RX 580 while consuming less power.

MS used Vega's perf/watt silicon maturity for the X1X's GPU.

In terms of silicon maturity, the X1X is based on Vega, i.e.:

Vega 64 LC draws 295 watts for 13 TFLOPS at boost clock (11.5 TFLOPS at base clock).

The X1X GPU draws ~150 watts for 6 TFLOPS at base clock.

Vega 64's TFLOPS-vs-TDP ratio scales from the X1X GPU's logic design maturity!

The RX-580's 6 TFLOPS needs 185 watts. Polaris 20 has inferior TFLOPS per watt compared to the X1X GPU and Vega 64.

Polaris 10/20 can't scale to Vega 64's TFLOPS vs TDP ratios.
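A minimal sketch of the TFLOPS-per-watt comparison being drawn here (the wattages are the figures quoted in this post, which mix TDP claims and measured draw, so treat the output as rough):

```python
# Rated TFLOPS divided by the quoted power figure for each part.
parts = {
    "Vega 64 LC (boost)": (13.0, 295.0),
    "X1X GPU":            (6.0, 150.0),
    "RX 580":             (6.0, 185.0),
}
for name, (tflops, watts) in parts.items():
    print(f"{name}: {tflops / watts * 1000:.0f} GFLOPS per watt")
# ~44, ~40 and ~32 GFLOPS/W respectively -- the poster's point is that the X1X GPU
# sits much closer to Vega than to Polaris on this metric.
```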

  • My Vega context for X1X is for Vega's TDP vs TFLOPS scaling.
  • X1X GPU has design concepts from Vega e.g. ROPS with multi-MB high speed cache. X1X GPU's 2MB L2 cache + 2MB render cache = Vega 56/64's 4 MB L2 cache.

PS4 Pro GPU's 2.3X gain over 28 nm GCN is based on 1st gen Polaris electron leakage mitigation maturity.

Based on history and MS's hardware accelerated ray-tracing claim, Xbox Scarlet would be using "RDNA 2" perf/watt improvements.

I don't support NAVI 60 CU for game consoles.

#55 Posted by Litchie (24316 posts) -

I expect the new consoles to focus a lot on buzzwords like 4K to sell to gullible people, instead of focusing on good performance. I have no plans to get a PS5 or NextBox. Consoles are pretty awful nowadays, unless they're made by Nintendo who actually cares about how their games run.

#56 Posted by DrLostRib (5059 posts) -

no it just makes me want to upgrade my PC

#57 Edited by ronvalencia (28230 posts) -

@rdnav2:

[Embedded video]

Ryzen 7 3700X vs Core i7-9700K

#58 Posted by RDNAv2 (35 posts) -

@Grey_Eyed_Elf said:

Why are you people measuring die size as if it matters most?...

X1X managed to get higher than RX 480 CU counts because the RX 480 was a 150w TDP card and all they got was 4 CU's more so, but the 5700 XT is a 225w TDP card... Higher CU than a XT?... Yeah not happening.

Based on TDP of Navi the next generation of consoles are looking at 5700 Pro with dedicated ray tracing cores.

52-60CU's?... GTFO, those cards when released on PC will be 250-300w GPU's running at 94c. Its just not happening on a console.

Navi is hot and runs higher TDP than polaris with the same CU counts

  • 5700 - 36 CU - 180w TDP - 84c with a better blower cooler
  • 480 - 36 CU - 150w TDP - 79c with a worse blower cooler

A 52-60CU Navi chip on a APU with additional ray tracing cores design to be cooled with one fan and run at 150w in a plastic box?... You guys are retarded.

  • 40 CU with 4 CU's disabled = Worst case
  • 44 CU with 4 CU's disabled = Most likely
  • 48 CU with 4 CU's disabled = best case

There is a reason why Microsoft didn't talk about teraflops during their reveal event because their chips will not be higher than XT levels, it might have the same CU count but would have to lower clocks to get those CU's which would lower the TFLOP count to 9-10.

X1X got higher CU's than a RX 480 which was low powered GPU and the X1X runs at lower clocks just to get those extra CU's which is why they are both 6TFLOP, the XT is not a low powered GPU at all its TDP is closer to a 1080 Ti than it is a 1060.

Just look at the difference 4 extra CU's does to Navi in terms of TDP 5700 vs XT:

60CUs?...

You guys are drunk.

Why so hostile? You're probably one of those guys who was positive Navi was GCN. You probably thought a next-gen console with Vega 64 performance was impossible.

Am I wrong?

And FYI, the 5700 XT draws more power than the 5700 because it's clocked higher.

#59 Edited by ronvalencia (28230 posts) -

@rdnav2 said:
@Grey_Eyed_Elf said:

Why are you people measuring die size as if it matters most?...

X1X managed to get higher than RX 480 CU counts because the RX 480 was a 150w TDP card and all they got was 4 CU's more so, but the 5700 XT is a 225w TDP card... Higher CU than a XT?... Yeah not happening.

Based on TDP of Navi the next generation of consoles are looking at 5700 Pro with dedicated ray tracing cores.

52-60CU's?... GTFO, those cards when released on PC will be 250-300w GPU's running at 94c. Its just not happening on a console.

Navi is hot and runs higher TDP than polaris with the same CU counts

  • 5700 - 36 CU - 180w TDP - 84c with a better blower cooler
  • 480 - 36 CU - 150w TDP - 79c with a worse blower cooler

A 52-60CU Navi chip on a APU with additional ray tracing cores design to be cooled with one fan and run at 150w in a plastic box?... You guys are retarded.

  • 40 CU with 4 CU's disabled = Worst case
  • 44 CU with 4 CU's disabled = Most likely
  • 48 CU with 4 CU's disabled = best case

There is a reason why Microsoft didn't talk about teraflops during their reveal event because their chips will not be higher than XT levels, it might have the same CU count but would have to lower clocks to get those CU's which would lower the TFLOP count to 9-10.

X1X got higher CU's than a RX 480 which was low powered GPU and the X1X runs at lower clocks just to get those extra CU's which is why they are both 6TFLOP, the XT is not a low powered GPU at all its TDP is closer to a 1080 Ti than it is a 1060.

Just look at the difference 4 extra CU's does to Navi in terms of TDP 5700 vs XT:

60CUs?...

You guys are drunk.

Why so hostile? You’re probably one of those guys who was positive Navi was GCN. You probably thought a next gen Console with Vega 64 performance was impossible.

Am I wrong?

And FYI 5700 XT draws more power than 5700 because it’s clocked higher..

A CU increase without a corresponding primitive unit increase creates performance bottlenecks.

A Vega 56 at 1710 MHz (12 TFLOPS) already beats a Strix Vega 64 at 1590 MHz (13 TFLOPS), because the higher clock speed also speeds up the raster hardware.

The RX 5700 XT has Vega II's raster power (four primitive units, 64 ROPs) with improved back-face culling (8 triangles in, 4 triangles out) and texture filtering equivalent to 80 GCN CUs. Reducing the CU count to 40 reduces power consumption.

40 CU / 60 CU = 66.67 percent.

66.67 percent × Vega II's 295 watts = ~197 watts, hence the RX 5700 XT follows Vega II's silicon maturity with its 40 CU scaling.
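A one-line version of that scaling, for reference (the poster's figures; in practice CU count is not the only driver of board power, since clocks and voltage matter at least as much):

```python
# Scale Vega II's (Radeon VII) board power down by the CU ratio.
vega2_watts, vega2_cus = 295.0, 60
navi10_cus = 40

scaled_watts = vega2_watts * navi10_cus / vega2_cus
print(round(scaled_watts, 1))   # ~196.7 W, the figure the post compares against the 5700 XT
```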

RDNA's wave32, with single-cycle issue, and its wave64, with two-cycle issue, reduce the need for GCN's latency-hiding software optimizations, which have to hide GCN's four-cycle wave64 issue latency.

The RX 5700 has two shader engines with four primitive units on a 256-bit bus.

The 7870 has two shader engines with two primitive units on a 256-bit bus. The 7870 was later scaled into the Hawaii XT R9-290X, which has four shader engines with four primitive units (four triangles in, four out).

RX-5700 XT is setup for another Hawaii XT style scaling.

The next game console speculation would be

1. NAVI with dual shader engines and 256 bit bus.

2. NAVI with dual shader engines and 320bit/384 bit bus.

3. NAVI with three shader engines and 384 bit bus.

CU can scale independently.

PC could have NAVI with four shader engines and 512 bit bus e.g. RX-5900 XT.

Scaling the shader engines is important for raster power, which is the main strength of GPUs like the RTX 2080 Ti and RTX 2080.

#60 Posted by Grey_Eyed_Elf (6493 posts) -

@rdnav2 said:
@Grey_Eyed_Elf said:

Why are you people measuring die size as if it matters most?...

X1X managed to get higher than RX 480 CU counts because the RX 480 was a 150w TDP card and all they got was 4 CU's more so, but the 5700 XT is a 225w TDP card... Higher CU than a XT?... Yeah not happening.

Based on TDP of Navi the next generation of consoles are looking at 5700 Pro with dedicated ray tracing cores.

52-60CU's?... GTFO, those cards when released on PC will be 250-300w GPU's running at 94c. Its just not happening on a console.

Navi is hot and runs higher TDP than polaris with the same CU counts

  • 5700 - 36 CU - 180w TDP - 84c with a better blower cooler
  • 480 - 36 CU - 150w TDP - 79c with a worse blower cooler

A 52-60CU Navi chip on a APU with additional ray tracing cores design to be cooled with one fan and run at 150w in a plastic box?... You guys are retarded.

  • 40 CU with 4 CU's disabled = Worst case
  • 44 CU with 4 CU's disabled = Most likely
  • 48 CU with 4 CU's disabled = best case

There is a reason why Microsoft didn't talk about teraflops during their reveal event because their chips will not be higher than XT levels, it might have the same CU count but would have to lower clocks to get those CU's which would lower the TFLOP count to 9-10.

X1X got higher CU's than a RX 480 which was low powered GPU and the X1X runs at lower clocks just to get those extra CU's which is why they are both 6TFLOP, the XT is not a low powered GPU at all its TDP is closer to a 1080 Ti than it is a 1060.

Just look at the difference 4 extra CU's does to Navi in terms of TDP 5700 vs XT:

60CUs?...

You guys are drunk.

Why so hostile? You’re probably one of those guys who was positive Navi was GCN. You probably thought a next gen Console with Vega 64 performance was impossible.

Am I wrong?

And FYI 5700 XT draws more power than 5700 because it’s clocked higher..

It is GCN; it's not a new architecture, just a heavily modified one.

As for performance, all we can do is speculate based on KNOWN information, prior entries in that architecture, and the roadmap AMD has set out.

No one knew until the last minute that AMD would modify GCN to that degree.

It draws more power because of the combination of clocks and CU count, along with voltages.

Again, to touch on the speculation part... We still have no idea how many Navi CUs the next generation of consoles will have. All we have to base any speculation on is the TDP of desktop hardware, with a 20-30% adjustment for console optimisation of the SOC's power usage... Which leaves little to no chance of next-generation consoles surpassing XT performance and CU counts, especially once you factor in hardware-accelerated ray tracing and whatever impact it has on total power usage.

Seeing as you are named after this architecture, I presume I am completely wasting my time typing this to an alt.

#61 Posted by I_P_Daily (12527 posts) -

What games do Navi & Ryzen make again?

#62 Posted by RDNAv2 (35 posts) -

@Grey_Eyed_Elf said:
@rdnav2 said:
@Grey_Eyed_Elf said:

Why are you people measuring die size as if it matters most?...

X1X managed to get higher than RX 480 CU counts because the RX 480 was a 150w TDP card and all they got was 4 CU's more so, but the 5700 XT is a 225w TDP card... Higher CU than a XT?... Yeah not happening.

Based on TDP of Navi the next generation of consoles are looking at 5700 Pro with dedicated ray tracing cores.

52-60CU's?... GTFO, those cards when released on PC will be 250-300w GPU's running at 94c. Its just not happening on a console.

Navi is hot and runs higher TDP than polaris with the same CU counts

  • 5700 - 36 CU - 180w TDP - 84c with a better blower cooler
  • 480 - 36 CU - 150w TDP - 79c with a worse blower cooler

A 52-60CU Navi chip on a APU with additional ray tracing cores design to be cooled with one fan and run at 150w in a plastic box?... You guys are retarded.

  • 40 CU with 4 CU's disabled = Worst case
  • 44 CU with 4 CU's disabled = Most likely
  • 48 CU with 4 CU's disabled = best case

There is a reason why Microsoft didn't talk about teraflops during their reveal event because their chips will not be higher than XT levels, it might have the same CU count but would have to lower clocks to get those CU's which would lower the TFLOP count to 9-10.

X1X got higher CU's than a RX 480 which was low powered GPU and the X1X runs at lower clocks just to get those extra CU's which is why they are both 6TFLOP, the XT is not a low powered GPU at all its TDP is closer to a 1080 Ti than it is a 1060.

Just look at the difference 4 extra CU's does to Navi in terms of TDP 5700 vs XT:

60CUs?...

You guys are drunk.

Why so hostile? You’re probably one of those guys who was positive Navi was GCN. You probably thought a next gen Console with Vega 64 performance was impossible.

Am I wrong?

And FYI 5700 XT draws more power than 5700 because it’s clocked higher..

It is GCN, its not a new architecture just a heavily modified one.

As for performance all we can do is speculate based on KNOWN information and prior entries into that architecture or road map that AMD had set out.

No one knew till last minute that AMD would modify GCN to that degree before hand.

It draws more power because of the combination of clocks and CU count along with voltages.

Again to touch on the speculation part... We still have no idea what the next generation of consoles will have in terms of CU from Navi all we have to base any speculation on is the TDP of desktop hardware with a 20-30% adjustment for console optimisation involved when it comes to power usage of the SOC... Which leaves us with a little to no chance of next generation consoles surpassing XT performance and CU counts especially when you count in the fact that any hardware accelerated ray tracing and its impact of total power usage it may have.

Seeing as you are named after this architecture I presume I am completely wasting my time typing this to a alt.

GCN is not RDNA.

The 5700 at 7.5 TFLOPS is outperforming the Vega 64 at 12.5 TFLOPS.

And looking at your post history, you predicted Vega 64 performance would be impossible for next-gen consoles, so that prediction will end up being 100% wrong.

#63 Posted by appariti0n (2806 posts) -

Ryzen 3000 yes, navi no.

I don't really have a problem with the image quality put out by the PS4 Pro, and I'm sure the X1X puts out even nicer image quality.

It will be nice to see such a huge upgrade from that hot-garbage CPU though!

#64 Posted by HalcyonScarlet (8454 posts) -

I'll believe these consoles will perform like this when I see it. Pretty much every gen these claims are made.

These graphics cards on their own cost as much as the consoles as a whole will cost.

#65 Edited by ronvalencia (28230 posts) -

@HalcyonScarlet said:

Will believe these consoles will perform like this when I see it. Pretty much every gen these claims are made.

These graphics cards on their own cost as much as the consoles as a whole will cost.

1. The main difference with the next game consoles is desktop-PC-class CPU IPC.

2. MS or Sony assumes the production risk, NOT AMD or the AIB board partners, which removes the profit margins at AMD's GPU chip production level and at the AIB (add-in-board) vendor level.

#67 Edited by ronvalencia (28230 posts) -

@Grey_Eyed_Elf said:
@rdnav2 said:
@Grey_Eyed_Elf said:

Why are you people measuring die size as if it matters most?...

X1X managed to get higher than RX 480 CU counts because the RX 480 was a 150w TDP card and all they got was 4 CU's more so, but the 5700 XT is a 225w TDP card... Higher CU than a XT?... Yeah not happening.

Based on TDP of Navi the next generation of consoles are looking at 5700 Pro with dedicated ray tracing cores.

52-60CU's?... GTFO, those cards when released on PC will be 250-300w GPU's running at 94c. Its just not happening on a console.

Navi is hot and runs higher TDP than polaris with the same CU counts

  • 5700 - 36 CU - 180w TDP - 84c with a better blower cooler
  • 480 - 36 CU - 150w TDP - 79c with a worse blower cooler

A 52-60CU Navi chip on a APU with additional ray tracing cores design to be cooled with one fan and run at 150w in a plastic box?... You guys are retarded.

  • 40 CU with 4 CU's disabled = Worst case
  • 44 CU with 4 CU's disabled = Most likely
  • 48 CU with 4 CU's disabled = best case

There is a reason why Microsoft didn't talk about teraflops during their reveal event because their chips will not be higher than XT levels, it might have the same CU count but would have to lower clocks to get those CU's which would lower the TFLOP count to 9-10.

X1X got higher CU's than a RX 480 which was low powered GPU and the X1X runs at lower clocks just to get those extra CU's which is why they are both 6TFLOP, the XT is not a low powered GPU at all its TDP is closer to a 1080 Ti than it is a 1060.

Just look at the difference 4 extra CU's does to Navi in terms of TDP 5700 vs XT:

60CUs?...

You guys are drunk.

Why so hostile? You’re probably one of those guys who was positive Navi was GCN. You probably thought a next gen Console with Vega 64 performance was impossible.

Am I wrong?

And FYI 5700 XT draws more power than 5700 because it’s clocked higher..

It is GCN, its not a new architecture just a heavily modified one.

As for performance all we can do is speculate based on KNOWN information and prior entries into that architecture or road map that AMD had set out.

No one knew till last minute that AMD would modify GCN to that degree before hand.

It draws more power because of the combination of clocks and CU count along with voltages.

Again to touch on the speculation part... We still have no idea what the next generation of consoles will have in terms of CU from Navi all we have to base any speculation on is the TDP of desktop hardware with a 20-30% adjustment for console optimisation involved when it comes to power usage of the SOC... Which leaves us with a little to no chance of next generation consoles surpassing XT performance and CU counts especially when you count in the fact that any hardware accelerated ray tracing and its impact of total power usage it may have.

Seeing as you are named after this architecture I presume I am completely wasting my time typing this to a alt.

RDNA is not GCN, e.g. the wavefront processing behavior is different.

1. Under GCN's wave64 compute model, the programmer and the JIT driver re-compiler (PC only) have to populate 64 shader threads per wave64 payload to maximize usage rates before dispatching the wave64 to the GPU. A fully populated wave64 is rarely achieved.

The larger the SIMD/wavefront width, the harder it is to populate with non-dependent data elements.

There's an extra problem with GCN: it needs four wave64 payloads to hide the four-cycle issue latency across each CU's quad SIMD lanes. This is problematic since that's four wave64s' worth of non-dependent data elements.

There's yet another problem with GCN: since a wave64 executes over four clock cycles, it holds/locks its registers for four clock cycles, which increases register storage pressure.

GCN is designed for extreme parallelism in the server compute workloads.

2. Under Navi's wave32 compute model, the programmer and the JIT driver re-compiler (PC only) have to populate 32 shader threads per wave32 payload to maximize usage rates before dispatching the wave32 to the GPU.

Navi's wave32 doesn't need extra wavefronts to hide its issue latency like GCN's does.

Navi's wave32 doesn't lock up its registers for more than one clock cycle like GCN's does.

The programming assumptions for Navi's wave32 are treated much like NVIDIA's CUDA (v7) 32-wide warps.

Navi also has a wave64 mode executed over two clock cycles, which is its GCN backwards-compatibility mode and is an improvement over GCN's four-cycle wave64 processing.

You're arguing without programming knowledge.

Native RDNA wave32 is not GCN from the programmer's viewpoint.
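A rough sketch of the issue-latency difference described above (a simplified counting model based on the post's per-wave numbers; real occupancy also depends on registers, LDS and memory latency, so this is illustrative only):

```python
# Simplified issue-cost model: GCN issues a wave64 over 4 cycles per SIMD,
# RDNA issues a wave32 in 1 cycle or a wave64 in 2 cycles.
def cycles_to_issue(threads, wave_width, cycles_per_wave):
    waves = -(-threads // wave_width)   # ceiling division
    return waves * cycles_per_wave

threads = 256
print("GCN  wave64:", cycles_to_issue(threads, 64, 4), "cycles")   # 16
print("RDNA wave32:", cycles_to_issue(threads, 32, 1), "cycles")   # 8
print("RDNA wave64:", cycles_to_issue(threads, 64, 2), "cycles")   # 8
```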

---------------

With extra silicon maturity time, Project Scarlet gets the benefits of 7nm+ EUV (15 percent less power consumption and 20 percent extra transistor density) and RDNA v2 design maturity.

RDNA v1 = tick

RDNA v2 = tock

Xbox Scorpio has per-chip under-volting optimization, which is missing on AMD's PC GPU SKUs and the PS4 Pro.

There's a high chance of Scarlet matching the RX 5700 XT via alternative means, just as Scorpio matched the RX 580 in basic TFLOPS goals. Scarlet already follows Scorpio in giving the GPU a wider memory bus.

Zen v2 has worse memory access latency than Zen v1.5, but Zen v2's large L3 "game cache" is able to cover for it, i.e. reduce external memory accesses. My point: Zen v2's large L3 "game cache" benefits unified-memory-bus designs like Scarlet.

Zen v2's large L3 "game cache" reduces CCX-to-CCX data copies and external memory accesses through 7nm transistor-density brute force.

A multi-chiplet design is not penalty-free, e.g. Intel's Coffee Lake still has the lowest memory latency, but AMD's Zen v2 approach is good enough to match Skylake/Kaby Lake-like gaming results.

#68 Posted by HalcyonScarlet (8454 posts) -

@ronvalencia said:
@HalcyonScarlet said:

Will believe these consoles will perform like this when I see it. Pretty much every gen these claims are made.

These graphics cards on their own cost as much as the consoles as a whole will cost.

1. The main difference with next game consoles are the desktop PC's CPU level IPC.

2. MS or Sony assumes the production risk NOT AMD/AIB board partners, hence removing profit margins at AMD's GPU chip production and AIB (add-in-board) vendor levels.

So you are telling me, Sony/MS are going to take an otherwise $400 graphics card, stuff it on an APU (I assume) and put it in a $400 console by 'removing profit margins'?

#69 Posted by GoldenElementXL (3420 posts) -

@HalcyonScarlet said:
@ronvalencia said:
@HalcyonScarlet said:

Will believe these consoles will perform like this when I see it. Pretty much every gen these claims are made.

These graphics cards on their own cost as much as the consoles as a whole will cost.

1. The main difference with next game consoles are the desktop PC's CPU level IPC.

2. MS or Sony assumes the production risk NOT AMD/AIB board partners, hence removing profit margins at AMD's GPU chip production and AIB (add-in-board) vendor levels.

So you are telling me, Sony/MS are going to take an otherwise $400 graphics card, stuff it on an APU (I assume) and put it in a $400 console by 'removing profit margins'?

Ron is the guy that claimed the Xbox One X would match and even exceed GTX 1070 performance for $399. (thread) His crystal ball shows the future in LaLa Land, not Earth.

#70 Posted by RDNAv2 (35 posts) -

@HalcyonScarlet said:
@ronvalencia said:
@HalcyonScarlet said:

Will believe these consoles will perform like this when I see it. Pretty much every gen these claims are made.

These graphics cards on their own cost as much as the consoles as a whole will cost.

1. The main difference with next game consoles are the desktop PC's CPU level IPC.

2. MS or Sony assumes the production risk NOT AMD/AIB board partners, hence removing profit margins at AMD's GPU chip production and AIB (add-in-board) vendor levels.

So you are telling me, Sony/MS are going to take an otherwise $400 graphics card, stuff it on an APU (I assume) and put it in a $400 console by 'removing profit margins'?

It can be done.

#71 Posted by Fedor (5335 posts) -

@rdnav2: No, no it can't.

#72 Edited by ronvalencia (28230 posts) -

@goldenelementxl said:
@HalcyonScarlet said:
@ronvalencia said:
@HalcyonScarlet said:

Will believe these consoles will perform like this when I see it. Pretty much every gen these claims are made.

These graphics cards on their own cost as much as the consoles as a whole will cost.

1. The main difference with next game consoles are the desktop PC's CPU level IPC.

2. MS or Sony assumes the production risk NOT AMD/AIB board partners, hence removing profit margins at AMD's GPU chip production and AIB (add-in-board) vendor levels.

So you are telling me, Sony/MS are going to take an otherwise $400 graphics card, stuff it on an APU (I assume) and put it in a $400 console by 'removing profit margins'?

Ron is the guy that claimed the Xbox One X would match and even exceed GTX 1070 performance for $399. (thread) His crystal ball shows the future in LaLa Land, not Earth.

Bullshit.

https://www.gamespot.com/forums/system-wars-314159282/the-official-xbox-one-scorpio-announcement-thread-33240401/?page=4

Scorpio's GPU solution (6 TFLOPS) is faster than the RX-480 8GB and the R9-390X. The RX-480 (5.83 TFLOPS) trades blows with the R9 Fury non-X in leaked gameplay.

Slightly reducing graphics detail yields a large boost in frame rates.

----

Games such as Gears of War 4, Forza Motorsport 7 and Far Cry 5 at 4K have shown the X1X being superior to the R9-390X and RX-580, hence the next GPUs up the ladder are the GTX 980 Ti and GTX 1070.

The X1X's Killer Instinct, Gears of War 4, Forza Motorsport 7, Resident Evil 7 and Far Cry 5 at 4K beat the GTX 1060. Don't expect the X1X to deliver comparable frame rates at 1440p when it's CPU-bound.

The X1X doesn't have automatic tiled-caching rendering like NVIDIA's smart drivers, so it comes down to individual game titles and programmers.

https://www.eurogamer.net/articles/digitalfoundry-2018-resident-evil-7-xbox-one-x-offers-a-big-leap-over-standard-console

The Resident Evil 7 X1X patch yields 4K at 60 Hz. Digital Foundry couldn't detect checkerboarding in the X1X 4K 60 Hz patch.

At Ultra settings in Resident Evil 7 on PC:

https://www.guru3d.com/articles_pages/resident_evil_7_pc_graphics_performance_benchmark_review,7.html

The R9-390X (at 5.9 TFLOPS, 44 CUs) already rivals the GTX 980 Ti and GTX 1070 (~6.5 TFLOPS)! The RE7 programmers had to avoid stepping on GCN performance landmines. Beyond the R9-390X, GCN's IPC drops.

The RX 5700 XT's wave32, which doesn't need latency-hiding tricks, lets AMD's GPU TFLOPS be exploited better.

R9-390X: 44 CUs, 1 MB L2 cache, and 64 ROPs with a few KB of render cache, without DCC.

X1X: 40 CUs, 2 MB L2 cache, 32 ROPs with DCC, and a 2 MB render cache. The X1X also has a Variable Rate Shading-like feature.

https://www.eurogamer.net/articles/digitalfoundry-2017-the-scorpio-engine-in-depth

Andrew Goossen tells us that the GPU supports extensions that allow depth and ID buffers to be efficiently rendered at full native resolution, while colour buffers can be rendered at half resolution with full pixel shader efficiency. Based on conversations last year with Mark Cerny, there is some commonality in approach here with some of the aspects of PlayStation 4 Pro's design, but we can expect some variation in customisations - despite both working with AMD, we're reliably informed that neither Sony or Microsoft are at all aware of each other's designs before they are publicly unveiled.

Depth buffer deals with geometry render buffer

https://developer.nvidia.com/vrworks/graphics/variablerateshading

The X1X GPU has a Variable Rate Shading feature that doesn't exist on PC GCN parts or NVIDIA Pascal!

Variable Rate Shading delivers native-resolution geometry edges while conserving ROP and shader compute resources. Variable Rate Shading is better than Sony's hardware checkerboard resolve on the PS4 Pro.

Games such as RDR2 show a large resolution jump between the X1X and PS4 Pro, while PS4-to-PS4 Pro resolution scales with the TFLOPS increase. Higher memory bandwidth and variable rate shading tricks can contribute to the resolution jump.

Variable Rate Shading tricks can defeat Digital Foundry's diagonal-line pixel-count method, since geometry edges remain rendered at native resolution.

The RX 5700 XT is like Hawaii's IPC scaled up to 9.75 TFLOPS, i.e.

(9.75 TFLOPS / 5.9 TFLOPS) × the R9-390X's 35 fps = ~57.8 fps, which is close to the Titan X Pascal's 58 fps.

The Titan X Pascal is slightly below the GTX 1080 Ti.
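A minimal sketch of that linear scaling estimate (it uses the poster's RE7 Ultra figures and assumes frame rate scales linearly with compute throughput, which is only a rough approximation):

```python
# Linear TFLOPS scaling from the R9-390X baseline quoted above.
r9_390x_fps, r9_390x_tflops = 35.0, 5.9
rx_5700xt_tflops = 9.75

estimated_fps = r9_390x_fps * rx_5700xt_tflops / r9_390x_tflops
print(round(estimated_fps, 1))   # ~57.8 fps, vs the quoted 58 fps for Titan X Pascal
```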

#73 Edited by ronvalencia (28230 posts) -

@rdnav2 said:
@HalcyonScarlet said:
@ronvalencia said:
@HalcyonScarlet said:

Will believe these consoles will perform like this when I see it. Pretty much every gen these claims are made.

These graphics cards on their own cost as much as the consoles as a whole will cost.

1. The main difference with next game consoles are the desktop PC's CPU level IPC.

2. MS or Sony assumes the production risk NOT AMD/AIB board partners, hence removing profit margins at AMD's GPU chip production and AIB (add-in-board) vendor levels.

So you are telling me, Sony/MS are going to take an otherwise $400 graphics card, stuff it on an APU (I assume) and put it in a $400 console by 'removing profit margins'?

it Can be done

It can be done IF MS jumps beyond four primitive (triangle) units, e.g. six primitive units with a 384-bit bus.

If Scarlet stays with Navi 10's four primitive units while increasing the CU count, it would suffer the same raster bottlenecks as Vega 64/Vega II.

There's a reason why NVIDIA scales to six GPCs at around 10 to 17 TFLOPS, starting with the RTX 2070 Super, which is the cheapest six-GPC Turing.

The RTX 2070 Super's 2560 CUDA FP cores equate to a 40 CU Navi configuration.

#74 Edited by ronvalencia (28230 posts) -

@HalcyonScarlet said:
@ronvalencia said:
@HalcyonScarlet said:

Will believe these consoles will perform like this when I see it. Pretty much every gen these claims are made.

These graphics cards on their own cost as much as the consoles as a whole will cost.

1. The main difference with next game consoles are the desktop PC's CPU level IPC.

2. MS or Sony assumes the production risk NOT AMD/AIB board partners, hence removing profit margins at AMD's GPU chip production and AIB (add-in-board) vendor levels.

So you are telling me, Sony/MS are going to take an otherwise $400 graphics card, stuff it on an APU (I assume) and put it in a $400 console by 'removing profit margins'?

AMD-based game consoles' hardware margins are around break-even, while NVIDIA rakes in large profit margins. Game consoles have a different business model.

For PC add-in-board (AIB) graphics products, there's a profit stage at NVIDIA's silicon production and another at AIB board production.

For example:

NVIDIA and its AIB partners charge for the GTX 1060, with a 200 mm² GPU and a 192-bit GDDR5-8000 bus, about the same as AMD charges for the RX-580, with a 232 mm² GPU and a 256-bit GDDR5-8000 bus. NVIDIA and its AIB partners are laughing all the way to the bank!

There's a reason for NVIDIA's superior profitability compared to AMD's RTG; AMD was close to bankruptcy, e.g. its credit was rated junk prior to Zen's release!

For game consoles, MS/Sony assume the silicon production risk and the PCB production. The game console tax subsidizes the console hardware, i.e. there's a cost-reduction advantage from vertical integration.

#75 Edited by GoldenElementXL (3420 posts) -

@ronvalencia said:
@goldenelementxl said:
@HalcyonScarlet said:
@ronvalencia said:

1. The main difference with next game consoles are the desktop PC's CPU level IPC.

2. MS or Sony assumes the production risk NOT AMD/AIB board partners, hence removing profit margins at AMD's GPU chip production and AIB (add-in-board) vendor levels.

So you are telling me, Sony/MS are going to take an otherwise $400 graphics card, stuff it on an APU (I assume) and put it in a $400 console by 'removing profit margins'?

Ron is the guy that claimed the Xbox One X would match and even exceed GTX 1070 performance for $399. (thread) His crystal ball shows the future in LaLa Land, not Earth.

Bullshit.

#76 Posted by HalcyonScarlet (8454 posts) -

@ronvalencia said:
@HalcyonScarlet said:
@ronvalencia said:
@HalcyonScarlet said:

Will believe these consoles will perform like this when I see it. Pretty much every gen these claims are made.

These graphics cards on their own cost as much as the consoles as a whole will cost.

1. The main difference with next game consoles are the desktop PC's CPU level IPC.

2. MS or Sony assumes the production risk NOT AMD/AIB board partners, hence removing profit margins at AMD's GPU chip production and AIB (add-in-board) vendor levels.

So you are telling me, Sony/MS are going to take an otherwise $400 graphics card, stuff it on an APU (I assume) and put it in a $400 console by 'removing profit margins'?

AMD based game console's profit margins are around break-even range while NVIDIA rakes in large profit margins. Game consoles has different business models.

For PC add-In-board (AIB) graphics products, there's a profit stage at NVIDIA's silicon production and at AIB board production.

For example

NVIDIA and it's AIB partners charges GTX 1060 with 200 mm2 GPU and 192 bus GDDR5-8000 similar to AMD's RX-580's 232 mm2 GPU with 256 bit bus GDDR5-8000. NVIDIA and it's AIB partners are laughing all the way to the bank!

There's a reason for NVIDIA's superior profitability when compared to AMD's RTG with AMD was close to being bankrupt e.g. shares was classified as junk prior to Zen's release!

For game consoles, MS/Sony assumes the silicon production risk and PCB board production. Game console tax subsidize the game console hardware i.e. there's cost reduction advantage with vertical integration.

The GPU cannot take anywhere near three quarters of the entire machine's cost. As it is, consoles other than the Xbox One X are usually built like junk, with corner-cutting everywhere.

The only reason the Xbox One X is higher quality is because they can afford to sell that at a much higher price, because it's a luxury version of the console, not the base unit.

The next Xbox console as the base unit has to be EXTREMELY cost competitive out of the gate to get that early market share. They need to be able to go as low as $349 as soon as they can.

Do you think MS would risk their market competitiveness so you can get funny feelings in your pants about the graphics?

#77 Edited by ronvalencia (28230 posts) -

@HalcyonScarlet said:
@ronvalencia said:

AMD based game console's profit margins are around break-even range while NVIDIA rakes in large profit margins. Game consoles has different business models.

For PC add-In-board (AIB) graphics products, there's a profit stage at NVIDIA's silicon production and at AIB board production.

For example

NVIDIA and it's AIB partners charges GTX 1060 with 200 mm2 GPU and 192 bus GDDR5-8000 similar to AMD's RX-580's 232 mm2 GPU with 256 bit bus GDDR5-8000. NVIDIA and it's AIB partners are laughing all the way to the bank!

There's a reason for NVIDIA's superior profitability when compared to AMD's RTG with AMD was close to being bankrupt e.g. shares was classified as junk prior to Zen's release!

For game consoles, MS/Sony assumes the silicon production risk and PCB board production. Game console tax subsidize the game console hardware i.e. there's cost reduction advantage with vertical integration.

The GPU can not take even near three quarters of the entire machines cost. As it is, consoles other than the Xbox One X, are usually built like junk with corner cutting everywhere.

The only reason the Xbox One X is higher quality is because they can afford to sell that at a much higher price, because it's a luxury version of the console, not the base unit.

The next Xbox console as the base unit has to be EXTREMELY cost competitive out of the gate to get that early market share. They need to be able to go as low as $349 as soon as they can.

Do you think MS would risk their market competitiveness so you can get funny feelings in your pants about the graphics?

From MS's E3 2019 reveal:

1. Scarlet's APU GDDR memory bus exceeds the PS4/PS4 Pro's 256-bit PCB trace layout; it is at least a 320-bit bus.

2. Scarlet's APU die area rivals or exceeds Scorpio's APU die area.

3. Scorpio and the XBO have similar APU die sizes, 359 mm² and 363 mm² respectively. The PS4 has a 348 mm² APU.

4. TSMC's 1st-gen 7nm 8-core Zen v2 chiplet plus Navi 10 XT have a combined area of 321 mm². Next year's 2nd-gen 7nm+ EUV has 20 percent better transistor density than 1st-gen 7nm. Microsoft has confirmed hardware-accelerated ray tracing, which places Scarlet in the "RDNA 2" era of improvements.

5. Scorpio has an automatic per-APU under-volting profile based on silicon quality, which is tighter than AMD's PC Navi voltage curve.

Conclusion: Microsoft is following Scorpio's design approach with Scarlet.

In terms of PCB design, Scarlet is not the PS4/PS4 Pro's 256-bit GDDR bus PCB approach with updated 7nm-era parts dropped in.

7nm to 7nm+ is nearly a half-node jump, thanks to 20 percent better transistor density and 15 percent lower power consumption.
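A rough sketch of what that density figure implies for die area (this assumes the whole die shrinks with logic density, which overstates the gain since SRAM and analog blocks scale worse):

```python
# Hypothetical area for the same logic moved from 7nm to 7nm+ (poster's figures).
combined_7nm_area = 321.0   # mm^2: Zen 2 chiplet + Navi 10 XT, per the post
density_gain      = 1.20    # claimed 7nm -> 7nm+ transistor density improvement

print(round(combined_7nm_area / density_gain, 1))   # ~267.5 mm^2 on 7nm+
```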

@HalcyonScarlet said:

Do you think MS would risk their market competitiveness so you can get funny feelings in your pants about the graphics?

You're forgetting MS's E3 2019 reveal.

Scarlet is not a PS4/PS4 Pro-style 256-bit GDDR PCB with updated 7nm drop-in replacement parts.

Scarlet is like Scorpio with updated 7nm-era parts and GDDR6-14000.

Try again.

#78 Edited by ronvalencia (28230 posts) -

@goldenelementxl said:
@ronvalencia said:
@goldenelementxl said:
@HalcyonScarlet said:

So you are telling me, Sony/MS are going to take an otherwise $400 graphics card, stuff it on an APU (I assume) and put it in a $400 console by 'removing profit margins'?

Ron is the guy that claimed the Xbox One X would match and even exceed GTX 1070 performance for $399. (thread) His crystal ball shows the future in LaLa Land, not Earth.

Bullshit.

Where's "GTX 1070" in your posted screenshots?

Notice "Small Vega that replaces the smaller RX-480 Not large "Vega 10" with HBM 2

X1X's GPU 281 mm2 is slightly larger than RX-480's 232 mm2 smaller than Vega 10(code name for Vega 56/64)

My laptop has RX Vega 8 (model number not code name) mobile. Intel's "Vega 24" doesn't have RPM feature.

  • Vega doesn't just equal RPM. Vega has other improvements with ROPS and perf/watt.

https://gpucuriosity.wordpress.com/2017/09/10/xbox-one-xs-render-backend-2mb-render-cache-size-advantage-over-the-older-gcns/

In terms of silicon maturity, X1X is based on Vega.

Vega 64 LC has 295 watts for 13 TFLOPS boost clock or Vega 64 LC's 11.5 TFLOPS base clock

X1X GPU has ~150 watts for 6 TFLOPS base clock

Vega 64's TFLOPS vs TDP scales from X1X GPU's logic design maturity!

RX-580's 6 TFLOPS has 185 watt. Polaris 20 has inferior TFLOPS per watts when compared to X1X GPU and Vega 64

Polaris 10/20 can't scale to Vega 64's TFLOPS vs TDP ratios.

  • My Vega context for X1X is for Vega's TDP vs TFLOPS scaling.

  • X1X GPU has design concepts from Vega e.g. ROPS with multi-MB high speed cache. X1X GPU's 2MB L2 cache + 2MB render cache = Vega 56/64's 4 MB L2 cache.

PS4 Pro GPU's 2.3X gain over 28 nm GCN is based on 1st gen Polaris electron leakage mitigation maturity.

--------

Polaris 30 RX-590's (225 watts TDP / 36 CU) scaled to 64 CU = 400 watts. Meanwhile Vega 64 has 295 watts TDP.

RX-590's 1545 Mhz has similar boost clock speed to Vega 64's 1536 Mhz.

AMD's perf/watts road map is real.

https://www.tomshardware.com/reviews/amd-radeon-rx-580-review,5020-6.html

RX-580's actual typical gaming power consumption is 209 watts, hence it overshot AMD's 185 watts TDP target.

The X1X's typical gaming power consumption for the entire machine is about 172 watts (https://www.anandtech.com/show/11992/the-xbox-one-x-review/6), hence the GPU is somewhere around 145 watts, with the CPU at ~20 watts, the 2.5-inch HDD at ~2.5 watts, and the cut-down south bridge at ~2.5 watts.
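A minimal sketch of that power-budget subtraction (the poster's estimates; wall-socket measurements also include PSU losses, fans and memory, so the GPU number is approximate):

```python
# Whole-console draw minus the poster's estimates for the other components.
whole_console = 172.0   # measured gaming draw at the wall (W)
cpu           = 20.0
hdd           = 2.5
south_bridge  = 2.5

gpu_estimate = whole_console - cpu - hdd - south_bridge
print(gpu_estimate)     # 147.0 -> roughly the ~145-155 W GPU figure used in the thread
```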

https://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-9.html

The RX 480's gaming power consumption is 164 watts, which overshot AMD's 150-watt TDP target. The PS4 Pro GPU was reduced to 911 MHz to leave budget for the CPU, 2.5-inch HDD, optical drive and controller.

  • AMD's PC Polaris GPUs don't have the X1X's auto-under-volting VRM design
  • The RX-580 doesn't have Vega-like silicon maturity
  • The X1X GPU's perf/watt is NOT PC Polaris's. AMD/MS didn't sit back doing nothing between the RX-480's June 2016 release and the X1X's first engineering silicon in December 2016.

The X1X's perf/watt gain over the PS4 is about 3.2, which is about Vega's perf/watt gain.

The PS4 Pro's perf/watt gain over the PS4 is about 2.3, which is about 1st-gen Polaris's perf/watt gain.

  • X1X has "Variable Rate Shading" like feature which doesn't exist with PC's Vega and RDNA v1 SKUs

From https://www.eurogamer.net/articles/digitalfoundry-2017-the-scorpio-engine-in-depth

Andrew Goossen tells us that the GPU supports extensions that allow depth and ID buffers to be efficiently rendered at full native resolution, while colour buffers can be rendered at half resolution with full pixel shader efficiency. Based on conversations last year with Mark Cerny, there is some commonality in approach here with some of the aspects of PlayStation 4 Pro's design, but we can expect some variation in customisations - despite both working with AMD, we're reliably informed that neither Sony or Microsoft are at all aware of each other's designs before they are publicly unveiled.

Depth buffer deals with geometry render buffer

https://developer.nvidia.com/vrworks/graphics/variablerateshading

AMD owns the patent for variable rate shading... with X1X, before Turing.

AMD's PC GPUs have their own engineering team, which is ~1/3 of RTG.

Try again.

#79 Edited by GoldenElementXL (3420 posts) -

@ronvalencia: It's not in that response, but you infamously made the 1070 and 1080 claims on this board MULTIPLE times. I was just posting your price claims

Pro tip: Use fewer words. You talk yourself in circles in your posts and it's beyond funny at this point

#80 Edited by Pedro (35613 posts) -

@goldenelementxl said:

@ronvalencia: It's not in that response, but you infamously made the 1070 and 1080 claims on this board MULTIPLE times. I was just posting your price claims

Pro tip: Use fewer words. You talk yourself in circles in your posts and it's beyond funny at this point

X1X's GPU 281 mm2 is slightly larger than RX-480's 232 mm2 smaller than Vega 10(code name for Vega 56/64)

My laptop has RX Vega 8 (model number not code name) mobile. Intel's "Vega 24" doesn't have RPM feature.

  • Vega doesn't just equal RPM. Vega has other improvements with ROPS and perf/watt.

https://gpucuriosity.wordpress.com/2017/09/10/xbox-one-xs-render-backend-2mb-render-cache-size-advantage-over-the-older-gcns/

In terms of silicon maturity, X1X is based on Vega.

Vega 64 LC has 295 watts for 13 TFLOPS boost clock or Vega 64 LC's 11.5 TFLOPS base clock

X1X GPU has ~150 watts for 6 TFLOPS base clock

Vega 64's TFLOPS vs TDP scales from X1X GPU's logic design maturity!

RX-580's 6 TFLOPS has 185 watt. Polaris 20 has inferior TFLOPS per watts when compared to X1X GPU and Vega 64

Polaris 10/20 can't scale to Vega 64's TFLOPS vs TDP ratios.

  • My Vega context for X1X is for Vega's TDP vs TFLOPS scaling.

  • X1X GPU has design concepts from Vega e.g. ROPS with multi-MB high speed cache. X1X GPU's 2MB L2 cache + 2MB render cache = Vega 56/64's 4 MB L2 cache.

PS4 Pro GPU's 2.3X gain over 28 nm GCN is based on 1st gen Polaris electron leakage mitigation maturity.

--------

Polaris 30 RX-590's (225 watts TDP / 36 CU) scaled to 64 CU = 400 watts. Meanwhile Vega 64 has 295 watts TDP.

RX-590's 1545 Mhz has similar boost clock speed to Vega 64's 1536 Mhz.

AMD's perf/watts road map is real.

https://www.tomshardware.com/reviews/amd-radeon-rx-580-review,5020-6.html

RX-580's actual typical gaming power consumption is 209 watts, hence it overshot AMD's 185 watts TDP target.

X1X typical gaming power consumption for the entire machine is about 172 watts. https://www.anandtech.com/show/11992/the-xbox-one-x-review/6 hence the GPU is somewhere 145 watts where CPU has 20 watts + 2.5 inch HDD has 2.5 watts + cut-down South Bridge has ~2.5 watts.

https://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-9.html

RX 480's gaming has 164 watts power consumption which overshot AMD's 150 watts TDP target. PS4 Pro GPU was reduce to 911 Mhz to budget for CPU+2.5 HDD+Optical drive+controller.

  • AMD's PC Polaris GPUs doesn't have X1X's auto-under voltage VRM design
  • RX-580 doesn't have silicon maturity like Vega
  • X1X GPU's perf/watt is NOT PC's Polaris.AMD/MS didn't sit back doing nothing between RX-480's June 20016 release to X1X's engineering release 1st silicon December 2016.

X1X's perf/watt against PS4 is about 3.2 which is about Vega's perf/watt

PS4 Pro's perf/watt against PS4 is about 2.3 which is about 1st gen Polaris's perf/watt

I am a terrible person. :(

#81 Posted by GoldenElementXL (3420 posts) -

@Pedro:

#82 Posted by Kali-B1rd (2242 posts) -
@ezekiel43 said:

Not really, it doesn't get my hopes up. Amazing graphics aren't gonna make a boring game fun. The majority of the graphics-intensive exclusives are still boring.

Ding Ding Ding, we have a winner.

#83 Posted by WESTBLADE (539 posts) -
@Juub1990 said:
@sakaixx said:

The Ryzen 3rd gen though, really made the intel lineup obsolete.

Intel still performs better in games so I wouldn't say that.

That and among other things (14nm vs 7nm, anyone?) was released almost 1 year ago...

#84 Posted by Howmakewood (5960 posts) -

@Pedro: hard to argue with the facts!

#85 Posted by HalcyonScarlet (8454 posts) -

@ronvalencia: Don't need to "try again", as I said, I'll believe it when I see it. Remember, I'm not the one making claims.

#86 Edited by ronvalencia (28230 posts) -

@goldenelementxl said:

@ronvalencia: It's not in that response,

That's goal post shifting and you didn't read properly. You lied.

@goldenelementxl said:

but you infamously made the 1070 and 1080 claims on this board MULTIPLE times. I was just posting your price claims

Pro tip: Use fewer words. You talk yourself in circles in your posts and it's beyond funny at this point

Repost my claim.

1. Forza Motorsport 7's X1X wet-track results rivaled the GTX 1070 and were faster than the R9-390X's results.

Deal with it.

2. You missed the IF condition in the statement. Your argument is fake news.

#87 Posted by ronvalencia (28230 posts) -

@HalcyonScarlet said:

@ronvalencia: Don't need to "try again", as I said, I'll believe it when I see it. Remember, I'm not the one making claims.

You argued

The GPU can not take even near three quarters of the entire machines cost. As it is, consoles other than the Xbox One X, are usually built like junk with corner cutting everywhere.

The only reason the Xbox One X is higher quality is because they can afford to sell that at a much higher price, because it's a luxury version of the console, not the base unit.

The next Xbox console as the base unit has to be EXTREMELY cost competitive out of the gate to get that early market share. They need to be able to go as low as $349 as soon as they can.

Do you think MS would risk their market competitiveness so you can get funny feelings in your pants about the graphics?

MS's E3 2019 reveal showed physical attributes for Scarlet being similar to Scorpio, but with updated 7nm-era CPU/GPU and GDDR6 parts.

Scarlet is not a PS4/PS4 Pro-class PCB, which is built like a mainstream gamer's 256-bit bus video card PCB.

Avatar image for HalcyonScarlet
#88 Edited by HalcyonScarlet (8454 posts) -

@ronvalencia said:
@HalcyonScarlet said:

@ronvalencia: Don't need to "try again", as I said, I'll believe it when I see it. Remember, I'm not the one making claims.

You argued

The GPU cannot take even near three-quarters of the entire machine's cost. As it is, consoles other than the Xbox One X are usually built like junk, with corner cutting everywhere.

The only reason the Xbox One X is higher quality is because they can afford to sell that at a much higher price, because it's a luxury version of the console, not the base unit.

The next Xbox console as the base unit has to be EXTREMELY cost competitive out of the gate to get that early market share. They need to be able to go as low as $349 as soon as they can.

Do you think MS would risk their market competitiveness so you can get funny feelings in your pants about the graphics?

MS's E3 2019 reveal showed physical attributes for Scarlet being similar to Scorpio, but with updated 7nm-era CPU/GPU and GDDR6 parts.

Scarlet is not a PS4/PS4 Pro-class PCB, which is built like a mainstream gamer's 256-bit bus video card PCB.

I'm arguing that the RX 5700 XT in particular is unrealistic, and you argued they would fit a '$400 GPU in a $400 console' and I said 'I'd believe it when I see it'.

You have a better chance it'll be the RX 5700 at best, not XT. The power consumption is also closer to what they'd want on a console at 180 watts. But even that's expensive.

"Performance-wise, AMD says that the RX 5700 delivers an average 10 per cent lead over the RTX 2060, while the RX 5700 XT normalises at around a six per cent advantage over the RTX 2070."

https://www.eurogamer.net/articles/digitalfoundry-2019-amd-rx-5700-5700-xt-full-specs-revealed

It sounds more realistic. If they really wanted to push performance but still keep costs down, which is what they need to do on a base console (the first console of a generation), this may be how.

The idea that they would put a pricey GPU in a cost effective console, that is slightly more powerful than an RTX 2070 (RX 5700 XT) is not realistic at all.

It might even be a modified but limited version of the RX 5700.

Avatar image for ajstyles
#89 Edited by AJStyles (1075 posts) -

LOL at hermits downplaying next gen consoles when it is proven that the majority of hermits use low tier PC’s. Steam confirmed most of you guys use old PC’s and play at medium settings/1080p at best.

Yet you bash the PS5 when it will be more powerful than what most hermits have. And since it's a closed system, its resources aren't being drained like a PC's.

PS5 is going to be a huge upgrade over the PS4 and hermits are ignorant for downplaying it.

Avatar image for ronvalencia
#90 Edited by ronvalencia (28230 posts) -

@HalcyonScarlet said:
@ronvalencia said:

You argued

The GPU cannot take even near three-quarters of the entire machine's cost. As it is, consoles other than the Xbox One X are usually built like junk, with corner cutting everywhere.

The only reason the Xbox One X is higher quality is because they can afford to sell that at a much higher price, because it's a luxury version of the console, not the base unit.

The next Xbox console as the base unit has to be EXTREMELY cost competitive out of the gate to get that early market share. They need to be able to go as low as $349 as soon as they can.

Do you think MS would risk their market competitiveness so you can get funny feelings in your pants about the graphics?

MS's E3 2019 reveal showed physical attributes for Scarlet being similar to Scorpio, but with updated 7nm-era CPU/GPU and GDDR6 parts.

Scarlet is not a PS4/PS4 Pro-class PCB, which is built like a mainstream gamer's 256-bit bus video card PCB.

I'm arguing that the RX 5700 XT in particular is unrealistic, and you argued they would fit a '$400 GPU in a $400 console' and I said 'I'd believe it when I see it'.

You have a better chance it'll be the RX 5700 at best, not XT. The power consumption is also closer to what they'd want on a console at 180 watts. But even that's expensive.

"Performance-wise, AMD says that the RX 5700 delivers an average 10 per cent lead over the RTX 2060, while the RX 5700 XT normalises at around a six per cent advantage over the RTX 2070."

https://www.eurogamer.net/articles/digitalfoundry-2019-amd-rx-5700-5700-xt-full-specs-revealed

It sounds more realistic. If they really wanted to push performance but still keep costs down, which is what they need to do on a base console (the first console of a generation), this may be how.

The idea that they would put a pricey GPU in a cost effective console, that is slightly more powerful than an RTX 2070 (RX 5700 XT) is not realistic at all.

It might even be a modified but limited version of the RX 5700.

Whose $400 GPU?

  • I argued that game console hardware has tighter profit margins and less profit-margin overhead than NVIDIA's PC GPU SKUs.
  • I argued that the PC RX-5700 XT carries AMD's silicon production profit margin AND the add-in-board vendor's PCB production profit margin, while Microsoft assumes both silicon and PCB production risk, hence MS's vertical integration yields cost savings.
  • Microsoft's reveal shows Scarlet's PCB being at Scorpio-level PCB memory bus width.
  • Microsoft's reveal shows Scarlet APU's chip area rivaling and exceeding the Scorpio APU's chip area.

If next-gen consoles recycle the RX-5700 XT's 4 prim units**, the overall geometry-rasterization GPU framework wouldn't be at the level of Turing with six GPC units.

**NAVI 10 has 8 triangle inputs with 4 triangle outputs, i.e. working back-face culling hardware. Polaris/Vega 64/Vega II back-face culling hardware is broken: 4 triangle inputs with 4 triangle outputs yields only 2 triangle outputs, which needs a compute shader workaround.

GCN's IPC drops after the R9-390X since the following Fury Pro/X/Vega 56/64/II GPUs didn't scale Hawaii XT's geometry-rasterization framework with the compute power increase, i.e. AMD can't just keep adding CUs.
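
A rough sketch of that scaling argument (illustrative clocks, and it assumes one triangle per clock per front-end unit; real culling/tessellation rates differ):

```python
# Compute throughput tracks CU count x clock, while geometry throughput
# tracks shader-engine/prim-unit count x clock. Adding CUs without widening
# the front end just raises the FLOPS available per triangle.
def fp32_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000   # 64 lanes/CU, 2 ops per FMA

def front_end_gtris(units: int, clock_ghz: float) -> float:
    return units * clock_ghz                 # assumed 1 triangle/clock/unit

for name, cus, fe_units, clk in [("Hawaii (R9-390X)",  44, 4, 1.05),
                                 ("Fiji (Fury X)",     64, 4, 1.05),
                                 ("Navi 10 (5700 XT)", 40, 4, 1.80)]:
    print(f"{name:18s} {fp32_tflops(cus, clk):5.1f} TFLOPS, "
          f"{front_end_gtris(fe_units, clk):4.1f} Gtri/s front end")
```

Fiji piles on compute over the same four-wide front end, while Navi 10 mostly buys its front-end throughput back with clock speed.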

RX-5700 XT has Vega II's rasterization power with improved back-face culling and double-width texture filtering hardware. The relationship between RX-5700 XT and Vega II reminds me of R9-390X OC vs Fury X.

This is GPU hardware basics 101. GPUs should NOT be DSPs.

Don't worry about next gen consoles when GPU fundamentals hardware design issues are not addressed.

NAVI 10 has four prim units while TU106 has four GPC units.

NAVI 10 has 64 ROPS units while TU106 has 64 ROPS units.

NAVI 10 has 4MB L2 cache while TU106 has 4MB L2 cache.

NAVI 10 has GDDR6-14000 256 bit bus while TU106 has GDDR6-14000 256 bit bus.

NAVI 10's fundamental GPU hardware design is like TU106 class.
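
For reference, the shared memory setup works out to identical peak bandwidth on both chips:

```python
# GDDR6-14000 on a 256-bit bus (both NAVI 10 and TU106).
data_rate_gbps = 14          # Gbit/s per pin
bus_width_bits = 256
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"Peak bandwidth: {bandwidth_gb_s:.0f} GB/s")  # -> 448 GB/s
```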

TU104 has six GPC units scaling with compute TFLOPS increase, hence NVIDIA knows GPU fundamentals hardware design basics 101.

RX 5700 XT seems to be a ~1.8 GHz overclocked R9-390X with Polaris/Vega/NAVI improvements and design corrections.

Avatar image for Gatygun
#91 Edited by Gatygun (1591 posts) -
@goldenelementxl said:
@HalcyonScarlet said:
@ronvalencia said:
@HalcyonScarlet said:

Will believe these consoles will perform like this when I see it. Pretty much every gen these claims are made.

These graphics cards on their own cost as much as the consoles as a whole will cost.

1. The main difference with the next game consoles is the desktop-PC-level CPU IPC.

2. MS or Sony assumes the production risk, NOT AMD/AIB board partners, hence removing the profit margins at the AMD GPU chip production and AIB (add-in-board) vendor levels.

So you are telling me, Sony/MS are going to take an otherwise $400 graphics card, stuff it on an APU (I assume) and put it in a $400 console by 'removing profit margins'?

Ron is the guy that claimed the Xbox One X would match and even exceed GTX 1070 performance for $399. (thread) His crystal ball shows the future in LaLa Land, not Earth.

Dude started out claiming it was going to perform like a 1080. He's completely delusional when it comes to tech. All he can push out is random charts and numbers that he tries to spin into something that fits his bill. Yet it never happens.

The Xbox One X isn't even coming close to 1080 performance no matter how hard he tried to spin it, so he moved down to the 1070 in order to make a point. The box can't even run PUBG at any decent framerate lol.

Then he thinks the next box is going to push 1080 Ti performance, which sits higher than a 5700 XT, a dedicated card that already overheats with loud fan noise and is already far bigger than what consoles feature. Yeah, sounds realistic man.

I bet he really likes to pinpoint that one game out of the 50, like he did with Forza endlessly before, as if it proves some kind of point.

The current rumors are that the PS5 sits at a 20k Firestrike score, from what got leaked, with an underclocked 5700 XT and an underclocked 3700 CPU, if you replicate it on PC with that tech. That equals a GTX 1080, and with an underclocked CPU and a 4K focus, yeah, good luck with that.

And that's if they go all out on the power supply in that box, which is highly doubtful, and clock every chip to the highest standards, which is not practical. Lots of those chips will be duds and clocks will move down because of it. It's not like the Ryzen 3000 series has lots of overhead.

And with the 30% hit from Trump's tariffs for the US, either that thing gets more expensive or they're going to cut corners, which is also a high possibility.

In short, to give you an idea about PC tech in general: the 1080 right now is seen by Nvidia as a low-end GPU; it equals a 2060, which is already getting phased out as the lowest card. Next generation, the 1080 Ti / 2070 Super / 2080 are going to be low-end-ish.

These boxes are what they are, and that's pretty much it. It's an improvement for console users, but frankly PC tech has advanced so far forward that it's hard for them to get anywhere near it at this point in time.

Avatar image for HalcyonScarlet
#92 Edited by HalcyonScarlet (8454 posts) -

@ronvalencia said:
@HalcyonScarlet said:
@ronvalencia said:

You argued

The GPU cannot take even near three-quarters of the entire machine's cost. As it is, consoles other than the Xbox One X are usually built like junk, with corner cutting everywhere.

The only reason the Xbox One X is higher quality is because they can afford to sell that at a much higher price, because it's a luxury version of the console, not the base unit.

The next Xbox console as the base unit has to be EXTREMELY cost competitive out of the gate to get that early market share. They need to be able to go as low as $349 as soon as they can.

Do you think MS would risk their market competitiveness so you can get funny feelings in your pants about the graphics?

MS's E3 2019 reveal showed physical attributes for Scarlet being similar to Scorpio, but with updated 7nm-era CPU/GPU and GDDR6 parts.

Scarlet is not a PS4/PS4 Pro-class PCB, which is built like a mainstream gamer's 256-bit bus video card PCB.

I'm arguing that the RX 5700 XT in particular is unrealistic, and you argued they would fit a '$400 GPU in a $400 console' and I said 'I'd believe it when I see it'.

You have a better chance it'll be the RX 5700 at best, not XT. The power consumption is also closer to what they'd want on a console at 180 watts. But even that's expensive.

"Performance-wise, AMD says that the RX 5700 delivers an average 10 per cent lead over the RTX 2060, while the RX 5700 XT normalises at around a six per cent advantage over the RTX 2070."

https://www.eurogamer.net/articles/digitalfoundry-2019-amd-rx-5700-5700-xt-full-specs-revealed

It sounds more realistic. If they really wanted to push performance but still keep costs down, which is what they need to do on a base console (the first console of a generation), this may be how.

The idea that they would put a pricey GPU in a cost effective console, that is slightly more powerful than an RTX 2070 (RX 5700 XT) is not realistic at all.

It might even be a modified but limited version of the RX 5700.

Whose $400 GPU?

  • I argued that game console hardware has tighter profit margins and less profit-margin overhead than NVIDIA's PC GPU SKUs.
  • I argued that the PC RX-5700 XT carries AMD's silicon production profit margin AND the add-in-board vendor's PCB production profit margin, while Microsoft assumes both silicon and PCB production risk, hence MS's vertical integration yields cost savings.
  • Microsoft's reveal shows Scarlet's PCB being at Scorpio-level PCB memory bus width.
  • Microsoft's reveal shows Scarlet APU's chip area rivaling and exceeding the Scorpio APU's chip area.

If next-gen consoles recycle the RX-5700 XT's 4 prim units**, the overall geometry-rasterization GPU framework wouldn't be at the level of Turing with six GPC units.

**NAVI 10 has 8 triangle inputs with 4 triangle outputs, i.e. working back-face culling hardware. Polaris/Vega 64/Vega II back-face culling hardware is broken: 4 triangle inputs with 4 triangle outputs yields only 2 triangle outputs, which needs a compute shader workaround.

GCN's IPC drops after the R9-390X since the following Fury Pro/X/Vega 56/64/II GPUs didn't scale Hawaii XT's geometry-rasterization framework with the compute power increase, i.e. AMD can't just keep adding CUs.

RX-5700 XT has Vega II's rasterization power with improved back-face culling and double-width texture filtering hardware. The relationship between RX-5700 XT and Vega II reminds me of R9-390X OC vs Fury X.

This is GPU hardware basics 101. GPUs should NOT be DSPs.

Don't worry about next gen consoles when GPU fundamentals hardware design issues are not addressed.

NAVI 10 has four prim units while TU106 has four GPC units.

NAVI 10 has 64 ROPS units while TU106 has 64 ROPS units.

NAVI 10 has 4MB L2 cache while TU106 has 4MB L2 cache.

NAVI 10 has GDDR6-14000 256 bit bus while TU106 has GDDR6-14000 256 bit bus.

NAVI 10's fundamental GPU hardware design is like TU106 class.

TU104 has six GPC units scaling with compute TFLOPS increase, hence NVIDIA knows GPU fundamentals hardware design basics 101.

RX 5700 XT seems to be a ~1.8 GHz overclocked R9-390X with Polaris/Vega/NAVI improvements and design corrections.

I said $400 GPU right in the very beginning: "These graphics cards on their own cost as much as the consoles as a whole will cost." and "So you are telling me, Sony/MS are going to take an otherwise $400 graphics card...". And you went on to argue about how it can be done.

When I said a $400 GPU, that's how much the 5700 XT costs. And even with profit reductions, it'll be a stretch to get that in there.

Don't end a post "Try again" and try to tell me you're not arguing my point. And my point has always been clear.

Just step up, and say you're challenging my point, because whether you like it or not, this is the position you have essentially taken. You think the GPU in the next consoles will be a 5700 XT.

Otherwise you have to admit as usual you didn't read what you replied to properly, and you've ended up chasing your own tail and arguing with yourself.

Avatar image for Xplode_games
#93 Edited by Xplode_games (2002 posts) -

@ronvalencia: Most of those benchmarks are at 4K. The 5700 series is targeted at 1440p resolution, and its performance degrades a bit when you increase to full 4K. Other than that, great post.

Avatar image for Xplode_games
#94 Posted by Xplode_games (2002 posts) -

@Grey_Eyed_Elf: I think you are a bit confused yourself. On consoles, the APU will have more CUs than PC but at lower clocks. The performance could be even better, the same or worse, we don't know yet. That will depend on how many CUs and at what clocks are in the final build.

Avatar image for Grey_Eyed_Elf
#95 Edited by Grey_Eyed_Elf (6493 posts) -

@Xplode_games said:

@Grey_Eyed_Elf: I think you are a bit confused yourself. On consoles, the APU will have more CUs than PC but at lower clocks. The performance could be even better, the same or worse, we don't know yet. That will depend on how many CUs and at what clocks are in the final build.

What?...

Higher CU's than what?...

  • The PS4 has more CUs than the 7850, a $250 130W GPU - the 7950, a $450 200W GPU, was also available but wasn't used because? TDP and price.
  • The X1X has more CUs than a 480, a $229 150W GPU - Vega 56, a $400 210W GPU, was also available but wasn't used because? TDP and price.

"Consoles have more CU's..." Almost spat my coffee out, so vague and misleading.

Now you guys think a new XBOX will have a what?... $400 225w GPU... Wait not just that but something better?

Avatar image for rdnav2
#96 Posted by RDNAv2 (35 posts) -

Actually no, the 5700 XT has been shown to draw 150W @ 1800 MHz.

Clock rates are a much bigger driver of power draw than die size.

Avatar image for fedor
#97 Posted by Fedor (5335 posts) -

@rdnav2: Show proof. Also, don't show some undervolted BS YouTube vid that doesn't run stable.

https://www.tomshardware.com/reviews/amd-radeon-rx_5700-rx_5700_xt,6216-5.html

Avatar image for rdnav2
#98 Posted by RDNAv2 (35 posts) -

@fedor said:

@rdnav2: Show proof. Also, don't show some undervolted BS YouTube vid that doesn't run stable.

https://www.tomshardware.com/reviews/amd-radeon-rx_5700-rx_5700_xt,6216-5.html

It's not undervolting if you're following the same voltage curve.

If you adjust that voltage curve below the stock curve, the power consumption will go down even further.

If you don't set a limit on the clock rate, it will boost above 2000 MHz, and that's when you see power draw above 200 watts.

Avatar image for rdnav2
#99 Posted by RDNAv2 (35 posts) -

And this is the stock curve: 1200 mV at 2090 MHz.

It would be asinine to keep that 1200 mV at, say, 1640 MHz.

The stock curve would indicate 910 mV at 1640 MHz.
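
A rough sketch of why following the curve matters (dynamic switching power only, roughly proportional to V² × f; leakage and board power are ignored):

```python
# Compare the two points on the stock V/F curve quoted above.
stock_v, stock_mhz = 1.200, 2090   # top of the stock curve
low_v,   low_mhz   = 0.910, 1640   # lower point on the same curve

relative_power = (low_v / stock_v) ** 2 * (low_mhz / stock_mhz)
print(f"Relative clock:         {low_mhz / stock_mhz:.2f}x")   # ~0.78x
print(f"Relative dynamic power: {relative_power:.2f}x")        # ~0.45x
```

Roughly a fifth less clock for around half the switching power, which is the kind of trade-off a console power budget would lean on.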

Avatar image for howmakewood
#100 Posted by Howmakewood (5960 posts) -
@rdnav2 said:

Power consumption doesn’t look like a problem

https://youtu.be/7_GvOe1_UKs

So do you expect MS/Sony to produce an extra amount of chips and toss out the ones that don't function properly at lower voltage?