AMD's 7 nm "Navi 10" silicon may finally address two architectural shortcomings

#1 Edited by ronvalencia (28049 posts) -

https://www.techpowerup.com/255814/amd-navi-features-8-streaming-engines-possible-rop-count-doubling

AMD's 7 nm "Navi 10" silicon may finally address two architectural shortcomings of its performance-segment GPUs: memory bandwidth and render-backends (or rather, a deficiency thereof). The GPU almost certainly features a 256-bit GDDR6 memory interface, bringing about a 50-75 percent increase in memory bandwidth over "Polaris 30." According to a sketch of the GPU's SIMD schematic put out by KOMACHI Ensaka, Navi's main number-crunching machinery is spread across eight shader engines, each with five compute units (CUs).
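A rough sanity check of that bandwidth figure, as a minimal sketch in Python: the 8 Gbps GDDR5 rate for Polaris 30 and the 12-14 Gbps GDDR6 rates for Navi are assumptions for illustration, not confirmed specs.

# Peak memory bandwidth = bus width in bytes x per-pin data rate.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

polaris30 = bandwidth_gbs(256, 8.0)   # 256 GB/s (256-bit GDDR5 at an assumed 8 Gbps)
navi_low  = bandwidth_gbs(256, 12.0)  # 384 GB/s -> roughly +50% over Polaris 30
navi_high = bandwidth_gbs(256, 14.0)  # 448 GB/s -> roughly +75% over Polaris 30
print(polaris30, navi_low, navi_high)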

Five CUs in each of eight shader engines, assuming each CU continues to pack 64 stream processors, work out to 2,560 stream processors on the silicon. This arrangement is in stark contrast to the "Hawaii" silicon from 2013, which crammed 10 CUs per shader engine across four shader engines to achieve the same 2,560 SP count on the Radeon R9 290. The "Fiji" silicon that followed "Hawaii" stuck to the 4-shader-engine arrangement. Interestingly, both these chips featured four render-backends per shader engine, working out to 64 ROPs. AMD's decision to go with 8 shader engines raises hopes for the company doubling ROP counts over "Polaris," to 64, by packing two render-backends per shader engine. AMD unveils Navi in its May 27 Computex keynote, followed by a possible early-July launch.
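The stream-processor and ROP arithmetic above can be verified with a minimal sketch. The engine, CU and render-backend counts come from the article; the 64-SPs-per-CU and 4-ROPs-per-render-backend figures are the usual GCN conventions, assumed here to carry over.

# Totals = engines x CUs-per-engine x 64 SPs, and engines x RBs-per-engine x 4 ROPs.
SP_PER_CU = 64
ROPS_PER_RB = 4

def gpu_totals(shader_engines, cus_per_engine, rbs_per_engine):
    sps = shader_engines * cus_per_engine * SP_PER_CU
    rops = shader_engines * rbs_per_engine * ROPS_PER_RB
    return sps, rops

print("Navi 10 (rumoured):", gpu_totals(8, 5, 2))   # (2560, 64)
print("Hawaii / R9 290:", gpu_totals(4, 10, 4))     # (2560, 64)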

Here's a better rendition of the 8 Streaming Engines.

--------------

My comment: the above layout reduces the need for AMD to overclock its GPUs into a high-leakage state with high clock speeds.

PS: This is speculative.

Historically, AMD doubled its Shader Engine count from the 7870/7970's dual Shader Engines to the R9 290X's quad Shader Engines.

The PS4's Liverpool GPU is nearly half of Hawaii with a similar feature set.

It would be stupid for the PS5/Xbox Anaconda to recycle the PS4 Pro/X1X's existing quad Shader Engine design.

#2 Edited by Yams1980 (3511 posts) -

I just hope AMD can at some point start making GPUs on par with Nvidia's. The last few generations of their GPUs have been god-awful, power-hungry things that cost the same as Nvidia's GPUs. You can't make a product that's inferior in every way and then sell it for the same amount. It doesn't work that way.

Nvidia is loving it; they are robbing us all blind and nobody is keeping them in check. I hope it's true that Intel is going to start making GPUs to get into the gaming market, but I heard it may just be a mid-range type of GPU and they may not get into the high end.

#3 Posted by GoldenElementXL (3234 posts) -

Just give me a 12-core, 5 GHz CPU. This 2700X is choking the 2080 Ti.

#4 Posted by Random_Matt (4241 posts) -

More interested in the Intel GPUs.

#5 Posted by lhughey (4642 posts) -
@goldenelementxl said:

Just give me a 12-core, 5 GHz CPU. This 2700X is choking the 2080 Ti.

The 3700X is calling you. :)

It might make me retire my i7-920 (finally)

#6 Posted by osan0 (15438 posts) -

@Random_Matt: It will be interesting to see what they bring, but I have a bad feeling they may disappoint from a gaming standpoint. I reckon the main reason Intel is making a GPU is more for compute than gaming. Any gaming performance will be more of a happy by-product of the architecture rather than the main focus.

I hope I'm wrong, though, and they make an excellent gaming GPU.

As for Navi: I hope the changes address as many problems as possible with their current architecture. Beating the crap out of physics is not sustainable.

#7 Posted by DaVillain- (36868 posts) -
@goldenelementxl said:

Just give me a 12-core, 5 GHz CPU. This 2700X is choking the 2080 Ti.

My 2700X is handling well paired with a 2070, but even so, it can choke from time to time because I overclock my 2070. A 3700X with 16 cores is all I need, and if I can at least overclock it to 4.5 GHz, I'll be happy. The 2700X just can't keep up with the new RTX cards, it seems.

#8 Posted by Random_Matt (4241 posts) -
@osan0 said:

@Random_Matt: It will be interesting to see what they bring, but I have a bad feeling they may disappoint from a gaming standpoint. I reckon the main reason Intel is making a GPU is more for compute than gaming. Any gaming performance will be more of a happy by-product of the architecture rather than the main focus.

I hope I'm wrong, though, and they make an excellent gaming GPU.

As for Navi: I hope the changes address as many problems as possible with their current architecture. Beating the crap out of physics is not sustainable.

I'll probably just wait till Nvidia releases the 30** series; until GCN is dropped, AMD will still suck.

#9 Posted by GoldenElementXL (3234 posts) -

@davillain- said:
@goldenelementxl said:

Just give me a 12-core, 5 GHz CPU. This 2700X is choking the 2080 Ti.

My 2700X is handling well paired with a 2070, but even so, it can choke from time to time because I overclock my 2070. A 3700X with 16 cores is all I need, and if I can at least overclock it to 4.5 GHz, I'll be happy. The 2700X just can't keep up with the new RTX cards, it seems.

I used to be fine with the fps deficit AMD has versus Intel until I went 144 Hz. Intel gets like 30-50 fps more in some games at 1440p. That's just insane. 5 GHz is a must for AMD with Zen 2. I will go with the 12-core over the 16-core if the clock-speed rumors are true. And I hope they are, because I don't feel like buying a new motherboard right now.

#10 Edited by ronvalencia (28049 posts) -

@Random_Matt said:
@osan0 said:

@Random_Matt: It will be interesting to see what they bring, but I have a bad feeling they may disappoint from a gaming standpoint. I reckon the main reason Intel is making a GPU is more for compute than gaming. Any gaming performance will be more of a happy by-product of the architecture rather than the main focus.

I hope I'm wrong, though, and they make an excellent gaming GPU.

As for Navi: I hope the changes address as many problems as possible with their current architecture. Beating the crap out of physics is not sustainable.

I'll probably just wait till Nvidia releases the 30** series; until GCN is dropped, AMD will still suck.

GCN's compute is not the problem (e.g. cryptocurrency performance); GCN's geometry-raster engines are the major bottlenecks. AMD has been overclocking its GCN GPUs into an electron-leaky state to improve the geometry-raster engines through high clock speed.
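To illustrate that point, a minimal sketch of peak primitive throughput: more shader engines means more geometry-raster front-ends working in parallel, so the clock does not have to be pushed as hard. The 1-triangle-per-engine-per-clock rate is the usual GCN rule of thumb, and the clock speeds are illustrative assumptions, not product specs.

# Peak triangle rate scales with both front-end count and clock speed.
def peak_triangles_per_sec(shader_engines, clock_mhz, tris_per_engine_per_clock=1):
    return shader_engines * tris_per_engine_per_clock * clock_mhz * 1e6

four_se_overclocked = peak_triangles_per_sec(4, 1500)  # 6.0e9 triangles/s at a pushed clock
eight_se_modest     = peak_triangles_per_sec(8, 1000)  # 8.0e9 triangles/s at a modest clock
print(four_se_overclocked, eight_se_modest)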

#11 Posted by PC_Rocks (2446 posts) -

@osan0 said:

@Random_Matt: It will be interesting to see what they bring, but I have a bad feeling they may disappoint from a gaming standpoint. I reckon the main reason Intel is making a GPU is more for compute than gaming. Any gaming performance will be more of a happy by-product of the architecture rather than the main focus.

I hope I'm wrong, though, and they make an excellent gaming GPU.

As for Navi: I hope the changes address as many problems as possible with their current architecture. Beating the crap out of physics is not sustainable.

Indeed! The reason Intel got into GPUs is that Nvidia/AMD, but mostly Nvidia, were eating their lunch in HPC/data-center solutions.

#12 Posted by SuperfluousReal (361 posts) -

@ronvalencia said:

https://www.techpowerup.com/255814/amd-navi-features-8-streaming-engines-possible-rop-count-doubling

AMD's 7 nm "Navi 10" silicon may finally address two architectural shortcomings of its performance-segment GPUs: memory bandwidth and render-backends (or rather, a deficiency thereof). The GPU almost certainly features a 256-bit GDDR6 memory interface, bringing about a 50-75 percent increase in memory bandwidth over "Polaris 30." According to a sketch of the GPU's SIMD schematic put out by KOMACHI Ensaka, Navi's main number-crunching machinery is spread across eight shader engines, each with five compute units (CUs).

Five CUs in each of eight shader engines, assuming each CU continues to pack 64 stream processors, work out to 2,560 stream processors on the silicon. This arrangement is in stark contrast to the "Hawaii" silicon from 2013, which crammed 10 CUs per shader engine across four shader engines to achieve the same 2,560 SP count on the Radeon R9 290. The "Fiji" silicon that followed "Hawaii" stuck to the 4-shader-engine arrangement. Interestingly, both these chips featured four render-backends per shader engine, working out to 64 ROPs. AMD's decision to go with 8 shader engines raises hopes for the company doubling ROP counts over "Polaris," to 64, by packing two render-backends per shader engine. AMD unveils Navi in its May 27 Computex keynote, followed by a possible early-July launch.

Here's a better rendition of the 8 Streaming Engines.

--------------

My comment: the above layout reduces the need for AMD to overclock its GPUs into a high-leakage state with high clock speeds.

PS: This is speculative.

Historically, AMD doubled its Shader Engine count from the 7870/7970's dual Shader Engines to the R9 290X's quad Shader Engines.

The PS4's Liverpool GPU is nearly half of Hawaii with a similar feature set.

It would be stupid for the PS5/Xbox Anaconda to recycle the PS4 Pro/X1X's existing quad Shader Engine design.

Now if only they could fix their terrible OpenGL performance.

#13 Posted by ronvalencia (28049 posts) -

Basics on geometry-raster engines.