Navi and "Next Gen"


#51 HalcyonScarlet
Member since 2011 • 13659 Posts

@ronvalencia said:
@HalcyonScarlet said:
@ronvalencia said:

It's AMD marketing speak when AMD is late to the market. AMD's damage-control video is not new.

AMD's price argument is weak when the cheaper NVIDIA Super cards arrive in July 2019.

AMD refuses to acknowledge that Navi's wave32 is similar to CUDA's 32-thread warps.

Ahh ha, sure. Except they're not wrong. What is the point in implementing a feature in its infancy just for the sake of it?

AMD's aim for hardware-accelerated ray tracing is lighting effects. AMD is not even aiming for multiple different ray-tracing types, e.g. lighting and reflections. AMD's ray-tracing argument is geared towards doing a single ray-tracing type and doing it well.

RTX Turing can easily handle a single ray-tracing type.

Ray tracing also needs high memory bandwidth and more VRAM, which is gimped on the RTX 2060 6GB. The RTX 2060 Super has 8GB of 256-bit GDDR6-14000 memory.

Ray tracing should be used with variable rate shading to conserve shader resources and reduce memory bandwidth usage.

Yeah, I don't care that much; I don't have a bone in this fight. I'm just saying what Jay was saying, and I agree with him.

And to be honest, 'a single ray-tracing type' does show that it's in its infancy. It kind of sounds like damage control. Just come out and say they aren't ready to do multiple ray-tracing types. If I'm buying into it, I'd want it to do more than a single effect.

Also, you're somewhat making their point. Nvidia's mid-range RTX cards don't do as good a job, and no matter how you put it, ray tracing has a performance impact. So Navi in the lower range might be better off without it.

The other point he made was that whenever ray tracing was brought up, PC gamers just said it wasn't important or they didn't care, and I've seen that around here; but when it isn't on offer, suddenly they're asking where it is.

If you want to say it's damage control, that's fine; it's your opinion. I don't share that opinion.


#52  Edited By ronvalencia
Member since 2008 • 29612 Posts

@HalcyonScarlet said:
@ronvalencia said:
@HalcyonScarlet said:
@ronvalencia said:

It's AMD marketing speak when AMD is late to the market. AMD's damage-control video is not new.

AMD's price argument is weak when the cheaper NVIDIA Super cards arrive in July 2019.

AMD refuses to acknowledge that Navi's wave32 is similar to CUDA's 32-thread warps.

Ahh ha, sure. Except they're not wrong. What is the point in implementing a feature in its infancy just for the sake of it?

AMD's aim for hardware-accelerated ray tracing is lighting effects. AMD is not even aiming for multiple different ray-tracing types, e.g. lighting and reflections. AMD's ray-tracing argument is geared towards doing a single ray-tracing type and doing it well.

RTX Turing can easily handle a single ray-tracing type.

Ray tracing also needs high memory bandwidth and more VRAM, which is gimped on the RTX 2060 6GB. The RTX 2060 Super has 8GB of 256-bit GDDR6-14000 memory.

Ray tracing should be used with variable rate shading to conserve shader resources and reduce memory bandwidth usage.

Yeah, I don't care that much; I don't have a bone in this fight. I'm just saying what Jay was saying, and I agree with him.

And to be honest, 'a single ray-tracing type' does show that it's in its infancy. It kind of sounds like damage control. Just come out and say they aren't ready to do multiple ray-tracing types. If I'm buying into it, I'd want it to do more than a single effect.

Also, you're somewhat making their point. Nvidia's mid-range RTX cards don't do as good a job, and no matter how you put it, ray tracing has a performance impact. So Navi in the lower range might be better off without it.

The other point he made was that whenever ray tracing was brought up, PC gamers just said it wasn't important or they didn't care, and I've seen that around here; but when it isn't on offer, suddenly they're asking where it is.

If you want to say it's damage control, that's fine; it's your opinion. I don't share that opinion.

NVIDIA plans to respond by recycling the TU104 from the RTX 2080 as the RTX 2070 Super, which should improve ray-tracing performance and TFLOPS.

https://wccftech.com/nvidia-rtx-super-graphics-cards-msrp-leaked/

RTX 2080 SUPER $799 MSRP

RTX 2070 SUPER $599 MSRP

RTX 2060 SUPER $429 MSRP; this SKU matches the RX 5700's 256-bit bus, 8GB GDDR6 setup.
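
For reference, a quick back-of-envelope check of what a 256-bit GDDR6-14000 configuration works out to in bandwidth (a rough sketch; `gddr6_bandwidth_gbps` is just an illustrative helper, and the bus widths are the publicly listed specs):

```python
# Rough peak-bandwidth arithmetic for GDDR6 configurations.
# bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps per pin)

def gddr6_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# RTX 2060 Super / RX 5700: 256-bit bus, GDDR6-14000 (14 Gbps per pin)
print(gddr6_bandwidth_gbps(256, 14.0))  # 448.0 GB/s

# RTX 2060 6GB: 192-bit bus at the same 14 Gbps -> noticeably less bandwidth
print(gddr6_bandwidth_gbps(192, 14.0))  # 336.0 GB/s
```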


#53 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

AMD's aim for hardware-accelerated ray tracing is lighting effects. AMD is not even aiming for multiple different ray-tracing types, e.g. lighting and reflections. AMD's ray-tracing argument is geared towards doing a single ray-tracing type and doing it well.

RTX Turing can easily handle a single ray-tracing type.

Ray tracing also needs high memory bandwidth and more VRAM, which is gimped on the RTX 2060 6GB. The RTX 2060 Super has 8GB of 256-bit GDDR6-14000 memory.

Ray tracing should be used with variable rate shading to conserve shader resources and reduce memory bandwidth usage.

Look at the 2080 Ti. Look at the huge-ass drop in performance that hardware-based RT still delivers, and this is the very best GPU out there; anything from AMD will probably have an equal or worse impact on performance.

And I am sure neither Scarlet nor the PS5 will get something as powerful as the RTX 2080.


#54 Grey_Eyed_Elf
Member since 2011 • 7970 Posts

@tormentos said:
@ronvalencia said:

AMD's aim for hardware-accelerated ray tracing is lighting effects. AMD is not even aiming for multiple different ray-tracing types, e.g. lighting and reflections. AMD's ray-tracing argument is geared towards doing a single ray-tracing type and doing it well.

RTX Turing can easily handle a single ray-tracing type.

Ray tracing also needs high memory bandwidth and more VRAM, which is gimped on the RTX 2060 6GB. The RTX 2060 Super has 8GB of 256-bit GDDR6-14000 memory.

Ray tracing should be used with variable rate shading to conserve shader resources and reduce memory bandwidth usage.

Look at the 2080 Ti. Look at the huge-ass drop in performance that hardware-based RT still delivers, and this is the very best GPU out there; anything from AMD will probably have an equal or worse impact on performance.

And I am sure neither Scarlet nor the PS5 will get something as powerful as the RTX 2080.

Not to mention that they will not be running ray tracing, or the game on Ultra settings, in the first place.


#55  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

AMD's aim for hardware-accelerated ray tracing is lighting effects. AMD is not even aiming for multiple different ray-tracing types, e.g. lighting and reflections. AMD's ray-tracing argument is geared towards doing a single ray-tracing type and doing it well.

RTX Turing can easily handle a single ray-tracing type.

Ray tracing also needs high memory bandwidth and more VRAM, which is gimped on the RTX 2060 6GB. The RTX 2060 Super has 8GB of 256-bit GDDR6-14000 memory.

Ray tracing should be used with variable rate shading to conserve shader resources and reduce memory bandwidth usage.

Look at the 2080 Ti. Look at the huge-ass drop in performance that hardware-based RT still delivers, and this is the very best GPU out there; anything from AMD will probably have an equal or worse impact on performance.

And I am sure neither Scarlet nor the PS5 will get something as powerful as the RTX 2080.

You have posted old Battlefield V build benchmarks.

FreeSync handles 4K red line frame rates.

The Variable Rate Shading and Rapid Packed Math features weren't used, hence Battlefield V wasn't fully using Turing's other features. Low RTRT improves the frame rate, e.g. 4K at 54 fps.

https://www.guru3d.com/news-story/december-4-battlefield-v-raytracing-dxr-performance-patch-released-(benchmarks).html

The RTX 2070 gets 4K 36 fps with low RTRT, which is similar to a typical game console's 30 Hz target. Frame rates can improve with Variable Rate Shading and Rapid Packed Math.

https://www.guru3d.com/news-story/quick-test-wolfenstein-ii-the-new-colossus-adaptive-shading-benchmarks.html
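
To illustrate why variable rate shading helps, here is a rough sketch of the pixel-shader work saved when parts of a frame are shaded at coarser rates. The frame-split fractions below are made-up example values for illustration only, not figures from any benchmark above:

```python
# Illustrative sketch: how variable rate shading (VRS) cuts pixel-shader work.
# With a 2x2 coarse rate, one shader invocation covers a 2x2 pixel block (4 pixels).
# The fractions below are hypothetical, not measured data.

WIDTH, HEIGHT = 3840, 2160          # 4K frame
total_pixels = WIDTH * HEIGHT

# Hypothetical split of the frame by shading rate
frame_split = {
    (1, 1): 0.50,   # 50% of the frame shaded at full rate
    (2, 1): 0.30,   # 30% at 2x1 coarse rate
    (2, 2): 0.20,   # 20% at 2x2 coarse rate
}

invocations = sum(
    total_pixels * fraction / (rx * ry)
    for (rx, ry), fraction in frame_split.items()
)

print(f"Full-rate shading:  {total_pixels:,.0f} invocations")
print(f"With VRS (example): {invocations:,.0f} invocations "
      f"({invocations / total_pixels:.0%} of full rate)")
```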

https://www.reddit.com/r/Amd/comments/bz68iz/amd_finally_has_primitive_shaders_working_in_rx/

AMD finally has primitive shaders working in RX 5700 Series (Ray tracing and Variable rate pixel shading coming in 2020)

From Anandtech (about primitive shaders)

The one exception to all of this is the primitive shader. Vega’s most infamous feature is back, and better still it’s enabled this time. The primitive shader is compiler controlled, and thanks to some hardware changes to make it more useful, it now makes sense for AMD to turn it on for gaming. Vega’s primitive shader, though fully hardware functional, was difficult to get a real-world performance boost from, and as a result AMD never exposed it on Vega.

About ray tracing and variable rate pixel shading

With a single exception, there also aren’t any new graphics features. Navi does not include any hardware ray tracing support, nor does it support variable rate pixel shading. AMD is aware of the demands for these, and hardware support for ray tracing is in their roadmap for RDNA 2 (the architecture formally known as “Next Gen”). But none of that is present here.

The RX 5700 is missing a Scorpio GPU feature, LOL. Hint: it starts with the letter V.


#56  Edited By tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

You have posted old Battlefield V build benchmarks.

The RX 5700 is missing a Scorpio GPU feature, LOL. Hint: it starts with the letter V.

You need to stop. The benchmark I posted and yours have not even three weeks of difference between them, by the site's own dates.

But even taking yours as valid, RT still delivers a fu**ing blow of more than 100% to the performance of that 2080 Ti, which I am sure neither the PS5 nor Scarlet will have inside.

Neither the PS5 nor Scarlet will get a GPU as powerful as the RTX 2080; they simply will not get one as powerful, so any performance hit shown here I am sure will apply to the PS5 and Scarlet. Ray tracing is a very expensive feature even in hardware-based form.

I prefer that performance to translate into faster frame rates.

Yeah, and Scorpio is basically missing all the Vega features, so what is your point?


#57 Grey_Eyed_Elf
Member since 2011 • 7970 Posts

@tormentos: It's pretty much the reason why console options will be here to stay... I assume most games will offer a performance mode without ray tracing and a pretty mode with ray tracing, a 30 FPS lock, and dynamic resolution.

Also, there will be first-party games made specifically for those consoles, not to mention that there are only three games that actually use ray tracing right now; by the time the consoles are here, development and the use of ray tracing will have changed.

The consoles still won't be as powerful as a 2080 though; raw performance might not even match an RTX 2070 if they sacrifice CUs for ray-tracing cores.


#58  Edited By tormentos
Member since 2003 • 33784 Posts

@Grey_Eyed_Elf said:

@tormentos: It's pretty much the reason why console options will be here to stay... I assume most games will offer a performance mode without ray tracing and a pretty mode with ray tracing, a 30 FPS lock, and dynamic resolution.

Also, there will be first-party games made specifically for those consoles, not to mention that there are only three games that actually use ray tracing right now; by the time the consoles are here, development and the use of ray tracing will have changed.

The consoles still won't be as powerful as a 2080 though; raw performance might not even match an RTX 2070 if they sacrifice CUs for ray-tracing cores.

Yep, I am a console lover, but damn, some people here believe that AMD can release a GPU with hardware-based RT and that somehow it will have less of an impact on performance on that GPU than on an RTX 2080 Ti, which is ridiculous.

The RTX 2080 Ti already shows how big the impact on performance is, even on low settings; I don't see weaker GPUs doing any better.


#59 Grey_Eyed_Elf
Member since 2011 • 7970 Posts

@tormentos said:
@Grey_Eyed_Elf said:

@tormentos: It's pretty much the reason why console options will be here to stay... I assume most games will offer a performance mode without ray tracing and a pretty mode with ray tracing, a 30 FPS lock, and dynamic resolution.

Also, there will be first-party games made specifically for those consoles, not to mention that there are only three games that actually use ray tracing right now; by the time the consoles are here, development and the use of ray tracing will have changed.

The consoles still won't be as powerful as a 2080 though; raw performance might not even match an RTX 2070 if they sacrifice CUs for ray-tracing cores.

Yep, I am a console lover, but damn, some people here believe that AMD can release a GPU with hardware-based RT and that somehow it will have less of an impact on performance on that GPU than on an RTX 2080 Ti, which is ridiculous.

The RTX 2080 Ti already shows how big the impact on performance is, even on low settings; I don't see weaker GPUs doing any better.

Fanboys will be fanboys... And I wouldn't have faith in any AMD GPU since even with their 7nm parts their TDP is horrible.


#60  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

You have posted old Battlefield V build benchmarks.

The RX 5700 is missing a Scorpio GPU feature, LOL. Hint: it starts with the letter V.

1. You need to stop. The benchmark I posted and yours have not even three weeks of difference between them, by the site's own dates.

2. But even taking yours as valid, RT still delivers a fu**ing blow of more than 100% to the performance of that 2080 Ti, which I am sure neither the PS5 nor Scarlet will have inside.

3. Neither the PS5 nor Scarlet will get a GPU as powerful as the RTX 2080; they simply will not get one as powerful, so any performance hit shown here I am sure will apply to the PS5 and Scarlet. Ray tracing is a very expensive feature even in hardware-based form.

4. I prefer that performance to translate into faster frame rates.

5. Yeah, and Scorpio is basically missing all the Vega features, so what is your point?

1. You posted an old benchmark to misrepresent the situation.

2 and 3. Who said I'm using the RTX 2080 Ti? I stated that the RTX 2070 gets 4K 36 fps with low RTRT, which is similar to a typical game console's 30 Hz target. Frame rates can improve with Variable Rate Shading and Rapid Packed Math.

4. It's up to the developers.

5. In terms of silicon maturity, X1X is based on Vega

Vega 64 LC has 295 watts for 13 TFLOPS at boost clock, or 11.5 TFLOPS at base clock.

The X1X GPU has ~150 watts for 6 TFLOPS at base clock.

Vega 64's TFLOPS-vs-TDP ratio scales from the X1X GPU's logic design maturity!

The RX 580's 6 TFLOPS comes at 185 watts. Polaris 20 has inferior TFLOPS per watt compared to the X1X GPU and Vega 64.

Polaris 10/20 can't scale to Vega 64's TFLOPS vs TDP ratios.
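
Working the TFLOPS-per-watt figures out from the numbers above (a quick sketch that takes the quoted TDP and TFLOPS values at face value; board power vs chip power is not distinguished here, and the ~150 W X1X figure is the post's estimate):

```python
# TFLOPS-per-watt comparison using the figures quoted above, taken at face value.

parts = {
    "Vega 64 LC (boost)":  (13.0, 295),   # FP32 TFLOPS, watts
    "Vega 64 LC (base)":   (11.5, 295),
    "Xbox One X GPU":      (6.0, 150),    # ~150 W estimate from the post
    "RX 580 (Polaris 20)": (6.0, 185),
}

for name, (tflops, watts) in parts.items():
    print(f"{name:22s} {tflops / watts * 1000:5.1f} GFLOPS per watt")

# Roughly: X1X GPU ~40 GFLOPS/W, RX 580 ~32 GFLOPS/W, Vega 64 LC ~39-44 GFLOPS/W,
# which is the "Polaris 20 has inferior TFLOPS per watt" point.
```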

  • My Vega context for X1X is for Vega's TDP vs TFLOPS scaling.
  • X1X GPU has design concepts from Vega e.g. ROPS with multi-MB high speed cache. X1X GPU's 2MB L2 cache + 2MB render cache = Vega 56/64's 4 MB L2 cache.
  • X1X GPU ROPS has multi-MB cache similar to Vega 56/64's ROPS being connected to multi-MB cache. This is missing in Polaris RX-480 and PS4 Pro.
  • Vega is still GCN. Intel's semi-custom Vega 24 doesn't have RPM and has superior perf/watt over Polaris counterpart.

PS4 Pro GPU's 2.3X gain over 28 nm GCN is based on 1st gen Polaris electron leakage mitigation maturity.

Try again.


#61 Zero_epyon
Member since 2004 • 20103 Posts
[embedded video]


#63 ronvalencia
Member since 2008 • 29612 Posts

@Zero_epyon:

On the Scarlet APU's physical parameters, it's an Xbox Scorpio-like pattern with a new process node and new CPU/GPU IP.

https://forum.beyond3d.com/threads/next-generation-hardware-speculation-with-a-technical-spin-post-e3-2019.61245/page-8

Alright, broke out the measuring tape and found out some things about Navi and Anaconda in terms of sizes on 7nm.

5700:

GDDR6 PHY controller: 4.5mm² x 8

Dual CU: 3.37mm² x 20

4-ROP cluster: 0.55mm² x 16

L1 + L2 + ACE + geometry processor + empty buffer spaces + etc.: 139mm²

Now Anaconda:

A rougher estimate using the 12x14mm GDDR6 chips next to the SOC.

370mm²-390mm².

It's a bit bigger than the 1X SOC for sure.

If we use the figure of 380mm²,

75mm² for the CPU

45mm² for 10 GDDR6 controllers

8.8mm² for ROPs

140mm² for buses, caches, ACEs, geometry processors, shape, etc. I might be overestimating this part, as the 5700 seems to have lots of "empty" areas.

We have ~110mm² left for CUs + RT hardware. There is enough there for ~30 dual CUs and RT extensions.

Conclusion:

The Anaconda SOC is around the minimum size you need to fit the maximum Navi GPU and Zen 2 cores.

I expect Anaconda to have a minimum of 48 CUs if the secret sauce is extra heavy, or 60 CUs if the sauce is light.
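
To make the arithmetic in that quoted estimate easier to follow, here is a small sketch that simply redoes the area budget with the figures given above. Every value is the forum poster's rough measurement, not an official die-size number:

```python
# Re-running the Anaconda die-area budget from the quoted estimate (all values in mm^2).
# Every figure here is the forum poster's rough measurement, taken at face value.

total_die = 380.0          # assumed Anaconda SoC size (middle of the 370-390 range)
cpu = 75.0                 # Zen 2 CPU area estimate
gddr6_phys = 4.5 * 10      # ten GDDR6 PHY controllers
rops = 8.8                 # ROP clusters
uncore = 140.0             # buses, caches, ACEs, geometry processors, etc.

left_for_cus = total_die - cpu - gddr6_phys - rops - uncore
dual_cu_area = 3.37        # one Navi dual-CU per the RX 5700 measurement

print(f"Area left for CUs + RT hardware: {left_for_cus:.1f} mm^2")
print(f"Dual CUs that would fit with no RT extensions: {left_for_cus / dual_cu_area:.0f}")
# ~111 mm^2 left, ~33 dual CUs (66 CUs) if no area went to RT hardware,
# hence the post's 48-60 CU range once 'secret sauce' silicon is budgeted in.
```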


#64  Edited By tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

1. You posted an old benchmark to misrepresent the situation.

2 and 3. Who said I'm using the RTX 2080 Ti? I stated that the RTX 2070 gets 4K 36 fps with low RTRT, which is similar to a typical game console's 30 Hz target. Frame rates can improve with Variable Rate Shading and Rapid Packed Math.

4. It's up to the developers.

5. In terms of silicon maturity, X1X is based on Vega

Vega 64 LC has 295 watts for 13 TFLOPS at boost clock, or 11.5 TFLOPS at base clock.

The X1X GPU has ~150 watts for 6 TFLOPS at base clock.

Vega 64's TFLOPS-vs-TDP ratio scales from the X1X GPU's logic design maturity!

The RX 580's 6 TFLOPS comes at 185 watts. Polaris 20 has inferior TFLOPS per watt compared to the X1X GPU and Vega 64.

Polaris 10/20 can't scale to Vega 64's TFLOPS vs TDP ratios.

  • My Vega context for X1X is for Vega's TDP vs TFLOPS scaling.
  • X1X GPU has design concepts from Vega e.g. ROPS with multi-MB high speed cache. X1X GPU's 2MB L2 cache + 2MB render cache = Vega 56/64's 4 MB L2 cache.
  • X1X GPU ROPS has multi-MB cache similar to Vega 56/64's ROPS being connected to multi-MB cache. This is missing in Polaris RX-480 and PS4 Pro.
  • Vega is still GCN. Intel's semi-custom Vega 24 doesn't have RPM and has superior perf/watt over Polaris counterpart.

PS4 Pro GPU's 2.3X gain over 28 nm GCN is based on 1st gen Polaris electron leakage mitigation maturity.

Try again.

1. I posted a benchmark that was right, and I found the very first one; yours came just a few weeks later and still proved my point. The drop in frames was substantial, cutting frame rates by more than half on a GPU which now retails for more than eleven hundred freaking dollars, and somehow you think AMD is going to have it any better?

Your info corrected mine, but at the same time it destroyed your own point and validated mine: RT is too expensive even with hardware support.

2. The benchmark you posted was for the 2080 Ti, not for the 2070.

3. Yes, because it is so great to have games falling under 30 FPS in favor of a damn effect; this is something that has plagued consoles for generations.

4. The XBO X lacks Vega's features; it was just a damn huge Polaris with lower-clocked CUs. It wasn't new or revolutionary, it was just a beefier GPU.

Dude, stop. The only problem the Pro had is that it is weak, period, nothing more, nothing less. Give the Pro more bandwidth, more RAM, better cooling, and hike those clock speeds to match the RX 580, and you would have the same results as Scorpio, give or take, because the RX 580 has beaten Scorpio in many games using fewer CUs and higher clock speeds.

The PS4 Pro just happens to be low-clocked for an obvious reason: Sony was aiming for a reasonable upgrade. For $100 more you got a GPU basically twice as powerful, which was OK. On the Xbox One side, when the Xbox One X landed the XBO was being sold for $250, which means the XBO X's price of admission for better graphics was substantially higher, which is why few people jumped in.


#65 tormentos
Member since 2003 • 33784 Posts

@Zero_epyon said:
[embedded video]

Some interesting things: Richard claims MS hasn't confirmed an 8-core CPU while Sony has.

Another thing: while I understand a FLOPS measurement is not 100% accurate, if both GPUs use Navi the FLOP count should be telling, because it is the same tech.

I also love how he claims a 2.5X resolution advantage for the Xbox One X with just 45% more GPU power, when the PS4 did that to the Xbox One as well with just a 40% better GPU, in games like MGS (720p vs 1080p), plus the PS4 version had dynamic skies which the Xbox One totally lacked.


#66  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@Zero_epyon said:

Some interesting things: Richard claims MS hasn't confirmed an 8-core CPU while Sony has.

Another thing: while I understand a FLOPS measurement is not 100% accurate, if both GPUs use Navi the FLOP count should be telling, because it is the same tech.

I also love how he claims a 2.5X resolution advantage for the Xbox One X with just 45% more GPU power, when the PS4 did that to the Xbox One as well with just a 40% better GPU, in games like MGS (720p vs 1080p), plus the PS4 version had dynamic skies which the Xbox One totally lacked.

X1X's RDR2 has a 2X pixel count over the PS4 Pro, while the same RDR2 game scales by 2X pixels between the PS4 and PS4 Pro, which matched the TFLOPS difference between the PS4 and PS4 Pro.

Lower graphics pipeline latency is in vogue, not just the TFLOPS argument.

The XBO has ESRAM-vs-DDR3 programming difficulty-curve issues.

For a budget box, I prefer a GPU bias with enough Zen v2 CPU cores. For next-gen games, MS's 120 fps PR indicates similar CPU latency and compute power goals. Notice the word "indicates", which is different from "confirmed".
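
For a rough sense of that scaling argument, here is a sketch using the commonly reported RDR2 rendering resolutions and console FP32 TFLOPS figures. Treat the resolutions as approximate: the PS4 Pro value is its checkerboard rendering resolution, and none of these numbers come from the posts above:

```python
# Pixel-count scaling vs FP32 TFLOPS across the consoles, per the argument above.
# Resolutions are the commonly reported RDR2 rendering resolutions (PS4 Pro via checkerboard);
# treat them as approximate.

consoles = {
    "PS4":        (1920 * 1080, 1.84),   # pixels, TFLOPS
    "PS4 Pro":    (1920 * 2160, 4.2),
    "Xbox One X": (3840 * 2160, 6.0),
}

base_pixels, base_tflops = consoles["PS4"]
for name, (pixels, tflops) in consoles.items():
    print(f"{name:10s} pixels x{pixels / base_pixels:.2f}  TFLOPS x{tflops / base_tflops:.2f}")

# PS4 -> PS4 Pro: ~2x the pixels for ~2.3x the TFLOPS.
# PS4 Pro -> X1X: another ~2x the pixels for only ~1.4x the TFLOPS,
# which is the 'graphics pipeline improvements, not just TFLOPS' point.
```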


#67 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

X1X's RDR2 has a 2X pixel count over the PS4 Pro, while the same RDR2 game scales by 2X pixels between the PS4 and PS4 Pro, which matched the TFLOPS difference between the PS4 and PS4 Pro.

Lower graphics pipeline latency is in vogue, not just the TFLOPS argument.

The XBO has ESRAM-vs-DDR3 programming difficulty-curve issues.

For a budget box, I prefer a GPU bias with enough Zen v2 CPU cores. For next-gen games, MS's 120 fps PR indicates similar CPU latency and compute power goals. Notice the word "indicates", which is different from "confirmed".

The PS4's MGS has 2X the pixels over the Xbox One, plus dynamic skies.

How much of a gap is that? Be HONEST.

Now, considering that you and MS spent many years defending a miserable 150 MHz difference in CPU, how BIG do you think the difference will be if the PS5 actually has 8 cores / 16 threads vs Scarlet having 4 cores / 8 threads?


#68  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

1. You posted an old benchmark to misrepresent the situation.

2 and 3. Who said I'm using the RTX 2080 Ti? I stated that the RTX 2070 gets 4K 36 fps with low RTRT, which is similar to a typical game console's 30 Hz target. Frame rates can improve with Variable Rate Shading and Rapid Packed Math.

4. It's up to the developers.

5. In terms of silicon maturity, X1X is based on Vega

Vega 64 LC has 295 watts for 13 TFLOPS at boost clock, or 11.5 TFLOPS at base clock.

The X1X GPU has ~150 watts for 6 TFLOPS at base clock.

Vega 64's TFLOPS-vs-TDP ratio scales from the X1X GPU's logic design maturity!

The RX 580's 6 TFLOPS comes at 185 watts. Polaris 20 has inferior TFLOPS per watt compared to the X1X GPU and Vega 64.

Polaris 10/20 can't scale to Vega 64's TFLOPS vs TDP ratios.

  • My Vega context for X1X is for Vega's TDP vs TFLOPS scaling.
  • X1X GPU has design concepts from Vega e.g. ROPS with multi-MB high speed cache. X1X GPU's 2MB L2 cache + 2MB render cache = Vega 56/64's 4 MB L2 cache.
  • X1X GPU ROPS has multi-MB cache similar to Vega 56/64's ROPS being connected to multi-MB cache. This is missing in Polaris RX-480 and PS4 Pro.
  • Vega is still GCN. Intel's semi-custom Vega 24 doesn't have RPM and has superior perf/watt over Polaris counterpart.

PS4 Pro GPU's 2.3X gain over 28 nm GCN is based on 1st gen Polaris electron leakage mitigation maturity.

Try again.

1. I posted a benchmark that was right, and I found the very first one; yours came just a few weeks later and still proved my point. The drop in frames was substantial, cutting frame rates by more than half on a GPU which now retails for more than eleven hundred freaking dollars, and somehow you think AMD is going to have it any better?

Your info corrected mine, but at the same time it destroyed your own point and validated mine: RT is too expensive even with hardware support.

2. The benchmark you posted was for the 2080 Ti, not for the 2070.

3. Yes, because it is so great to have games falling under 30 FPS in favor of a damn effect; this is something that has plagued consoles for generations.

4. The XBO X lacks Vega's features; it was just a damn huge Polaris with lower-clocked CUs. It wasn't new or revolutionary, it was just a beefier GPU.

5. Dude, stop. The only problem the Pro had is that it is weak, period, nothing more, nothing less. Give the Pro more bandwidth, more RAM, better cooling, and hike those clock speeds to match the RX 580, and you would have the same results as Scorpio, give or take, because the RX 580 has beaten Scorpio in many games using fewer CUs and higher clock speeds.

The PS4 Pro just happens to be low-clocked for an obvious reason: Sony was aiming for a reasonable upgrade. For $100 more you got a GPU basically twice as powerful, which was OK. On the Xbox One side, when the Xbox One X landed the XBO was being sold for $250, which means the XBO X's price of admission for better graphics was substantially higher, which is why few people jumped in.

1. It's the price to pay for better graphics quality.

2. I posted the 2080 Ti graph result, the 2070's 4K numbers in the text, and a direct link to the source article.

3. Battlefield V didn't use other speed-up tricks like Turing's Rapid Packed Math and Variable Rate Shading. VRS and DXR hardware features come with RDNA 2.

There's potential to increase BVH search tree (the geometry data structure used for intersection tests) performance by pairing the 8-core Zen v2's fast 36 MB L3 cache (fast SRAM, not slow ESRAM) with the GPU's RT cores, hence lessening external memory access rates (see the BVH traversal sketch at the end of this post).

On PC, this CPU's 36 MB L3 cache is used for large databases processed by the CPU, but the machine needs BVH search-tree hardware to search that 36 MB cache (which contains the geometry data) instead of the CPU.

4. Too bad the PS4 Pro didn't replicate equal results with the RX 480. Your argument is debunked.

5. RDR2 shows the X1X's graphics pipeline improvements over the PS4 Pro, when RDR2's pixel count scales with TFLOPS between the PS4 and PS4 Pro. Your argument is debunked.

Define these "many games" at 4K resolution and without CPU-bound arguments.

Try again.
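
For readers unfamiliar with what a "BVH search" actually does, here is a minimal, hypothetical sketch of bounding-volume-hierarchy traversal for a single ray. The `AABB`, `BVHNode`, and `traverse` names are illustrative only; this is a generic textbook traversal, not AMD's or NVIDIA's actual hardware scheme:

```python
# Minimal, illustrative BVH (bounding volume hierarchy) traversal for one ray.
# Generic textbook sketch; not any vendor's hardware implementation.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AABB:
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

    def hit(self, origin, inv_dir) -> bool:
        """Slab test: does the ray intersect this box?"""
        tmin, tmax = 0.0, float("inf")
        for axis in range(3):
            t1 = (self.lo[axis] - origin[axis]) * inv_dir[axis]
            t2 = (self.hi[axis] - origin[axis]) * inv_dir[axis]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
        return tmin <= tmax

@dataclass
class BVHNode:
    box: AABB
    left: Optional["BVHNode"] = None
    right: Optional["BVHNode"] = None
    triangles: Optional[List[int]] = None   # leaf: indices into a triangle list

def traverse(root: BVHNode, origin, direction) -> List[int]:
    """Return candidate triangle indices the ray might hit.
    Each node visit is a memory fetch; keeping the tree resident in a large
    on-chip cache (the L3 idea above) reduces trips to external memory."""
    inv_dir = tuple(1.0 / d if d != 0 else 1e30 for d in direction)
    candidates, stack = [], [root]
    while stack:
        node = stack.pop()
        if not node.box.hit(origin, inv_dir):
            continue                          # ray misses this subtree entirely
        if node.triangles is not None:        # leaf node: collect candidates
            candidates.extend(node.triangles)
        else:                                 # internal node: descend into children
            stack.extend(n for n in (node.left, node.right) if n)
    return candidates
```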


#69  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

X1X's RDR2 has a 2X pixel count over the PS4 Pro, while the same RDR2 game scales by 2X pixels between the PS4 and PS4 Pro, which matched the TFLOPS difference between the PS4 and PS4 Pro.

Lower graphics pipeline latency is in vogue, not just the TFLOPS argument.

The XBO has ESRAM-vs-DDR3 programming difficulty-curve issues.

For a budget box, I prefer a GPU bias with enough Zen v2 CPU cores. For next-gen games, MS's 120 fps PR indicates similar CPU latency and compute power goals. Notice the word "indicates", which is different from "confirmed".

The PS4's MGS has 2X the pixels over the Xbox One, plus dynamic skies.

How much of a gap is that? Be HONEST.

Now, considering that you and MS spent many years defending a miserable 150 MHz difference in CPU, how BIG do you think the difference will be if the PS5 actually has 8 cores / 16 threads vs Scarlet having 4 cores / 8 threads?

XBO's memory architecture is garbage.

Again, X1X's RDR2 has a 2X pixel count over the PS4 Pro, while the same RDR2 game scales by 2X pixels between the PS4 and PS4 Pro, which matched the TFLOPS difference between the PS4 and PS4 Pro.

XBO's RDR2 pixel count scales accordingly.


#70 Pedro
Member since 2002 • 69353 Posts

So much factual speculation. ;)