The top-end RTX GPU struggles to hit 60fps at 1080p (Update: People are MAD about NVIDIA RTX... JayzTwoCents thinks this is why)

#101 Creepywelps
Member since 2015 • 2964 Posts

@GiveMeSomething: The 2070 will be slightly faster than a 1080, the 2080 slightly faster than a 1080 Ti, and the 2080 Ti will easily be the strongest GPU on the market. I fail to see how it's a downgrade. You can say you expected a larger jump in performance if you'd like, but calling it a downgrade is completely false.

#102 GarGx1
Member since 2011 • 10934 Posts

Lol, people going off the rails over a new GPU with completely new tech and underdeveloped drivers, running a game that isn't finished.

#103  Edited By DaVillain  Moderator
Member since 2014 • 56105 Posts

@GarGx1 said:

Lol, people going off the rails over a new GPU with completely new tech and underdeveloped drivers, running a game that isn't finished.

That's what I've been saying in another thread. This isn't about defending Nvidia, but this is of course a new GPU we're all talking about. We still gotta wait for actual benchmarks before writing it off. If the benchmarks prove to be great and surpass the 1080 Ti, I'll consider upgrading next year, once the hype has died down and hopefully the prices are reasonable.

I'm just remaining optimistic on the whole RTX.

#104 GarGx1
Member since 2011 • 10934 Posts
@davillain- said:
@GarGx1 said:

Lol, people going off the rails over a new GPU with completely new tech and underdeveloped drivers, running a game that isn't finished.

That's what I've been saying in another thread. This isn't about defending Nvidia, but this is of course a new GPU we're all talking about. We still gotta wait for actual benchmarks before writing it off. If the benchmarks prove to be great and surpass the 1080 Ti, I'll consider upgrading next year, once the hype has died down and hopefully the prices are reasonable.

I'm just remaining optimistic on the whole RTX.

I'm certainly not going to rush out and preorder one of those cards, that's for sure, but I'm not going to dismiss them offhand because of some random guy's tweet at a reveal conference.

#105 slimdogmilionar
Member since 2014 • 1343 Posts

Ray tracing is new and taxes the GPU; it takes common sense to know this. I'm not buying a 2070 or 2080 for ray tracing. I'm buying one so I can run 4K 60 on a single card.

I'm not gonna buy at launch; I'll probably wait a few months and just upgrade my CPU and mobo for now, then around Xmas I'll try to pick one up with my bonus.

#106 N64DD
Member since 2015 • 13167 Posts

Can anybody agree with me it's a good time for AMD to come in and fill the void?

#107  Edited By Juub1990
Member since 2013 • 12620 Posts
@n64dd said:

Can anybody agree with me it's a good time for AMD to come in and fill the void?

It always is, but it seems AMD can't win on both the CPU and GPU fronts. If it competes in one, it's losing in the other. Kinda understandable, because Intel and NVIDIA each operate in just one of these markets and are both richer than AMD. Once upon a time it was AMD/Intel and ATI/NVIDIA.

#108  Edited By schu
Member since 2003 • 10191 Posts

The benchmark numbers look really good. I have no clue what people are bitching about TBH.

The RTX series does 50% better at 4K using current rendering methods, and 75% better using DLSS.

Seems well worth it in my book. Not sure what planet people are living on where that is bad.

I agree the prices are too high, but people are willing to pay it, so what are you going to do?

#109 ronvalencia
Member since 2008 • 29612 Posts

@jasonofa36 said:

@ronvalencia: Okay? But I don't get any of what you just said.

What's the problem?

GPUs are made of functional blocks, e.g.:

1. Geometry and rasterization units, i.e. mass floating-point-to-integer pixel conversion hardware.

2. Compute shader units (the TFLOPS of the marketing), which are useless without read/write units.

3. Texture read/write units and texture blending units. Collectively these are TMUs (texture mapping units); compute shaders usually use this read/write path.

4. Graphics read/write units and graphics blending units. Collectively these are ROPs (raster operation units); pixel shaders usually use this read/write path. This is the classic GPU hardware.

Workloads like cryptocurrency mining use the point 2 path. Vega 64 is competitive against GP102 (GeForce GTX 1080 Ti) on this path, and AMD-optimized games are biased towards it.

Traditional gaming workloads are biased towards the point 1 and point 4 paths. This is where Vega 64 is inferior to the GTX 1080 Ti, i.e. Vega 64 has GP104 (GTX 1080) level classic hardware paired with GP102-level compute hardware.
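
To put rough numbers on those two paths, here's a back-of-envelope sketch in Python. The unit counts and boost clocks are approximate public specs I'm supplying, not figures from this thread, so treat the output as ballpark only:

```python
# Rough comparison of the compute path (point 2) vs. the classic pixel
# path (point 4). Unit counts and boost clocks are approximate public
# specs; treat the output as ballpark only.

def tflops(shaders, ghz):
    return shaders * 2 * ghz / 1000.0   # 2 FLOPs per shader per clock (FMA)

def gpix_per_s(rops, ghz):
    return rops * ghz                   # 1 pixel per ROP per clock

specs = {
    "Vega 64":     {"shaders": 4096, "rops": 64, "ghz": 1.55},
    "GTX 1080 Ti": {"shaders": 3584, "rops": 88, "ghz": 1.58},
}

for name, s in specs.items():
    print(f"{name}: ~{tflops(s['shaders'], s['ghz']):.1f} TFLOPS FP32, "
          f"~{gpix_per_s(s['rops'], s['ghz']):.0f} Gpixel/s fill")
```

On those rough numbers the compute path is close to a wash (~12.7 vs ~11.3 TFLOPS) while the ROP path favors GP102 by roughly 40 percent, which is the shape of the claim above.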

#110 ronvalencia
Member since 2008 • 29612 Posts

@pc_rocks said:
@ronvalencia said:
@pc_rocks said:
@ronvalencia said:

@Addict187: It's the first generation RTX design.

https://www.eurogamer.net/articles/digitalfoundry-the-making-of-killzone-shadow-fall

PS4's Killzone Shadow Fall has real-time ray-traced reflections via near-metal AMD GCN/HSA.

It's called RLR (Realtime Local Reflections), which Nvidia also talked about in their keynote, as did the BFV devs. Secondly, no, KZ:SF wasn't the first to do it; Crysis 2 did it in 2011, and RLR isn't really ray tracing in the traditional sense.

https://www.eurogamer.net/articles/digitalfoundry-the-making-of-killzone-shadow-fall

"What we do on-screen for every pixel we run a proper ray-tracing - or ray-marching - step. We find a reflection vector, we look at the surface.

My argument wasn't about being the first.

False. They did exactly what Crytek did with Crysis 2. Like I said, it's Realtime Local Reflections; they did nothing new. It's another pathetic attempt by GG/Sony devs to give the impression they're doing something new. The other article about it later corrected that as well and gave Crysis 2 the credit it deserved. And I stand corrected: it's not ray tracing in the traditional sense, because it only reflects what's on the screen; it doesn't reflect anything not currently in the scene. Nvidia and the BF V devs pointed out the same thing. Hell, DF in their early RTX demo video said the same thing.

Realtime Local Reflections

Real-time, accurate reflections are one of the most demanding effects in modern-day game engines. In past versions of Crytek’s CryEngine, the options have been limited to planar reflections, as used for water, and cube maps, an extremely old technique that is only capable of producing low-resolution, poorly defined reflections that can’t be recursively reflected.

For the DirectX 11 Ultra Upgrade Crytek has implemented Realtime Local Reflections, which approximate ray-traced High Dynamic Range reflections, the technique used by Pixar and co. to ensure absolute accuracy when rendering their animated movie scenes. The approximated Realtime Local Reflections are able to self-reflect and reflect images from other surfaces also, and are of course fast enough to be rendered in real-time on modern-day technology (ray-traced reflections render at only a few frames per second on the most powerful of systems).

This upgrade is best demonstrated by the first example below, in which the soldier and his surroundings are reflected in the panel propped up against the wall, and the glossy floor. In the second example the entire scene is reflected to a high degree of accuracy.

Source

False. Your Crytek link doesn't mention the actual method behind the approximated ray-traced reflections.

#112  Edited By ronvalencia
Member since 2008 • 29612 Posts

@jasonofa36 said:

@schu: That's true. It sucks that Nvidia beat AMD with this, and now they're basically pricing it the way they like just cause there's no competition.

https://www.techpowerup.com/gpudb/3268/radeon-pro-vega-20

AMD's Radeon Pro Vega 20 has a 2 GHz boost clock, which is 30 percent higher than the old Vega 64. Its compute units and classic GPU hardware performance are 30 percent higher purely from the clock speed increase. Without factoring in the 1 TB/s memory bandwidth, its estimated performance lands somewhere between GTX 1080 Ti and near-Titan V levels.

Navi has GDDR6 for the mainstream gaming market, and AMD CPU engineers are helping RTG engineers chase higher clock speeds.
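
For what it's worth, the clock math checks out. A minimal sketch, assuming the ~1536 MHz Vega 64 figure used later in this thread and Vega 64's ~12.7 TFLOPS FP32 spec (my assumption, not stated here):

```python
# Clock uplift from Vega 64 (~1536 MHz) to Radeon Pro Vega 20 (~2000 MHz),
# and a naive linear scaling of Vega 64's ~12.7 TFLOPS FP32.
vega64_mhz, vega20_mhz = 1536, 2000
speedup = vega20_mhz / vega64_mhz
print(f"clock uplift: {(speedup - 1) * 100:.0f}%")      # ~30%
print(f"scaled compute: ~{12.7 * speedup:.1f} TFLOPS")  # ~16.5 TFLOPS
```

~16.5 TFLOPS sits above the 1080 Ti's ~11.3 and near the Titan V's ~14.9, consistent with the estimate above (memory bandwidth and real-game scaling aside).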

#113  Edited By ronvalencia
Member since 2008 • 29612 Posts

@gamecubepad said:

@ronvalencia:

If ROPs are the answer, how does the RTX 2080 do so well with only 64? It's seemingly faster than a 1080 Ti with fewer ROPs, fewer CUDA cores, fewer TMUs, and less RAM.

I love AMD and their products serve me well for the price, but I just don't think they have the cash or the people to keep up with Nvidia and fight Intel at the same time. Ryzen is amazing. Polaris 10 was good. Hopefully Navi will make some waves.

NVIDIA has superior delta color compression with its ROPs.

Pascal's low-hanging fruit is L2 cache bandwidth. Hawaii's 1 MB L2 cache has 1 TB/s of bandwidth (not connected to the ROPs), while the GTX 1080 Ti's 2.88 MB (2,883,584 bytes) L2 cache is just above 1 TB/s (connected to the ROPs). NVIDIA could improve this area.

TU104 has a 4 MB L2 cache(?) while TU102 has 6 MB; GP104 has 2 MB. Turing's tiled-cache rendering could have been improved.
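
Cache sizes and bandwidths like these come from sweep-style microbenchmarks (the CUDA runs and the gist cited later in this thread do this on GPUs). As a CPU analogue, here's a minimal, illustrative Python sketch of the idea: time passes over growing working sets and watch effective bandwidth fall once a set no longer fits in a cache level:

```python
import time
import numpy as np

# Sweep working-set sizes; effective bandwidth drops when the set
# spills out of a cache level. Illustrative CPU analogue only -- real
# GPU L2 measurements use CUDA kernels, not numpy.
for kib in [64, 256, 1024, 4096, 16384, 65536]:
    a = np.ones(kib * 1024 // 8)              # float64 array of `kib` KiB
    t0 = time.perf_counter()
    for _ in range(50):
        a.sum()                               # one full read pass
    dt = time.perf_counter() - t0
    print(f"{kib:>6} KiB: ~{50 * a.nbytes / dt / 1e9:.1f} GB/s effective")
```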

#114 Juub1990
Member since 2013 • 12620 Posts
@ronvalencia said:

NVIDIA has superior delta color compression with its ROPs.

Pascal's low-hanging fruit is L2 cache bandwidth. Hawaii's 1 MB L2 cache has 1 TB/s of bandwidth (not connected to the ROPs), while the GTX 1080 Ti's 2.88 MB (2,883,584 bytes) L2 cache is just above 1 TB/s (connected to the ROPs). NVIDIA could improve this area.

TU104 has a 4 MB L2 cache(?) while TU102 has 6 MB; GP104 has 2 MB. Turing's tiled-cache rendering could have been improved.

Dude stop it, you're a fraud.

#115  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Juub1990 said:
@ronvalencia said:

NVIDIA has superior delta color compression with its ROPs.

Pascal's low-hanging fruit is L2 cache bandwidth. Hawaii's 1 MB L2 cache has 1 TB/s of bandwidth (not connected to the ROPs), while the GTX 1080 Ti's 2.88 MB (2,883,584 bytes) L2 cache is just above 1 TB/s (connected to the ROPs). NVIDIA could improve this area.

TU104 has a 4 MB L2 cache(?) while TU102 has 6 MB; GP104 has 2 MB. Turing's tiled-cache rendering could have been improved.

Dude stop it, you're a fraud.

You're the real fraud. I'm just running CUDA memory benchmarks, fool.

Hint: the GTX 980 Ti has inferior L2 cache bandwidth (~2/3 of the R9-290X's at 1 GHz), but the AMD GPU can't use it for ROPs read/write. My GTX 1080 Ti's L2 cache bandwidth results are from an overclocked card. DCC is not used for cryptocurrency.

You're the fuking fraud.

#116 Addict187
Member since 2008 • 1128 Posts
[Embedded video: JayzTwoCents on the RTX launch]

#117 ronvalencia
Member since 2008 • 29612 Posts
@Juub1990 said:
@ronvalencia said:

NVIDIA has superior delta color compression with its ROPs.

Pascal's low-hanging fruit is L2 cache bandwidth. Hawaii's 1 MB L2 cache has 1 TB/s of bandwidth (not connected to the ROPs), while the GTX 1080 Ti's 2.88 MB (2,883,584 bytes) L2 cache is just above 1 TB/s (connected to the ROPs). NVIDIA could improve this area.

TU104 has a 4 MB L2 cache(?) while TU102 has 6 MB; GP104 has 2 MB. Turing's tiled-cache rendering could have been improved.

Dude stop it, you're a fraud.

https://wccftech.com/nvidia-geforce-rtx-2080-turing-tu104-gpu-pcb-exposed/

WCCFTech claims TU104 has a 4 MB L2 cache, which is twice GP104's 2 MB.

https://gist.github.com/nelson-liu/623eb54d977c98db005eaf2fbc449238 has data on the GTX 1080 Ti's L2 cache size, which is 2,883,584 bytes, or 2.88 MB. DCC can yield up to twice the effective storage, but it's not used for crypto compute.

You f*koff.

#118 Juub1990
Member since 2013 • 12620 Posts

@ronvalencia: You already quoted me lol. Is it because you were lying the first time and got caught with your pants down? Stop lying and pretending you know what you’re talking about.

#119 JasonOfA36
Member since 2016 • 3725 Posts

@Juub1990: I don't get any of what he says, and I don't think any of it matters for ray tracing or anything.

#120 DaVillain  Moderator
Member since 2014 • 56105 Posts

@Addict187 said:
[Embedded video: JayzTwoCents on the RTX launch]

JayzTwoCents isn't wrong. Just wait for the reviews, benchmarks, and don't pre-order at all!

#121 Juub1990
Member since 2013 • 12620 Posts

@jasonofa36: Because he’s saying nonsense as usual.

#122  Edited By Phreek300
Member since 2007 • 672 Posts

I will most likely be skipping this generation of cards. The ray tracing techniques are in their infancy, and the jump in rasterization from my 1080 Ti is just not there. The prices are just insane as well. I hope the 3080 Ti will be better. Hell, at this point I am waiting to see what Radeon brings to the table. Most likely not enough to make me jump, but hey, one can hope.

#123 superbuuman
Member since 2010 • 6400 Posts
@n64dd said:

Can anybody agree with me it's a good time for AMD to come in and fill the void?

That's what's needed anyway... at least we'll have Intel soon too... triforce! :P

#124 PC_Rocks
Member since 2018 • 8471 Posts

@ronvalencia said:
@pc_rocks said:
@ronvalencia said:
@pc_rocks said:
@ronvalencia said:

@Addict187: It's the first generation RTX design.

https://www.eurogamer.net/articles/digitalfoundry-the-making-of-killzone-shadow-fall

PS4's Killzone Shadow Fall has real-time ray-traced reflections via near-metal AMD GCN/HSA.

It's called RLR (Realtime Local Reflections), which Nvidia also talked about in their keynote, as did the BFV devs. Secondly, no, KZ:SF wasn't the first to do it; Crysis 2 did it in 2011, and RLR isn't really ray tracing in the traditional sense.

https://www.eurogamer.net/articles/digitalfoundry-the-making-of-killzone-shadow-fall

"What we do on-screen for every pixel we run a proper ray-tracing - or ray-marching - step. We find a reflection vector, we look at the surface.

My argument wasn't about being the first.

False. They did exactly what Crytek did with Crysis 2. Like I said, it's Realtime Local Reflections; they did nothing new. It's another pathetic attempt by GG/Sony devs to give the impression they're doing something new. The other article about it later corrected that as well and gave Crysis 2 the credit it deserved. And I stand corrected: it's not ray tracing in the traditional sense, because it only reflects what's on the screen; it doesn't reflect anything not currently in the scene. Nvidia and the BF V devs pointed out the same thing. Hell, DF in their early RTX demo video said the same thing.

Realtime Local Reflections

Real-time, accurate reflections are one of the most demanding effects in modern-day game engines. In past versions of Crytek’s CryEngine, the options have been limited to planar reflections, as used for water, and cube maps, an extremely old technique that is only capable of producing low-resolution, poorly defined reflections that can’t be recursively reflected.

For the DirectX 11 Ultra Upgrade Crytek has implemented Realtime Local Reflections, which approximate ray-traced High Dynamic Range reflections, the technique used by Pixar and co. to ensure absolute accuracy when rendering their animated movie scenes. The approximated Realtime Local Reflections are able to self-reflect and reflect images from other surfaces also, and are of course fast enough to be rendered in real-time on modern-day technology (ray-traced reflections render at only a few frames per second on the most powerful of systems).

This upgrade is best demonstrated by the first example below, in which the soldier and his surroundings are reflected in the panel propped up against the wall, and the glossy floor. In the second example the entire scene is reflected to a high degree of accuracy.

Source

False. Your Crytek link doesn't mention the actual method behind the approximated ray-traced reflections.

Ummm... what? They explicitly mentioned 'approximate ray-traced HDR reflections'. LMAO, the DC. Fact is, RLR is universally calculated like that; GG did nothing new. The only difference is Crytek didn't really hype it as ray tracing. Sony did, to woo their brain-dead fanbase into thinking they were doing something new. That's exactly what happened: cows began bragging about it as 'ray tracing' without knowing who did it first, or that it had existed for years.

Here's an article that got caught up in Sony's hype machine but later corrected it. Actually, the Crysis 2/3 implementation was more advanced because it featured actual area and volumetric lights, unlike KZ:SF, which said it would use area lights but in the actual game uses little to none.

Now, coming back to the word 'ray tracing': neither Crysis 2/3's nor KZ:SF's implementation is ray tracing in the traditional sense, because you're casting rays in screen space, not in world space (see the sketch below). Rays have been in use in games for decades; examples include shooting a ray to determine an NPC's line of sight, occlusion, or the distance between NPCs and the player character.
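
To make the screen-space point concrete, here's a minimal, illustrative Python sketch of the reflection march both games are described as doing. Every name and value is invented for this example; real implementations are shader code sampling a depth buffer. The key property is the early return: once the ray leaves the frame, there is simply no data to reflect, which is exactly why off-screen objects never show up.

```python
import numpy as np

# Tiny synthetic "frame": a depth buffer and a color buffer.
H = W = 64
FAR = 1.0
depth = np.full((H, W), FAR)       # far plane everywhere...
depth[40:, :] = 0.5                # ...except a wall closer to the camera
color = np.zeros((H, W, 3))
color[40:, :] = [1.0, 0.2, 0.2]    # the wall is red

def reflect(v, n):
    # Mirror direction v about surface normal n.
    return v - 2.0 * np.dot(v, n) * n

def ssr(px, py, pz, view_dir, normal, steps=128, step=0.5):
    ray = reflect(view_dir, normal)
    x, y, z = float(px), float(py), pz
    for _ in range(steps):
        x += ray[0] * step                 # march across the screen...
        y += ray[1] * step
        z += ray[2] * step * 0.05          # ...and through depth
        xi, yi = int(x), int(y)
        if not (0 <= xi < W and 0 <= yi < H):
            return None                    # ray left the screen: no data
        if depth[yi, xi] < FAR and z >= depth[yi, xi]:
            return color[yi, xi]           # ray passed behind geometry: hit
    return None

# A floor pixel reflecting "up-screen" toward the wall:
hit = ssr(32, 10, 0.4, view_dir=np.array([0.0, -1.0, 0.1]),
          normal=np.array([0.0, 1.0, 0.0]))
print("reflected color:", hit)             # red -> the march found the wall
```

Hardware ray tracing à la RTX instead intersects rays against world-space geometry (via a BVH), so a reflected ray can hit objects that were never rasterized on screen; that's the practical difference being argued here.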

#125 N64DD
Member since 2015 • 13167 Posts

@superbuuman said:
@n64dd said:

Can anybody agree with me it's a good time for AMD to come in and fill the void?

That's what's needed anyway... at least we'll have Intel soon too... triforce! :P

I loved the AMD/Nvidia era.

Even the AMD equivalent of the 6800 Ultra was solid.

#126 tgob89
Member since 2017 • 2153 Posts

@recloud said:

$1600 for this?

PS5 it is!

#127  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Juub1990 said:

@ronvalencia: You already quoted me lol. Is it because you were lying the first time and got caught with your pants down? Stop lying and pretending you know what you’re talking about.

https://videocardz.com/newz/nvidia-upgrades-l1-and-l2-caches-for-turing

NVIDIA improved the L1 and L2 caches for Turing GPUs, e.g.:

L1 cache bandwidth improved by 2x.

L2 cache storage improved by 2x.

I'm right and you're wrong: NVIDIA went after Pascal's low-hanging fruit, i.e. the cache.

#128  Edited By ronvalencia
Member since 2008 • 29612 Posts

@n64dd said:
@superbuuman said:
@n64dd said:

Can anybody agree with me it's a good time for AMD to come in and fill the void?

That's what's needed anyway... at least we'll have Intel soon too... triforce! :P

I loved the AMD/Nvidia era.

Even the AMD equivalent of the 6800 Ultra was solid.

https://wccftech.com/amd-confirms-new-7nm-radeon-graphics-cards-launching-in-2018/

A Radeon Vega 20 gaming card with 20 TFLOPS of FP32: without a CU count increase, this Vega 20 would need about a 2.45 GHz clock speed, which would also improve the classic GPU hardware by about 56 percent.
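
A quick sanity check of that 2.45 GHz figure, assuming Vega 20 keeps Vega 64's 4096 shaders (64 CUs) at 2 FLOPs per clock:

```python
# Clock needed for 20 TFLOPS FP32 with 4096 shaders doing FMA (2 FLOPs/clk).
shaders, target_tflops = 4096, 20.0
ghz = target_tflops * 1e3 / (shaders * 2)
print(f"required clock: ~{ghz:.2f} GHz")                   # ~2.44 GHz
print(f"uplift vs 1546 MHz boost: +{(ghz * 1000 / 1546 - 1) * 100:.0f}%")
```

That lands within a couple of points of the ~56 percent figure above, depending on which Vega 64 boost clock you scale from.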

#129 Juub1990
Member since 2013 • 12620 Posts
@ronvalencia said:

https://videocardz.com/newz/nvidia-upgrades-l1-and-l2-caches-for-turing

NVIDIA improved the L1 and L2 caches for Turing GPUs, e.g.:

L1 cache bandwidth improved by 2x.

L2 cache storage improved by 2x.

I'm right and you're wrong: NVIDIA went after Pascal's low-hanging fruit, i.e. the cache.

Took you 10 days to come up with a response. Pathetic lol.

#130 Howmakewood
Member since 2015 • 7702 Posts

wccftech links should be banned.

#131  Edited By Shewgenja
Member since 2009 • 21456 Posts

I don't understand this new breed of PC gamers whatsoever. Crysis was a slideshow on the 8000 series, but everyone was excited for DX10 at the time despite that. Cards got better and games ran better. This isn't a new thing, but seeing gamers revolt against new tech sure af is.

I think this is more about the price of cards than it is about the tech. High end cards are priced to the moon these days, so gamers make impossible demands on them. It's a bad situation that could stunt progress.

Everyone who says "I just want high framerates" is going to make technological progress slow to a crawl.

#132  Edited By Kali-B1rd
Member since 2018 • 2241 Posts

@Shewgenja said:

I don't understand this new breed of PC gamers whatsoever. Crysis was a slideshow on the 8000 series, but everyone was excited for DX10 at the time despite that. Cards got better and games ran better. This isn't a new thing, but seeing gamers revolt against new tech sure af is.

I think this is more about the price of cards than it is about the tech. High end cards are priced to the moon these days, so gamers make impossible demands on them. It's a bad situation that could stunt progress.

Everyone who says "I just want high framerates" is going to make technological progress slow to a crawl.

Agreed.

If it needs to be done, it needs to be done.

But given the current climate, where GPU prices are only now back to where they were TWO YEARS AGO and RAM is still extortionate, I would hope gaming companies would have at least thrown a "catch up" bone to all the people who simply can't put that kind of money down on a graphics card.

I think it was the wrong time to go "Oh hey, know that GPU drought? Well, we're going to make it last longer by stagnating old card prices and selling new £1000+ cards with new features and low performance gains per £."

I'd honestly be shocked if PC gaming growth hasn't stalled in the last 1-2 years given everything going on.

I'll just stick with the 1080 Ti until these ray tracing cards mature.

#133 N64DD
Member since 2015 • 13167 Posts

@ronvalencia: Everything you just said is meaningless.

#134 ronvalencia
Member since 2008 • 29612 Posts

@Juub1990 said:
@ronvalencia said:

https://videocardz.com/newz/nvidia-upgrades-l1-and-l2-caches-for-turing

NVIDIA improved the L1 and L2 caches for Turing GPUs, e.g.:

L1 cache bandwidth improved by 2x.

L2 cache storage improved by 2x.

I'm right and you're wrong: NVIDIA went after Pascal's low-hanging fruit, i.e. the cache.

Took you 10 days to come up with a response. Pathetic lol.

#135  Edited By ronvalencia
Member since 2008 • 29612 Posts

@n64dd said:

@ronvalencia: Everything you just said is meaningless.

Everything you just said is useless.

#136  Edited By ronvalencia
Member since 2008 • 29612 Posts

@howmakewood said:

wccftech links should be banned.

https://www.techpowerup.com/gpudb/3233/radeon-instinct-vega

Radeon Instinct Vega 20 already has a 2025 MHz clock speed at ~225 watts. That's a ~32 percent clock speed increase over Vega 64's 1536 MHz at ~295 watts.

Classic GPU hardware performance increases by 32 percent, along with the CUs' TFLOPS, via the clock speed increase.

AMD switched from GlobalFoundries' 14 nm LPP (Vega 10) to TSMC's 7 nm process (Vega 20).

225 watts --> 295 watts = 31 percent increase

225 watts --> 400 watts = 77 percent increase

2025 MHz --> 2445 MHz = 20 percent increase

RX Vega 64's TFLOPS increase should have been tied to an actual memory bandwidth increase, instead of relying on tiled-cache rendering and delta color compression improvements on top of Fury X's memory bandwidth.
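
The percentage deltas in that list are straightforward to verify; a one-line helper (the post rounds the figures down):

```python
# Verify the quoted deltas (printed to one decimal place).
pct = lambda a, b: (b / a - 1) * 100
print(f"225 W -> 295 W: +{pct(225, 295):.1f}%")          # ~31.1%
print(f"225 W -> 400 W: +{pct(225, 400):.1f}%")          # ~77.8%
print(f"2025 MHz -> 2445 MHz: +{pct(2025, 2445):.1f}%")  # ~20.7%
```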

#137 N64DD
Member since 2015 • 13167 Posts
@ronvalencia said:
@n64dd said:

@ronvalencia: Everything you just said is meaningless.

Everything you just said is useless.

Haven't you been absolutely owned multiple times and left the forums for stretches at a time?

You post graphs, numbers, stats, and nothing ever comes to fruition.

I'll just wait until you exile yourself again.

#138  Edited By ronvalencia
Member since 2008 • 29612 Posts

@n64dd said:
@ronvalencia said:
@n64dd said:

@ronvalencia: Everything you just said is meaningless.

Everything you just said is useless.

Haven't you been absolutely owned multiple times and left the forums for stretches at a time?

You post graphs, numbers, stats, and nothing ever comes to fruition.

I'll just wait until you exile yourself again.

BS. Xbox One X has a near-Vega-style ROPs design, i.e. 2 MB tiled-cache rendering, and there are game results rivaling the GTX 1070 and beating an overclocked RX 580/R9 390X.

Vega 56/64 has a unified 4 MB L2 cache for the CUs and ROPs.

The X1X's GPU has a 2 MB L2 cache for the CUs and a 2 MB render cache for the ROPs (a different design from the Polaris uArch). It can tile-render from the CUs to the ROPs; Polaris can only tile-render via the CUs, not the ROPs.

#139 N64DD
Member since 2015 • 13167 Posts
@ronvalencia said:
@n64dd said:
@ronvalencia said:
@n64dd said:

@ronvalencia: Everything you just said is meaningless.

Everything you just said is useless.

Haven't you been absolutely owned multiple times and left the forums for stretches at a time?

You post graphs, numbers, stats, and nothing ever comes to fruition.

I'll just wait until you exile yourself again.

BS. Xbox One X has a near-Vega-style ROPs design, i.e. 2 MB tiled-cache rendering, and there are game results rivaling the GTX 1070 and beating an overclocked RX 580/R9 390X.

Vega 56/64 has a unified 4 MB L2 cache for the CUs and ROPs.

The X1X's GPU has a 2 MB L2 cache for the CUs and a 2 MB render cache for the ROPs (a different design from the Polaris uArch). It can tile-render from the CUs to the ROPs; Polaris can only tile-render via the CUs, not the ROPs.

LOL you're still going on that.

#140  Edited By ronvalencia
Member since 2008 • 29612 Posts

@n64dd said:
@ronvalencia said:
@n64dd said:
@ronvalencia said:

Everything you just said is useless.

Haven't you been absolutely owned multiple times and left the forums for stretches at a time?

You post graphs, numbers, stats, and nothing ever comes to fruition.

I'll just wait until you exile yourself again.

BS. Xbox One X has a near-Vega-style ROPs design, i.e. 2 MB tiled-cache rendering, and there are game results rivaling the GTX 1070 and beating an overclocked RX 580/R9 390X.

Vega 56/64 has a unified 4 MB L2 cache for the CUs and ROPs.

The X1X's GPU has a 2 MB L2 cache for the CUs and a 2 MB render cache for the ROPs (a different design from the Polaris uArch). It can tile-render from the CUs to the ROPs; Polaris can only tile-render via the CUs, not the ROPs.

LOL you're still going on that.

You continued "Haven't you been absolutely owned multiple times" as your "you're still going on that." Hypocrite.

GS forum is not mandatory.