scatteh316's forum posts


#1 scatteh316

@tormentos said:
@ronvalencia said:

Your Battlefield V RT benchmarks seem to be old.

https://www.guru3d.com/news-story/december-4-battlefield-v-raytracing-dxr-performance-patch-released-(benchmarks).html

The RTX 2080 Ti at 4K with RT ultra reaches 41 fps, and ~70 fps at 1440p.

RT consumes memory bandwidth, which is shared with the shader cores.

The RTX 2070 at 4K with DXR low lands at 36 fps.

Turing's rapid packed math and variable rate shading features weren't used.

Remember, this is an RTX 2080 Ti; neither Scarlett nor the PS5 will get something like this.

Remember, we do not know the performance impact of running RT on AMD hardware, so stop citing the performance hit on Nvidia, you mug.


#2  Edited By scatteh316

@ronvalencia said:
@scatteh316 said:
@Pedro said:
@scatteh316 said:

I know RT improves things....but as per my comment "I'm more than happy with the quality of reflections and lighting we currently have"

At least when doing RT via shaders there's the option to just turn it off and use that power for everything else...... an option not available with hardware acceleration.

I would have liked to have seen a voxel approach used next generation as opposed to RT, as it's less resource-heavy and would still give a nice improvement over current techniques...... it also would have been the perfect stopgap on the way up to RT.

But it will ultimately depend on how good the consoles are at RT....... if they have the performance to run it well then maybe...just maybe.....it'll be OK.

You would think that if RT is hardware accelerated the core performance would be unaffected by its implementation, but that is not the case. I reckon that Nvidia's current accelerated solution (which may also be AMD's) is too limited, relying on other components of the GPU, hence the performance hit. The ideal situation is true hardware acceleration that runs parallel to the existing render pipeline.

In a sense it would be affected by RT......but the overall raw performance of the die wouldn't be.

Say Microsoft start with a 40 CU Navi GPU and have to sacrifice 10 CUs for RT...... that'll be a 25% reduction in raw GPU performance, but with this set-up running RT won't affect the utilisation of those remaining 30 CUs........ but turning off RT won't give more raw performance.... But as long as the RT cores are fast enough to make up for the reduction in CUs, it could actually work in their favour.

What's interesting is how much compute performance is going to be required for RT via shaders to match what will be possible via dedicated hardware.

If, for example, a next-generation COD game ships with RT enabled via hardware acceleration on Scarlett, it will still have 30 CUs' worth of GPU performance to handle the rest of the graphics.

Now...... what if this same COD game ships on PS5?? There could be several scenarios......

1- They ditch RT completely and have inferior lighting/shadows/reflections (but still not ugly-looking) but use all 40 CUs to push the visuals in other areas

2- They dedicate CUs to do RT via shaders.....but..... what if this requires 15 CUs to achieve parity with Scarlett's dedicated hardware? That'll leave 25 CUs for the rest of the graphics.....fewer than Scarlett!!!

If Scarlett's, and crucially AMD's, implementation of hardware-accelerated RT is robust and performs well, Scarlett may very well be the faster console in the real world.

Or have Sony and Cerny made custom tweaks to the core logic to enable better RT performance on PS5??

It's all very interesting and will be good to see how it all unfolds.

The Scarlett APU has been shown to be 380 to 400 mm².

An 8-core Zen 2 chiplet plus the 5700 XT comes to 321 mm².

TSMC 7nm+ has a 20 percent density increase when compared to the fake 7nm.

TSMC's 1st-gen 7nm is the BS 7nm.
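Putting the quoted figures side by side, a rough sketch (the mm² numbers are the ones quoted above, and the 20 percent scaling is just that claim applied as arithmetic, not any kind of spec):

```cpp
// Rough arithmetic on the die-size figures quoted above (illustration only).
#include <cstdio>

int main()
{
    const double scarlettLow  = 380.0; // quoted Scarlett APU estimate, mm^2
    const double scarlettHigh = 400.0;
    const double zen2Plus5700 = 321.0; // 8-core Zen 2 chiplet + 5700 XT, mm^2

    // Silicon left over beyond a straight Zen 2 + 5700 XT combination.
    printf("Spare area: %.0f to %.0f mm^2\n",
           scarlettLow - zen2Plus5700, scarlettHigh - zen2Plus5700);

    // If the claimed 20 percent density gain of 7nm+ holds, 380-400 mm^2
    // on 7nm+ fits roughly this much 1st-gen 7nm logic.
    printf("7nm+ equivalent: %.0f to %.0f mm^2 of 1st-gen 7nm logic\n",
           scarlettLow * 1.2, scarlettHigh * 1.2);
    return 0;
}
```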

Will you go away.... I was using 40 CUs just as an example to show CU count and how it could be affected by using dedicated RT logic....... I was not claiming they'll have 40 CUs and use the full-fat die.

So bug off!


#3  Edited By scatteh316

@tormentos said:
@scatteh316 said:

I don't remember inviting you to the conversation but I'll entertain you.

1. Consoles may offer better RT performance from being closed-box systems, or they might not.

2. The performance impact on Nvidia hardware is completely irrelevant, as the consoles are using AMD.

3. RT performance in games has improved with patches.

4. You know nothing (Jon Snow) about the next-generation machines to be able to comment on respective performance impacts, let alone attempt to compare with hardware from a completely different vendor.

Now...moving on....

If you don't want to be quoted don't post here. :)

1- Yeah, a console with a watered-down PC GPU will somehow have a better implementation of RT than the top dog of the industry. Makes sense..lol

2- No it isn't, as AMD GPUs, like Nvidia's, take a performance hit from effects in basically the same way.

3- Not even close to the point where RT is doable without a huge hit. Take into account, again, that I posted the 2080 Ti, the top dog of the industry and a GPU that I am sure will be stronger than any GPU inside the PS5 or Scarlett.

4- Neither do you, so stop assuming bullshit metrics and trying to imply that a console with a watered-down version of an AMD GPU will beat the top dog of the industry. If AMD were that good, it would have been beating Nvidia for a long time now.

Tone down your expectations.

1. Consoles have always been able to extract better performance out of weaker hardware........... That is common knowledge.... and your inability to grasp the meaning of a sentence is woefully apparent here.

2. Until Navi is released and benchmarked you are simply grasping at straws and chatting shit.

3. It has still improved, making those charts you posted completely irrelevant for your argument.

4. Lmao...typical reply from a clown who knows they lost........ I'm not assuming any metrics and never implied anything.

Turn down your bullshittery and turn up your common sense.

You are beaten again.....


#4  Edited By scatteh316
@Pedro said:

@scatteh316: It would be interesting if AMD could go the route of a unified approach in the same manner as the Xbox 360, where vertex and pixel shading now run off a unified shading unit capable of doing either. So, in execution, they could have a CU that handles ray tracing as part of its general function.

I just watched a headache-inducing video on ray tracing which covered the ray tracing pipeline using DXR. A pure ray tracing route is pretty computationally expensive, thus the utilization of hybrid rendering in the current applications. If by some miracle (I have very little expectation of this transpiring) they can hybridize the CUs so that they enable both hardware ray tracing and traditional rasterization, re-using hardware components within the pipeline of each approach and re-routing to specific hardware within the CU, that would be the ideal outcome.

We will find out all the details for certain next year. It would be an interesting showdown.

That's how Microsoft's DXR works, so I imagine that's how AMD have implemented it......... after all, Microsoft developed DXR to run as a compute workload on purpose, to allow it to work on hundreds of GPUs, since any GPU that's capable of GPGPU can run DXR code. Although whether they have the performance for it is another matter altogether.......

Nvidia's RTX, at the driver level, just intercepts the DXR compute commands and converts them into instructions the RT cores can work with.
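To make that split concrete, here's a minimal sketch using the standard D3D12 API (nothing vendor-specific; creating the device is assumed and error handling is trimmed) that asks whether the installed driver exposes a DXR path at all. How the rays actually get executed underneath, RT cores or plain compute, is the driver's business:

```cpp
// Minimal sketch: ask the D3D12 driver whether it exposes DXR at all.
// Assumes 'device' was already created elsewhere; error handling trimmed.
#include <d3d12.h>

bool SupportsDXR(ID3D12Device5* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5))))
        return false;

    // TIER_NOT_SUPPORTED means no DXR path from this driver; TIER_1_0 or
    // above means TraceRay() etc. are available, whether backed by RT
    // cores (RTX) or by shader/compute work behind the scenes.
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

Microsoft also put out an open-source fallback layer early on that translated DXR calls into ordinary compute dispatches for GPUs whose drivers reported no support, which is exactly the "runs as a compute workload" design mentioned above.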

It's going to be the hardware T 'n' L era all over again....but with RT :D


#5  Edited By scatteh316

@tormentos said:
@scatteh316 said:

In a sense it would be affected by RT......but the overall raw performance of the die wouldn't be.

Say Microsoft start with a 40 CU Navi GPU and have to sacrifice 10 CUs for RT...... that'll be a 25% reduction in raw GPU performance, but with this set-up running RT won't affect the utilisation of those remaining 30 CUs........ but turning off RT won't give more raw performance.... But as long as the RT cores are fast enough to make up for the reduction in CUs, it could actually work in their favour.

What's interesting is how much compute performance is going to be required for RT via shaders to match what will be possible via dedicated hardware.

If, for example, a next-generation COD game ships with RT enabled via hardware acceleration on Scarlett, it will still have 30 CUs' worth of GPU performance to handle the rest of the graphics.

Now...... what if this same COD game ships on PS5?? There could be several scenarios......

1- They ditch RT completely and have inferior lighting/shadows/reflections (but still not ugly-looking) but use all 40 CUs to push the visuals in other areas

2- They dedicate CUs to do RT via shaders.....but..... what if this requires 15 CUs to achieve parity with Scarlett's dedicated hardware? That'll leave 25 CUs for the rest of the graphics.....fewer than Scarlett!!!

If Scarlett's, and crucially AMD's, implementation of hardware-accelerated RT is robust and performs well, Scarlett may very well be the faster console in the real world.

Or have Sony and Cerny made custom tweaks to the core logic to enable better RT performance on PS5??

It's all very interesting and will be good to see how it all unfolds.

That would still be problematic.

And we can see scenarios where even HB RT is too much of a hit to be implemented, even at the low preset.

This is on an RTX 2080 Ti; neither the PS5 nor Scarlett will get something like this GPU, and you can see how nasty the sting to performance is even on the low preset.

HB RT doesn't mean there is no cost to performance, just that it is lower. As we can see, we are not even close to reaching a point where RT can be implemented without taking a huge hit. If Sony managed to get stronger hardware in exchange for shader-based RT, I would not even use RT if I were Sony. This is an RTX card from the top GPU maker in the industry; imagine how Scarlett or the PS5 would run this. The level of sacrifice that would be needed is simply too much.

For now I believe RT should be off the table rather than trying to force it as the next best thing while performance goes to hell and beyond.

I don't remember inviting you to the conversation but I'll entertain you.

1. Consoles may offer better RT performance from being closed-box systems, or they might not.

2. The performance impact on Nvidia hardware is completely irrelevant, as the consoles are using AMD.

3. RT performance in games has improved with patches.

4. You know nothing (Jon Snow) about the next-generation machines to be able to comment on respective performance impacts, let alone attempt to compare with hardware from a completely different vendor.

Now...moving on....


#6  Edited By scatteh316

@Grey_Eyed_Elf said:
@scatteh316 said:
@pc_rocks said:

Way to p*ss all over consolites' hopes and dreams. Finally a sensible dev who is calling things what they are rather than going for pure sensation.

If consoles went custom again PC would die........

Very true... It would make multi-platform games near impossible to run on low-to-medium PC hardware due to the architectural differences.

That said, if they did go full custom you would never see a $399 console ever again. Even the upcoming ones will more than likely come in at $499.

Yep...bang on!! Custom hardware is exciting...... but it is costly to develop and reduces the number of porting options to other platforms, which reduces the money developers can recoup...... which is bad for everyone!


#7  Edited By scatteh316

@Pedro said:
@scatteh316 said:

I know RT improves things....but as per my comment "I'm more than happy with the quality of reflections and lighting we currently have"

At least when doing RT via shaders there's the option to just turn it off and use that power for everything else...... an option not available with hardware acceleration.

I would have liked to have seen a voxel approach used next generation as opposed to RT, as it's less resource-heavy and would still give a nice improvement over current techniques...... it also would have been the perfect stopgap on the way up to RT.

But it will ultimately depend on how good the consoles are at RT....... if they have the performance to run it well then maybe...just maybe.....it'll be OK.

You would think that if RT is hardware accelerated the core performance would be unaffected by its implementation, but that is not the case. I reckon that Nvidia's current accelerated solution (which may also be AMD's) is too limited, relying on other components of the GPU, hence the performance hit. The ideal situation is true hardware acceleration that runs parallel to the existing render pipeline.

In a sense it would be affected by RT......but the overall raw performance of the die wouldn't be.

Say Microsoft start with a 40 CU Navi GPU and have to sacrifice 10 CUs for RT...... that'll be a 25% reduction in raw GPU performance, but with this set-up running RT won't affect the utilisation of those remaining 30 CUs........ but turning off RT won't give more raw performance.... But as long as the RT cores are fast enough to make up for the reduction in CUs, it could actually work in their favour.

What's interesting is how much compute performance is going to be required for RT via shaders to match what will be possible via dedicated hardware.

If, for example, a next-generation COD game ships with RT enabled via hardware acceleration on Scarlett, it will still have 30 CUs' worth of GPU performance to handle the rest of the graphics.

Now...... what if this same COD game ships on PS5?? There could be several scenarios......

1- They ditch RT completely and have inferior lighting/shadows/reflections (but still not ugly-looking) but use all 40 CUs to push the visuals in other areas

2- They dedicate CUs to do RT via shaders.....but..... what if this requires 15 CUs to achieve parity with Scarlett's dedicated hardware? That'll leave 25 CUs for the rest of the graphics.....fewer than Scarlett!!! (See the sketch below.)
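As a back-of-the-envelope sketch of those scenarios (all CU counts here are the hypothetical examples from this post, not real specs for either console):

```cpp
// Back-of-the-envelope CU budget for the scenarios above.
// Every figure is the hypothetical example used in this post.
#include <cstdio>

int main()
{
    const int totalCUs       = 40; // hypothetical full Navi die
    const int cusForRtLogic  = 10; // CUs' worth of die traded for dedicated RT hardware
    const int cusForShaderRt = 15; // guessed shader-based RT cost to reach parity

    // Scarlett-style trade: fixed-function RT leaves fewer CUs,
    // but those CUs are untouched whether RT is on or off.
    printf("Hardware RT: %d CUs for rasterisation (%.0f%% raw cut)\n",
           totalCUs - cusForRtLogic, 100.0 * cusForRtLogic / totalCUs);

    // Shader-based trade: the full die with RT off, but RT on eats CUs directly.
    printf("Shader RT off: %d CUs\n", totalCUs);
    printf("Shader RT on:  %d CUs left for everything else\n",
           totalCUs - cusForShaderRt);
    return 0;
}
```

Which spits out 30 CUs for the hardware route and 25 CUs for the shader route with RT on, i.e. exactly the "fewer than Scarlett" situation above.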

If Scarlett's, and crucially AMD's, implementation of hardware-accelerated RT is robust and performs well, Scarlett may very well be the faster console in the real world.

Or have Sony and Cerny made custom tweaks to the core logic to enable better RT performance on PS5??

It's all very interesting and will be good to see how it all unfolds.


#8 scatteh316

Price won't be a problem if what they're offering for said price is good value.


#9 scatteh316

@Random_Matt said:

Yep, always said consoles would be worse off using off-the-shelf parts and becoming weak PCs. At least the Switch is different in that respect.

Consoles had to go that way......... with diminishing returns on new hardware, the cost of designing custom GPU/CPU logic is simply too high now to be worthwhile.

The generation after next (so PS6) will potentially be an even smaller jump, as hardware performance gains between generations are getting very small.

The golden era of 10x (or higher) power increases between generations is long gone.


#10 scatteh316

@Pedro said:

@scatteh316: You are aware that ray tracing will have a bigger effect on two of the three things you would like the "extra" power for, those being shaders and fidelity. Ray tracing objectively improves both in a manner that cannot be achieved through traditional means.

I know RT improves things....but as per my comment "I'm more than happy with the quality of reflections and lighting we currently have"

At least when doing RT via shaders there's the option to just turn it off and use that power for everything else...... an option not available with hardware acceleration.

I would have liked to have seen a voxel approach used next generation as opposed to RT, as it's less resource-heavy and would still give a nice improvement over current techniques...... it also would have been the perfect stopgap on the way up to RT.

But it will ultimately depend on how good the consoles are at RT....... if they have the performance to run it well then maybe...just maybe.....it'll be OK.