Did the PC just take away the consoles' optimization advantage?

#1  Edited By NoodleFighter
Member since 2011 • 11796 Posts

With Nvidia announcing resizable BAR support today to counter AMD's Smart Access Memory, it got me thinking about how PC hardware is now a step or two ahead of the consoles in architectural efficiency. We all thought the consoles' I/O decompression was unique to them, only for Nvidia to announce RTX IO and Microsoft to bring DirectStorage to PC.

7th generation consoles (Xbox 360, PS3 and Wii) and older had an optimization advantage in the past: PC hardware that was on par with, or even somewhat faster than, the consoles would be unable to keep up with them. But since the consoles moved to x86 and practically off-the-shelf PC parts, that optimization advantage has rapidly shrunk. PCs with hardware similar to what is in the consoles have been able to run games at similar settings and performance throughout most of the 8th generation. The gap got even smaller once the DirectX 12 and Vulkan APIs were introduced, which significantly reduced CPU overhead and allowed lower-level programming.

As Juub said in a similar thread about 2 months ago, optimization on consoles nowadays is less about efficiency in code and architecture and more about cutting back and tweaking visual effects to exactly what the consoles can handle. A prime example of this is Red Dead Redemption 2 on Xbox One X: in order to run at 4K on the One X, the game uses several graphical settings at a fidelity lower than the PC version's low settings, with most of the rest on low and medium. Gamers Nexus even showed in their $500 GTX 1060 PC vs PS5 video that it was somewhat difficult to mimic the PS5 graphics of Dirt 5 on the PC version, because some things such as spectators were completely removed from the PS5 version while still being present on PC.

With the introduction of DLSS, RTX GPUs can render games at a lower internal resolution while delivering image quality close to a higher resolution, keeping the performance benefit of the lower internal resolution. For example, in Watch Dogs Legion an RTX 2060 Super can run Xbox Series X settings (ray tracing on consoles is lower than the PC's low setting) at native 4K above 30fps, but with DLSS enabled that same GPU can run the game on Ultra settings above 30fps with hardly any sacrifice in image quality. The upcoming survival horror game The Medium is also going to support DLSS, and according to the developers it will have an RT quality mode that is exclusive to DLSS, so either it won't be on Xbox Series S|X or it will run at a low resolution and framerate.

With support for DLSS continuing to grow through adoption in major and minor titles, console users can't really downplay it as something only a small handful of games will use. DLSS is a big game changer: it can offer an 85% boost in framerate in COD Cold War and a 60% boost in Cyberpunk 2077. DLSS makes 4K at 90+fps on high/max settings an actual possibility on PC. The PS5 and Xbox Series S|X don't have an answer to DLSS despite being capable of machine learning. The Xbox Series X's machine learning performance is half that of an RTX 2060, and it doesn't help that the consoles lack dedicated hardware for machine learning, which means resources will be taken away from other tasks in order to do it, so it won't get the same performance benefit as it would on an Nvidia RTX card, which has tensor cores for machine learning.
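To put rough numbers on that (back-of-the-envelope only; the scale factors below are the commonly cited DLSS 2.x presets, and the uplift figures are just the percentages quoted above):

```python
# Back-of-the-envelope DLSS math: internal render resolution per preset and
# what the quoted framerate uplifts work out to. Scale factors are the
# commonly cited DLSS 2.x presets; uplift percentages are the ones above.

DLSS_SCALE = {               # fraction of the output resolution per axis
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w, out_h, preset):
    s = DLSS_SCALE[preset]
    return round(out_w * s), round(out_h * s)

for preset in DLSS_SCALE:
    print(preset, internal_resolution(3840, 2160, preset))
# Quality -> (2560, 1440), Performance -> (1920, 1080), etc.

base_fps = 60                                    # pretend native 4K framerate
print("Cold War-style +85%:", base_fps * 1.85)   # ~111 fps
print("Cyberpunk-style +60%:", base_fps * 1.60)  # 96 fps
```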

This video showing off AMD's SAM shows that, depending on the game, you can gain at least a 10% performance boost, going as high as 19%. We should expect similar results from Intel and Nvidia's Resizable BAR support.
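For anyone wondering what Resizable BAR / SAM actually changes: without it the CPU normally only sees VRAM through a small aperture (typically 256 MB), so large uploads get chunked or staged; with it the whole VRAM is directly addressable. A toy model of the difference (made-up sizes, not a real driver path):

```python
# Toy model of CPU -> VRAM uploads with and without Resizable BAR.
# Not a real driver path; it only illustrates why a bigger aperture means
# fewer chunked/staged transfers per upload.

import math

def upload_ops(asset_mb, bar_window_mb):
    """How many aperture-sized chunks are needed to push one asset to VRAM."""
    return math.ceil(asset_mb / bar_window_mb)

assets_mb = [512, 1024, 3000]        # e.g. texture/geometry streaming batches
legacy_bar_mb = 256                  # classic 256 MB BAR window
resizable_bar_mb = 16 * 1024         # whole 16 GB of VRAM mapped

for a in assets_mb:
    print(f"{a} MB asset: {upload_ops(a, legacy_bar_mb)} chunked writes "
          f"vs {upload_ops(a, resizable_bar_mb)} direct write")
```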

DLSS and Resizable BAR together make a wicked combo for boosting PC performance.

#2 deactivated-60bf765068a74
Member since 2007 • 9558 Posts

All I know is I've been doing quick resume on PC since the 1990s. I've had multiple games open and switched between StarCraft, Diablo 2 and Half-Life 2.

Quick Resume isn't new. It's not something that just came out of nowhere; we've been doing it for over 20 years. Microsoft, stop making up words for things people have been doing.

#3 Willy105
Member since 2005 • 26098 Posts

Developers will always pay less attention to optimizing PC ports because the target audience for a lot of the games will simply brute force through it with whatever hardware is sold at the time.

#4  Edited By mrbojangles25
Member since 2005 • 58305 Posts

PC is always pushing the limits, and that includes optimization; I don't think consoles ever really had any sort of advantage in that department. This is why five-year-old PCs with GTX 900 or 1000 series cards still look and run better than consoles for the most part.

With that said, always looking forward to more optimization. Likewise, I look forward to what Nvidia has to put out soon.

@Willy105 said:

Developers will always pay less attention to optimizing PC ports because the target audience for a lot of the games will simply brute force through it with whatever hardware is sold at the time.

Exactly. Blame the developer and publisher, not the hardware.

#5 hardwenzen
Member since 2005 • 38854 Posts

Almost bought a 3070 for $730cad today. Now i ask myself why i didn't 🤷‍♂️

#7 JasonOfA36
Member since 2016 • 3725 Posts

Nvidia, for the love of all things good, ADD DLSS TO TARKOV. The game uses bad TAA and performs badly even on a fairly good system.

#8 deactivated-6092a2d005fba
Member since 2015 • 22663 Posts

A case of tl;dr for me.

#9  Edited By hardwenzen
Member since 2005 • 38854 Posts

@jasonofa36 said:

Nvidia, for the love of all things good, ADD DLSS TO TARKOV. The game uses bad TAA and performs badly even on a fairly good system.

It's ridiculous. I'm losing 20-30fps simply by using a scope on my 1070, and that's with everything on the lowest setting except draw distance, textures, LOD, AA and AF. I just redownloaded the game yesterday (haven't played since Eternal came out), and when I see these kinds of drops in a 2016 game I just want to stop playing until the game is fully released and optimized. But who am I kidding, BSG can't optimize for shit.

#10 JasonOfA36
Member since 2016 • 3725 Posts

@hardwenzen said:
@jasonofa36 said:

Nvidia, for the love of all things good, ADD DLSS TO TARKOV. The game uses bad TAA and performs badly even on a fairly good system.

It's ridiculous. I'm losing 20-30fps simply by using a scope on my 1070, and that's with everything on the lowest setting except draw distance, textures, LOD, AA and AF. I just redownloaded the game yesterday (haven't played since Eternal came out), and when I see these kinds of drops in a 2016 game I just want to stop playing until the game is fully released and optimized. But who am I kidding, BSG can't optimize for shit.

No truer words than this. Literally worse than CDPR.

#11 Juub1990
Member since 2013 • 12620 Posts

Console optimization doesn't seem to be much of a thing anymore. One advantage they have is that the games almost always run properly, whereas on PC the devs sometimes botch it. If both are given proper care though, consoles don't seem to be significantly ahead with comparable hardware anymore.

#12 gtx021
Member since 2013 • 515 Posts

palit rtx 3090 in game.

1439 euros,

no one plays pc games

#14 HalcyonScarlet
Member since 2011 • 13664 Posts

"With Nvidia announcing their resizable-BAR support to counter AMD's Smart Access Memory today"

This isn't correct. It was never a counter; it's being done with AMD's help, and the method for how it is being done is identical. The reason AMD required specific hardware at first was because that is what AMD had it running stably on; the plan is to roll it out to additional and older hardware in time. AMD started a new PC standard with this, much like FreeSync, and AMD is actively working with Nvidia and Intel on it.

But no, in terms of software optimization, this doesn't take away any optimization abilities of consoles. This is one hardware optimization. Being fixed hardware, consoles have the advantage of more focused optimization at about every level: when developers are able to rely on a single set of specifications, they can highly tune the software for it. In a way, it's a similar story with Apple and their new Apple Silicon M1.

But in terms of hardware optimizations, this is great news for the PC. And hopefully more advances will come in the future.

#15 SecretPolice
Member since 2007 • 44066 Posts

The Pee Salty Seas are extra salty and turbulent this time of year. lol :P

#16 osan0
Member since 2004 • 17817 Posts

I wouldn't go as far as to say taken away, but it has certainly closed the gap.

Console and PC development has somewhat converged.

Console developers have moved away from the metal a bit. The games are not as sensitive to hardware changes as, say, a PS2 or PS3 game. For a bunch of reasons it just doesn't make sense to work at such a low level any more like they did on the likes of the PS2.

On the flip side, AMD (at the behest of developers) kicked off a lower-level API that gives developers closer access to the hardware. Mantle has since been retired, but its approach became the basis for Vulkan and DX12. Like a console, it means less hand-holding at the API level and the developer needs to put in more work to get the same result, but it also gives them a lot more control, and it's an API far better suited to getting the most out of modern PC hardware.

As for Resizable BAR, I wouldn't be surprised if it was already in the PS4 and X1 (or they just set it to 8GB or something). It has been part of the PCI-E spec since PCI-E 2.0 as far as I know, and as far as I am aware the CPU and GPU still communicate over PCI-E in the current and last gen consoles even though it's all on an SoC. So I would be surprised if MS and Sony didn't catch that inefficiency when making their consoles, especially when they are trying to claw back as much CPU time as possible.

#17  Edited By R4gn4r0k
Member since 2004 • 46283 Posts

@hardwenzen said:

Almost bought a 3070 for $730cad today. Now i ask myself why i didn't 🤷‍♂️

Yeah man you could've sold it and used that money to put two kids through graduation.

#18 hardwenzen
Member since 2005 • 38854 Posts

@R4gn4r0k said:
@hardwenzen said:

Almost bought a 3070 for $730cad today. Now i ask myself why i didn't 🤷‍♂️

Yeah man you could've sold it and used that money to put two kids through graduation.

Correction: five kids 😂

The gpu was from Asus and i kinda want EVGA for the amazing warranty.

#19 R4gn4r0k
Member since 2004 • 46283 Posts

@hardwenzen said:

Correction: five kids 😂

The gpu was from Asus and i kinda want EVGA for the amazing warranty.

+1 man, I'd rather just pick a card that's just right instead of getting the one I was able to click on fast enough xD

Oh well, I'm sure patience will be rewarded. I expect a 3070 super, 3080ti,... down the line.

#20 PC_Rocks
Member since 2018 • 8471 Posts

There never was any optimization advantage to begin with. It was a myth perpetuated by developers and console makers to DC the superior PC hardware and throw a bone to consolites.

#21 judaspete
Member since 2005 • 7270 Posts

Well, my i5, 1650 combo may last me longer than I thought :)

#22  Edited By 04dcarraher
Member since 2004 • 23829 Posts

The console optimization advantage started to go by the wayside back in 2011. Once PC games/devs started to use DirectX 11 and leverage hardware resources in a more correct manner, we started to see massive differences in performance and quality vs 360/PS3 era games, because for the longest time PC was stuck using only one thread to feed the GPU "draw calls" (everything the GPU needs to render a frame). Hence the quotes from devs beforehand stating consoles were able to surpass PC hardware equivalents.

Then once AMD introduced Mantle back in 2013, it brought an API with the same level of control as consoles. Once DirectX 12 and Vulkan offered that same level of control, it was over. The last two generations of consoles have been using the same APIs or similar ones. The only thing consoles have now is that there's only a single set, or a few sets, of hardware devs have to code for, letting them squeeze out as much as possible.
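A crude way to picture that single-thread vs multi-thread difference (a toy sketch of the pattern only, not a real graphics API):

```python
# Toy sketch of "one thread feeds the GPU" vs "every core records its own
# command list" -- conceptually the shift Mantle/DX12/Vulkan enabled.
# This is NOT a real graphics API, just the threading pattern.

from concurrent.futures import ThreadPoolExecutor

draw_calls = [f"draw(object_{i})" for i in range(10_000)]

def record_command_list(calls):
    # Pretend this is the per-call validation/recording work the CPU does.
    return [f"cmd:{c}" for c in calls]

# "DX11 style": a single thread records every draw call.
single_threaded = record_command_list(draw_calls)

# "DX12/Vulkan style": four workers each record their own command list,
# and only the final submit is serialized on one queue.
chunks = [draw_calls[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    command_lists = list(pool.map(record_command_list, chunks))

gpu_queue = [cmd for cl in command_lists for cmd in cl]   # serialized submit
print(len(single_threaded), len(gpu_queue))               # same amount of work
```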

Another thing is that many console fanboys before the Xbox One and PS4 era had the hardest time coming to terms with the fact that optimization also includes using lower quality assets and settings to reach a set performance target vs PC.

#23  Edited By Silentchief
Member since 2021 • 6865 Posts

@mrbojangles25 said:

PC is always pushing the limits, and that includes optimization; I don't think consoles ever really had any sort of advantage in that department. This is why five-year-old PCs with GTX 900 or 1000 series cards still look and run better than consoles for the most part.

With that said, always looking forward to more optimization. Likewise, I look forward to what Nvidia has to put out soon.

@Willy105 said:

Developers will always pay less attention to optimizing PC ports because the target audience for a lot of the games will simply brute force through it with whatever hardware is sold at the time.

Exactly. Blame the developer and publisher, not the hardware.

Lol are you saying a 900 series and a 1000 series will beat the PS5 and XSX for the most part?

#24 Eoten
Member since 2020 • 8671 Posts

Consoles never had an advantage. "Optimization" meant reducing settings and decreasing the frame rate so a game would actually run on a console.

#25 Gatygun
Member since 2010 • 2709 Posts

PC has had better optimization than consoles for a while now. Consoles have been falling drastically behind for a good half decade.

#26 Juub1990
Member since 2013 • 12620 Posts

@04dcarraher said:

The console optimization advantage started to go by the wayside back in 2011. Once PC games/devs started to use DirectX 11 and leverage hardware resources in a more correct manner, we started to see massive differences in performance and quality vs 360/PS3 era games, because for the longest time PC was stuck using only one thread to feed the GPU "draw calls" (everything the GPU needs to render a frame). Hence the quotes from devs beforehand stating consoles were able to surpass PC hardware equivalents.

Then once AMD introduced Mantle back in 2013, it brought an API with the same level of control as consoles. Once DirectX 12 and Vulkan offered that same level of control, it was over. The last two generations of consoles have been using the same APIs or similar ones. The only thing consoles have now is that there's only a single set, or a few sets, of hardware devs have to code for, letting them squeeze out as much as possible.

Another thing is that many console fanboys before the Xbox One and PS4 era had the hardest time coming to terms with the fact that optimization also includes using lower quality assets and settings to reach a set performance target vs PC.

Yeah I remember being amazed at how Xbox with its 733MHz Pentium III (or was it Celeron?) CPU managed to run Fable so well when my PC with my Pentium IV at like 1GHz struggled...except I ran it maxed out at 1024x768 with WAY better graphics. Only noticed that after I played the Xbox version again. Realized it looked like dogshit compared to the PC one.

#27 NoodleFighter
Member since 2011 • 11796 Posts
@girlusocrazy said:

DLSS is a big deal, but reconstruction has already been done on consoles too; even if they don't call it DLSS, they'll be able to take advantage of the same type of tech. We'll have to see how accessible it is to developers and how they take advantage of it. There are some last gen comparisons out there (DLSS has a clear advantage apart from the motion trail flaws). New reconstruction techniques are bound to improve things on the current gen.

Either way developers will probably end up targeting the consoles and building out from there including the PC version. Developers will probably end up pushing for fidelity at the cost of frame rates on console to match up with PC.

Reconstruction has already been done on consoles, but it isn't as good as DLSS, which offers a reconstructed image on par with or better than native at a fraction of the performance cost. Checkerboard rendering is noticeably worse than native and does not provide the performance boost DLSS does. Digital Foundry even said in their video comparing DLSS to PS4 Pro checkerboard rendering in Death Stranding that DLSS runs at a lower internal resolution yet looks better than the PS4 Pro's checkerboard rendering, which runs at a higher internal resolution. Sony and Microsoft may be working on their own DLSS alternative, but we don't know if it will give the same performance and visual benefit that DLSS does, since DLSS has dedicated hardware for it while AMD's solution requires using shader cores.
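For anyone unfamiliar with checkerboard rendering: each frame only shades half the pixels in a checkerboard pattern and the gaps get filled from neighbours and the previous frame, with no learned model involved. A rough toy version of the fill step (simple neighbour averaging; real console implementations are much smarter about reprojection):

```python
# Toy checkerboard reconstruction with NumPy: each frame only shades half the
# pixels, and the missing half is filled by averaging horizontal neighbours.
# Real console implementations also reproject from the previous frame; this
# only shows the basic hole-filling idea.

import numpy as np

def checkerboard_mask(h, w, frame_parity):
    yy, xx = np.mgrid[0:h, 0:w]
    return (yy + xx) % 2 == frame_parity        # the half that gets shaded

def reconstruct(rendered, mask):
    out = rendered.copy()
    left = np.roll(rendered, 1, axis=1)
    right = np.roll(rendered, -1, axis=1)
    out[~mask] = ((left + right) / 2)[~mask]    # fill holes from neighbours
    return out

h, w = 1080, 1920
full = np.fromfunction(lambda y, x: np.sin(x / 40) + np.cos(y / 40), (h, w))
mask = checkerboard_mask(h, w, 0)
half_rendered = np.where(mask, full, 0.0)       # only half the pixels shaded
approx = reconstruct(half_rendered, mask)
print("mean abs error:", float(np.abs(approx - full).mean()))
```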

PC gaming is way more popular now than it was in the past, with major companies such as Take-Two stating that half their revenue from multiplatform titles comes from PC. That is also likely a major reason why PC ports aren't as bad as they were in previous gens; devs/pubs expected low sales on PC, so they didn't bother making proper PC versions of games. CD Projekt, the same devs that stated they downgraded The Witcher 3 because they needed console sales, prioritized PC over consoles with Cyberpunk 2077 and it massively paid off. I imagine devs will start using the PC hardware equivalent of the Xbox Series S as the baseline once we are a couple of years into this gen and cross-gen games are done. Of course devs will push graphics over framerate, since graphics are what sells, but that will make the gap between PC and consoles more noticeable. Since they're all chasing 4K, which is a really demanding resolution, this will give PC its moment to shine with DLSS, especially on RTX GPUs that are on par with the PS5 and Xbox Series X.

@Juub1990 said:

Console optimization doesn't seem to be much of a thing anymore. One advantage they have is that the games almost always run properly, whereas on PC the devs sometimes botch it. If both are given proper care though, consoles don't seem to be significantly ahead with comparable hardware anymore.

Most games have been fairly consistent in their optimization treatment on all platforms this past generation. If a game runs like crap on PC, chances are it's running badly on consoles too.

@jasonofa36 said:

Nvidia, for the love of all things good, ADD DLSS TO TARKOV. The game uses bad TAA and performs badly even on a fairly good system.

Red Dead Redemption 2 could also use DLSS. It would be perfect if that game added it. Not even an RTX 3080 can run it maxed out with a solid 60fps at 4K, but with DLSS it certainly could!

#28 JasonOfA36
Member since 2016 • 3725 Posts

@NoodleFighter: Yes, RDR 2. What a bad way to implement TAA on that game. DLSS could probably clean up that TAA clutter.

#29 Eoten
Member since 2020 • 8671 Posts

@Juub1990 said:
@04dcarraher said:

The console optimization advantage started to go by the wayside back in 2011. Once PC games/devs started to use DirectX 11 and leverage hardware resources in a more correct manner, we started to see massive differences in performance and quality vs 360/PS3 era games, because for the longest time PC was stuck using only one thread to feed the GPU "draw calls" (everything the GPU needs to render a frame). Hence the quotes from devs beforehand stating consoles were able to surpass PC hardware equivalents.

Then once AMD introduced Mantle back in 2013, it brought an API with the same level of control as consoles. Once DirectX 12 and Vulkan offered that same level of control, it was over. The last two generations of consoles have been using the same APIs or similar ones. The only thing consoles have now is that there's only a single set, or a few sets, of hardware devs have to code for, letting them squeeze out as much as possible.

Another thing is that many console fanboys before the Xbox One and PS4 era had the hardest time coming to terms with the fact that optimization also includes using lower quality assets and settings to reach a set performance target vs PC.

Yeah I remember being amazed at how Xbox with its 733MHz Pentium III (or was it Celeron?) CPU managed to run Fable so well when my PC with my Pentium IV at like 1GHz struggled...except I ran it maxed out at 1024x768 with WAY better graphics. Only noticed that after I played the Xbox version again. Realized it looked like dogshit compared to the PC one.

Yup. I had Skyrim when it came out on Xbox 360 and was surprised by it, but when I got it on PC I realized the "optimized" console version was actually low settings at a lower frame rate. Now they use different gimmicks, like "dynamic resolution" to advertise games as using higher graphics settings and resolutions than they actually do. DLSS as well: while it may be able to sharpen a lower resolution picture, it cannot actually increase the resolution itself. It cannot add details that aren't in the lower resolution image.
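For reference, "dynamic resolution" in most engines is just a feedback loop on frame time: drop the render scale when the last frames ran over budget, raise it when there's headroom. A minimal sketch of that kind of controller (made-up thresholds and frame times, not any specific engine's logic):

```python
# Minimal sketch of a dynamic-resolution controller: nudge the render scale
# up or down based on how the last frame compared to the frame-time budget.
# Thresholds, step size and the frame times are all made up.

TARGET_MS = 16.7                     # 60 fps budget
STEP = 0.05                          # change the render scale 5% at a time
MIN_SCALE, MAX_SCALE = 0.5, 1.0

def update_render_scale(scale, last_frame_ms):
    if last_frame_ms > TARGET_MS * 1.05:     # over budget -> drop resolution
        return max(MIN_SCALE, scale - STEP)
    if last_frame_ms < TARGET_MS * 0.90:     # lots of headroom -> raise it
        return min(MAX_SCALE, scale + STEP)
    return scale

scale = 1.0
for frame_ms in [15.0, 18.5, 19.2, 17.8, 14.1, 13.9]:
    scale = update_render_scale(scale, frame_ms)
    print(f"{frame_ms:5.1f} ms -> render at {int(3840*scale)}x{int(2160*scale)}")
```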

#30 glez13
Member since 2006 • 10310 Posts

One system (or, like last gen, two) will never be beaten in optimization by another that can have hundreds of possible hardware configurations. PC will remain in the same place it has always been, brute forcing its way through optimization.

#31 Pedro
Member since 2002 • 69479 Posts

Console "optimization advantage" died when consoles started using PC hardware.

#33 appariti0n
Member since 2009 • 5013 Posts

@eoten: I thought that's exactly what DLSS does, uses AI to predict what something would look like at higher resolution essentially?

#34 firedrakes
Member since 2004 • 4365 Posts

I don't have an issue with DLSS and checkerboarding tech.

I just hate the PR departments saying "native 2K or 4K" when the term doesn't mean what they want it to mean.

#35 NoodleFighter
Member since 2011 • 11796 Posts

@girlusocrazy: It could also be that the developers simply aren't that good at coding in general, which is why all versions of a game perform badly; plus it is the norm these days for games to launch broken and be patched later. I remember the Ark Survival Evolved devs saying that the PS4 Pro is equal to a $900 PC, which turned out to be false, and when you look at how horribly their game is optimized it makes you question the validity of their statement even more.

#36 miiiiv
Member since 2013 • 943 Posts

@NoodleFighter said:

With support for DLSS continuing to grow through adoption in major and minor titles, console users can't really downplay it as something only a small handful of games will use. DLSS is a big game changer: it can offer an 85% boost in framerate in COD Cold War and a 60% boost in Cyberpunk 2077. DLSS makes 4K at 90+fps on high/max settings an actual possibility on PC. The PS5 and Xbox Series S|X don't have an answer to DLSS despite being capable of machine learning. The Xbox Series X's machine learning performance is half that of an RTX 2060, and it doesn't help that the consoles lack dedicated hardware for machine learning, which means resources will be taken away from other tasks in order to do it, so it won't get the same performance benefit as it would on an Nvidia RTX card, which has tensor cores for machine learning.

Is the machine learning performance really that low on the Series X? I don't necessarily doubt it, but a source would be nice.

#37  Edited By Eoten
Member since 2020 • 8671 Posts

@appariti0n said:

@eoten: I thought that's exactly what DLSS does, uses AI to predict what something would look like at higher resolution essentially?

How can it possibly show you details that do not or cannot exist in the lower resolution rendering DLSS uses? It can't. Do people want "predicted" 4K or actual 4K? All it does is take a lower resolution image, like 1080p, which 95%+ of gamers aren't even going to see much of a difference from to begin with, and then sharpen it with additional anti-aliasing.

So the first problem with DLSS is that most gamers just don't sit close enough to screens big enough to see a huge difference between 1080p and 4K to begin with (a 24" 1080p monitor is still by far the most common display for gaming). And if you are someone with a large television where you will see a difference in resolution, then there are going to be some fine details in the native 4K image that will not exist with DLSS, such as an enemy player off in the distance.

It's console makers and GPU manufacturers faking 4K so they can advertise 4K without ever actually delivering it. It simply makes 1080P look a little bit sharper.

#38  Edited By NoodleFighter
Member since 2011 • 11796 Posts

@miiiiv: Digital Foundry's video comparing DLSS to Checkerboard rendering in Death Stranding

#39  Edited By appariti0n
Member since 2009 • 5013 Posts

@eoten:

How can it possibly show you details that do not or cannot exist in the lower resolution rendering DLSS uses? It can't.

By using AI: a learning algorithm that has been repeatedly fed both hi and lo res versions of the same image, allowing it to make predictions about what details should be there in a similar lo res image. The first time I encountered something like this was actually the first Samsung 8K TV we had in the store where I worked.

It's still not true 4K, but it's also not checkerboard rendering either. Different approaches. Checkerboard can only re-draw based on the existing half of the checkerboard, whereas DLSS should in theory be smart enough to recognize something like pavement, and infer that hey, there should be little cracks, maybe pebbles, etc. So yes, it CAN indeed add detail that was not there in the original image.

Edit: quoted wrong part, and used a wrong word. I need more coffee.
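To make the "fed both hi and lo res versions of the same image" part concrete: training data for this kind of upscaler is typically generated by downscaling high-res frames, so the network always has the ground truth to learn from. A toy sketch (NumPy average-pooling standing in for a real capture pipeline):

```python
# Toy illustration of how (low-res, high-res) training pairs for a learned
# upscaler are made: take ground-truth high-res frames and downscale them, so
# the network is always trained against detail it has "seen" at full quality.
# NumPy average-pooling stands in for a real capture/downsample pipeline.

import numpy as np

def downscale_2x(frame):
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def make_training_pairs(high_res_frames):
    return [(downscale_2x(f), f) for f in high_res_frames]   # (input, target)

frames = [np.random.rand(2160, 3840) for _ in range(4)]      # pretend 4K captures
pairs = make_training_pairs(frames)
print(pairs[0][0].shape, pairs[0][1].shape)                  # (1080, 1920) (2160, 3840)
```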

#40  Edited By Eoten
Member since 2020 • 8671 Posts

@appariti0n said:

@eoten:

How can it possibly show you details that do not or cannot exist in the lower resolution rendering DLSS uses? It can't.

By using AI: a learning algorithm that has been repeatedly fed both hi and lo res versions of the same image, allowing it to make predictions about what details should be there in a similar lo res image. The first time I encountered something like this was actually the first Samsung 8K TV we had in the store where I worked.

It's still not true 4K, but it's also not checkerboard rendering either. Different approaches. Checkerboard can only re-draw based on the existing half of the checkerboard, whereas DLSS should in theory be smart enough to recognize something like pavement, and infer that hey, there should be little cracks, maybe pebbles, etc. So yes, it CAN indeed add detail that was not there in the original image.

Edit: quoted wrong part, and used a wrong word. I need more coffee.

I know what AI upscaling is. Like ray tracing, it's nothing new. It's just not used in real-time. AI upscaling of lower resolution textures will sharpen what is there, and may try to predict what could be there, but it cannot add random details that aren't there. It will make lower resolution textures appear sharper, but not more detailed, and if there are small details that would be apparent in a higher resolution, but not a lower resolution image, then the AI, not even knowing those details exist, won't include them.

DLSS is sharpened 1080p, or some other lower resolution, and should not be considered 4K. For the tiny, and I mean TINY, fringe of PC gamers who may try to game on a 4K screen and are actually sitting close enough to a big enough screen to notice the smaller detail over 1440p, you're missing out using DLSS. And unless your desktop monitor is larger than 32 inches, you're not going to see any additional detail over 1440p. In fact, if DLSS uses a lower internal resolution than 1440p, then it is very likely that native 1440p without DLSS will have more detail and be able to sustain higher frame rates at the same time.

Likewise, let's stop pretending dynamic resolution is anything more than delivering one resolution while advertising another. I also don't know a single PC gamer who wouldn't rather reduce graphics settings if necessary to maintain a clear picture at a stable frame rate than have their game go blurry on them whenever any kind of action occurs.

DLSS is not 4K. Dynamic resolution is not 4K. And the truth is neither console, nor the 3080 and 6800XT GPUs can deliver that at a decent, stable frame rate in next generation gaming.

#41 appariti0n
Member since 2009 • 5013 Posts

@eoten: I never said DLSS is 4K. I'm just saying it's a different method than checkerboard or simple "upscaling", and needs to stop being represented as such.

While this isn't happening right now, it is entirely within the realm of possibility for future versions of DLSS to even add objects that aren't there, based on enough prior learning, e.g. pebbles on a road, extra grass, extra hair, etc.

#42 rzxv04
Member since 2018 • 2578 Posts

Baseline targets are still mostly consoles and that's why a lot of ultra settings have diminishing returns. It's relative per gen and consoles define the bigger upticks in common usage.

A game targeting a 1.3 TFLOP baseline vs one that truly targets 12 TFLOPs: the latter is going to be visually more striking in its VFX than a game that just oversharpens that 1.3 TFLOP baseline on a 12 TFLOP PC GPU.

I don't know why a lot of major publishers and devs still tie their tech to the consoles.

Do they simply not have confidence in owners of a 2070+ (or roughly equivalent PC GPU power)? Poor confidence in owners of SSDs that are a few years old?

Hopefully CP2077 gives pubs and devs more confidence in making games with a baseline target of 3070 owners rather than the PS5.

#43  Edited By xantufrog  Moderator
Member since 2013 • 17875 Posts

@eoten: "AI upscaling" (DLSS) can indeed show details that "aren't there" because it is literally a generative model. We use this general principle a lot in neuroscience researching using deep learning. Given a sparse input - lacking many details (say, a 1080p render) - the algorithm completes out details from much higher fidelity renders that it has associated with that "sketch". Because neural networks can be very complex, this completion of details can be highly specific and context dependent. This is why DLSS has some artifacting, on the one hand - especially during fast motion the cues used to complete out the image are unreliable and can lead to blurring. On the other hand, this is why DLSS can outperform native rendering in more static situations in terms of detail (that depends on how high fidelity the training data were and how low-res the native rending "cues" are, though). We already have examples of both happening in games.

This is why I view it as an alternative rendering technique, more than simply upscaling.
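To give a concrete picture of what a small generative upscaler looks like in code, here is a tiny ESPCN-style sub-pixel network in PyTorch. This is purely illustrative and nowhere near Nvidia's actual DLSS model, which is a much larger temporal network that also consumes motion vectors and depth:

```python
# Tiny ESPCN-style super-resolution network in PyTorch: convolutions extract
# features from the low-res frame, then PixelShuffle rearranges channels into
# a higher-resolution image. Purely illustrative -- DLSS proper is a far
# larger temporal network that also takes motion vectors and depth.

import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),   # (B, 3*s*s, H, W) -> (B, 3, H*s, W*s)
        )

    def forward(self, low_res):
        return self.net(low_res)

model = TinyUpscaler(scale=2)
low_res = torch.rand(1, 3, 540, 960)          # e.g. a 960x540 internal render
print(model(low_res).shape)                   # torch.Size([1, 3, 1080, 1920])
```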