Sorry tormentos, PS5 lost the GPU wars

#301 hrt_rulz01
Member since 2006 • 20024 Posts

@i_p_daily said:

@Uruz7laevatein: Which cow alt are you???

Was wondering the same thing... lol.

#302 BlackShirt20
Member since 2005 • 1642 Posts

@tormentos: I never once claimed PS5 had no hardware dedicated ray tracing.

https://youtu.be/NJ_mODupiDg

You will see the Xbox Series X has 25-30% extra raw power, and it's likely the PS5 will be locked at 1800p to match the performance of the Xbox Series X. This is just what they are claiming.
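For reference, the peak-TFLOPS figures being argued over follow directly from the public specs (PS5: 36 CUs at up to 2.23 GHz; Series X: 52 CUs at 1.825 GHz). A quick back-of-the-envelope check, in Python:

```python
# Peak FP32 throughput = CUs * 64 shaders/CU * 2 ops/cycle (FMA) * clock
def tflops(cus: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 TFLOPS for an RDNA-style GPU."""
    return cus * 64 * 2 * clock_ghz / 1000

ps5 = tflops(36, 2.23)    # ~10.28 TFLOPS (at max boost clock)
xsx = tflops(52, 1.825)   # ~12.15 TFLOPS (fixed clock)
print(f"PS5: {ps5:.2f} TF, XSX: {xsx:.2f} TF, gap: {(xsx / ps5 - 1) * 100:.0f}%")
```

Note this is peak arithmetic only: on those numbers the gap is roughly 18%, not 25-30%; a larger figure only appears if the PS5 runs below its maximum clock.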

#303 kuu2
Member since 2005 • 11271 Posts

Wow, this thread is an amazing watch.

#304 Uruz7laevatein
Member since 2009 • 133 Posts

@BlackShirt20: Not really. Resolution is mainly bound by ROPs and memory bandwidth; in real-world terms the difference in TFLOPS mostly amounts to turning shader/texture settings up a slight notch.

#305  Edited By Gatygun
Member since 2010 • 1968 Posts

@ronvalencia said:

@tormentos:

On PS5 clock speed issue,

https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-specs-and-tech-that-deliver-sonys-next-gen-vision

Mark Cerny freely admits that CPU and GPU won't always be running at 3.5GHz and 2.23GHz respectively.

"When that worst case game arrives, it will run at a lower clock speed. But not too much lower, to reduce power by 10 per cent it only takes a couple of percent reduction in frequency, so I'd expect any downclocking to be pretty minor," he explains. "All things considered, the change to a variable frequency approach will show significant gains for PlayStation gamers."

Cerny is revealing the non-linear power curve for RDNA 2.

For the PS5, Sony didn't budget TDP for max clock speeds on both CPU and GPU, since Sony assumes the CPU won't always be fully maxed out; e.g., do you need max CPU usage for a single-player story that doubles as an interactive movie?

Do you need AVX2's gather instructions for CPU software 3D rendering? There's 896 GFLOPS of FP32 from the CPU.

Microsoft budgeted the XSX's TDP like a gaming PC, i.e. max everything, e.g. a heavy-CPU large-scale RTS game with max GPU usage.

Microsoft is realistic: if you're talking about big open worlds with lots of data, CPU performance is required. On a 9900K at 5.1GHz I see dips to 61 fps because of my CPU at 1080p. Go tell me how that CPU isn't going to be maxed out straight out of the gate in games like that.

Sony's whole argument is all over the place: we need beautiful big open-world games that load really fast, yet we underclock our hardware when exactly those features are pushed. That 10.3 TFLOP GPU isn't pushing 10.3 TFLOPS even remotely when games come out that need the CPU cores for 60 fps gaming, and yes, those cores will be pushed to their limits, because clock speed means everything. It's also not like they can clock individual cores down or disable SMT, because loads will increase drastically again when that 5.5GB/s needs to be processed by your CPU. And no, their custom I/O isn't going to handle that on its own even remotely, unless they've got another 8-core Ryzen in that box.

It's a badly designed box that was overclocked for the sake of trying to steal some thunder from the Xbox Series X, which walks circles around it.

Microsoft understands what people need, and that's what they pushed.

PS5 is pretty much the Xbox One, with the Xbox Series X as the PS4, at this point.

Also, good luck getting devs to develop for your SSD that nobody else uses. See how well that worked for them in the Cell era.

Their idiotic chase for SSD speed is most likely going to cost them multiplatform games, which are what the majority of console gamers play.
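For context on the clock debate: Cerny's claim that a ~10% power cut needs only a couple of percent of frequency comes from the non-linear power curve mentioned above. A first-order sketch (assuming power scales roughly with frequency cubed, since voltage rises roughly linearly with frequency near the top of the curve — a textbook CMOS approximation, not AMD's measured curve):

```python
# First-order CMOS model: P is proportional to f * V^2, and near the top
# of the voltage/frequency curve V rises ~linearly with f, so P ~ f^3.
target_power = 0.90                   # cut power by 10%
freq_scale = target_power ** (1 / 3)  # frequency multiplier needed to hit it
print(f"frequency reduction needed: {(1 - freq_scale) * 100:.1f}%")  # ~3.5%
```

Under this model a 10% power reduction costs about 3.5% of clock speed, consistent with the "couple of percent" figure in the quote.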

#306 tormentos
Member since 2003 • 30905 Posts

@Gatygun said:

Microsoft is realistic: if you're talking about big open worlds with lots of data, CPU performance is required. On a 9900K at 5.1GHz I see dips to 61 fps because of my CPU at 1080p. Go tell me how that CPU isn't going to be maxed out straight out of the gate in games like that.

Sony's whole argument is all over the place: we need beautiful big open-world games that load really fast, yet we underclock our hardware when exactly those features are pushed. That 10.3 TFLOP GPU isn't pushing 10.3 TFLOPS even remotely when games come out that need the CPU cores for 60 fps gaming, and yes, those cores will be pushed to their limits, because clock speed means everything. It's also not like they can clock individual cores down or disable SMT, because loads will increase drastically again when that 5.5GB/s needs to be processed by your CPU. And no, their custom I/O isn't going to handle that on its own even remotely, unless they've got another 8-core Ryzen in that box.

It's a badly designed box that was overclocked for the sake of trying to steal some thunder from the Xbox Series X, which walks circles around it.

Microsoft understands what people need, and that's what they pushed.

PS5 is pretty much the Xbox One, with the Xbox Series X as the PS4, at this point.

Also, good luck getting devs to develop for your SSD that nobody else uses. See how well that worked for them in the Cell era.

Their idiotic chase for SSD speed is most likely going to cost them multiplatform games, which are what the majority of console gamers play.

The misunderstanding from some people is the most lol-worthy thing here.

That's because at 1080p the load on the CPU is higher; the PS5 and Series X don't target 1080p. Second, stop comparing to PC, where problems are solved by brute-forcing them rather than clever programming. We have some impressively big games like RDR2 that were made to run on a damn Jaguar at 1.6GHz, with not even 8 cores because some were reserved. We're talking about the latest-gen Ryzen at 3.5GHz with 8 cores and 16 threads, man. I don't see any problem on the CPU side at all; hell, Sony uses hardware to offload not only the incredibly fast SSD but also audio.

It's obvious you don't know shit about this. That GPU will run at 2.23GHz; when the CPU requires a little more power, the GPU lowers its power in the worst case by 10%, but that doesn't mean 10% less performance, because the hit to the 2.23GHz frequency is said to be around 3% or something like that.

Nothing that can't be handled by some adjustment to a detail on a mountain far away from you. Not only that, the SSD on Sony's side may save the GPU some time as well. The GPU is always rendering a certain cone of vision, which is why whenever you turn, that section is already there: if you rotate left, things to the right behind you stop being rendered while the GPU continues to render to the left. This is why you sometimes get pop-in when your storage isn't fast enough. The one in the PS5 can fill those 16GB of RAM in 2 seconds, that's how fast that thing is, so rather than having the PS5 GPU working overtime to render a wide cone, that cone can be reduced because the SSD is feeding data much, much faster.

Now that is some horseshit. What do you people get out of inventing this kind of thing, really?

The same shit happened when the Xbox One was weaker and you people couldn't stop inventing magic APIs and cloud power and every other thing you could find to claim the gap would be small or nonexistent; the worst offenders claimed the Xbox One would even beat the PS4.

The PS5 was designed to run at 2.23GHz; it wasn't a last-minute upclock. You can't just OC a GPU to a speed that no other commercially released console or PC part has shipped at and expect it to work.

In fact, the first PS5 SoC leak claimed 2.0GHz in early testing. The cooling solution they have must be out of this world and far superior to the Xbox Series X's.

"and no their custom I/O isn't going to handle that on its own even remotely, unless they've got another 8-core Ryzen in that box."

I find this especially funny; it's like you people don't want to read. The information is out there, so why wouldn't you read it? Are you people afraid that in your mind the Xbox will lose value if it doesn't obliterate the PS5?

"By the way, in terms of performance, that custom decompressor equates to nine of our Zen 2 cores, that's what it would take to decompress the Kraken stream with a conventional CPU," Cerny reveals.

A dedicated DMA controller (equivalent to one or two Zen 2 cores in performance terms) directs data to where it needs to be, while two dedicated, custom processors handle I/O and memory mapping. On top of that, coherency engines operate as housekeepers of sorts.

It's funny that you say no I/O is going to handle that unless another 8-core Ryzen is added; well, Sony has hardware to offload those jobs equivalent to 9 Zen 2 cores.

So how do you not know this? In fact, the DF article explains it very well.

Yeah, Sony isn't good at anything; MS will destroy Sony with the mighty 12TF machine that will render games in 16K at 240fps while the PS5 drops its GPU power to 1.3TF and runs games at 720p 25fps.

I have to save all these threads; they'll be gold in a few months.
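The "fill 16GB of RAM in 2 seconds" figure above can be sanity-checked against Sony's published SSD numbers (5.5 GB/s raw reads, with 8-9 GB/s typical once Kraken decompression is applied):

```python
ram_gb = 16                # PS5's total GDDR6 pool
raw_gbps = 5.5             # raw SSD read speed (Sony's stated figure)
kraken_gbps = 8.5          # typical effective speed with decompression
print(f"raw fill time:        {ram_gb / raw_gbps:.1f} s")    # ~2.9 s
print(f"compressed fill time: {ram_gb / kraken_gbps:.1f} s") # ~1.9 s
```

So "about 2 seconds" holds for compressed data; on raw throughput alone it is closer to 3 seconds.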

#307 Juub1990
Member since 2013 • 9651 Posts

@tormentos said:

Yeah, Sony isn't good at anything; MS will destroy Sony with the mighty 12TF machine that will render games in 16K at 240fps while the PS5 drops its GPU power to 1.3TF and runs games at 720p 25fps.

Way to exaggerate. The PS5 will do at least 30fps at 900p.

#308 tormentos
Member since 2003 • 30905 Posts

@Juub1990 said:
@tormentos said:

Yeah, Sony isn't good at anything; MS will destroy Sony with the mighty 12TF machine that will render games in 16K at 240fps while the PS5 drops its GPU power to 1.3TF and runs games at 720p 25fps.

Way to exaggerate. The PS5 will do at least 30fps at 900p.

My bad, you know how I get. You're right.

#309 Gatygun
Member since 2010 • 1968 Posts

@tormentos said:
@Gatygun said:

Microsoft is realistic: if you're talking about big open worlds with lots of data, CPU performance is required. On a 9900K at 5.1GHz I see dips to 61 fps because of my CPU at 1080p. Go tell me how that CPU isn't going to be maxed out straight out of the gate in games like that.

Sony's whole argument is all over the place: we need beautiful big open-world games that load really fast, yet we underclock our hardware when exactly those features are pushed. That 10.3 TFLOP GPU isn't pushing 10.3 TFLOPS even remotely when games come out that need the CPU cores for 60 fps gaming, and yes, those cores will be pushed to their limits, because clock speed means everything. It's also not like they can clock individual cores down or disable SMT, because loads will increase drastically again when that 5.5GB/s needs to be processed by your CPU. And no, their custom I/O isn't going to handle that on its own even remotely, unless they've got another 8-core Ryzen in that box.

It's a badly designed box that was overclocked for the sake of trying to steal some thunder from the Xbox Series X, which walks circles around it.

Microsoft understands what people need, and that's what they pushed.

PS5 is pretty much the Xbox One, with the Xbox Series X as the PS4, at this point.

Also, good luck getting devs to develop for your SSD that nobody else uses. See how well that worked for them in the Cell era.

Their idiotic chase for SSD speed is most likely going to cost them multiplatform games, which are what the majority of console gamers play.

The misunderstanding from some people is the most lol-worthy thing here.

That's because at 1080p the load on the CPU is higher; the PS5 and Series X don't target 1080p. Second, stop comparing to PC, where problems are solved by brute-forcing them rather than clever programming. We have some impressively big games like RDR2 that were made to run on a damn Jaguar at 1.6GHz, with not even 8 cores because some were reserved. We're talking about the latest-gen Ryzen at 3.5GHz with 8 cores and 16 threads, man. I don't see any problem on the CPU side at all; hell, Sony uses hardware to offload not only the incredibly fast SSD but also audio.

It's obvious you don't know shit about this. That GPU will run at 2.23GHz; when the CPU requires a little more power, the GPU lowers its power in the worst case by 10%, but that doesn't mean 10% less performance, because the hit to the 2.23GHz frequency is said to be around 3% or something like that.

Nothing that can't be handled by some adjustment to a detail on a mountain far away from you. Not only that, the SSD on Sony's side may save the GPU some time as well. The GPU is always rendering a certain cone of vision, which is why whenever you turn, that section is already there: if you rotate left, things to the right behind you stop being rendered while the GPU continues to render to the left. This is why you sometimes get pop-in when your storage isn't fast enough. The one in the PS5 can fill those 16GB of RAM in 2 seconds, that's how fast that thing is, so rather than having the PS5 GPU working overtime to render a wide cone, that cone can be reduced because the SSD is feeding data much, much faster.

Now that is some horseshit. What do you people get out of inventing this kind of thing, really?

The same shit happened when the Xbox One was weaker and you people couldn't stop inventing magic APIs and cloud power and every other thing you could find to claim the gap would be small or nonexistent; the worst offenders claimed the Xbox One would even beat the PS4.

The PS5 was designed to run at 2.23GHz; it wasn't a last-minute upclock. You can't just OC a GPU to a speed that no other commercially released console or PC part has shipped at and expect it to work.

In fact, the first PS5 SoC leak claimed 2.0GHz in early testing. The cooling solution they have must be out of this world and far superior to the Xbox Series X's.

"and no their custom I/O isn't going to handle that on its own even remotely, unless they've got another 8-core Ryzen in that box."

I find this especially funny; it's like you people don't want to read. The information is out there, so why wouldn't you read it? Are you people afraid that in your mind the Xbox will lose value if it doesn't obliterate the PS5?

"By the way, in terms of performance, that custom decompressor equates to nine of our Zen 2 cores, that's what it would take to decompress the Kraken stream with a conventional CPU," Cerny reveals.

A dedicated DMA controller (equivalent to one or two Zen 2 cores in performance terms) directs data to where it needs to be, while two dedicated, custom processors handle I/O and memory mapping. On top of that, coherency engines operate as housekeepers of sorts.

It's funny that you say no I/O is going to handle that unless another 8-core Ryzen is added; well, Sony has hardware to offload those jobs equivalent to 9 Zen 2 cores.

So how do you not know this? In fact, the DF article explains it very well.

Yeah, Sony isn't good at anything; MS will destroy Sony with the mighty 12TF machine that will render games in 16K at 240fps while the PS5 drops its GPU power to 1.3TF and runs games at 720p 25fps.

I have to save all these threads; they'll be gold in a few months.

1) Ah, you still believe in clever programming, the good old "coding to the metal".

You do realize those boxes run APIs, right? Why do you think Xbox can port its games over to the Xbox Series X on day one without effort? Because it runs a Windows core. Why do you think 3.5GB of RAM and 1-2 cores on the PS4 are reserved? Because it runs an API in the background with all kinds of checks, etc. Ever heard of Vulkan / DX12, which cut overhead on PC down to a minimum?

This ain't the PS2/PS3 era anymore, mate; get with the times.

2) An i5 760 scores the same, actually lower, than the PS4's CPU in R15; you can see here what that results in.


Console "coding to the metal" has been debunked for ages now, because we are not living in the PS2/PS3 era anymore. So there you go again.

3) About your 1080p rant: if a game bottlenecks because the CPU is too slow, you can see that easily by lowering the resolution, since the GPU stops being the bottleneck at that point. That's why you look at lower resolutions to see what the CPU can do.

If at 1080p the CPU isn't pushing 60 fps in a game, it will not hit it at 4K even if you've got a 3000-times-faster GPU.

4) If you think CPU clock frequency doesn't matter, you should probably tell all those Ryzen and Intel folks to stop overclocking their CPUs, or buying newer Ryzens, because hey man, high clocks mean nothing.

Even if only 1 core gets used on that PS5 and 7 sit idle, clock frequency matters A TON to avoid bottlenecking games that require CPU performance, which is practically every game, especially new-generation open-world games. That's why Xbox talks about higher clocks without SMT. And with Sony's faster SSD, that CPU will be ramming all its 8 cores in loading areas.

Also, in multicore coding the first core is always the most taxed no matter what you do. It's a limitation we have, and it won't go away any time soon. So the more data across all the cores, the more high clocks will benefit.

I've got a 9900K at 5.1GHz, and that thing bottlenecks hard in 50% of the games I play even while it's only 50% used. Imagine that. That CPU will be pegged at its max clocks in any multiplatform open-world game that cares about 60 fps, and if that happens the GPU will dive under its 10.3 TFLOPS, as Cerny explained. The Xbox? Nope, stays exactly where it's at.

5) Your 3% GPU performance figure is also interesting. Why go through the trouble of slamming that GPU core to 2.2+GHz and then letting it drop, when it only makes a 3% performance difference? Why bother? Why even bother with energy saving when you clearly don't give a shit about energy consumption at 2.2+GHz? Why not just say "well, we lock it at 2.2GHz, guys" and be done?

That's why Cerny got laughed at and ridiculed on the internet until Sony broke NDA with its first-party cheerleaders, who had to back him up. Hell, even his Tempest story about 100 channels that nothing else features is exactly how Atmos works, for example. Nice story, bro; they could just have opted for that.

6) Pop-in on your screen happens when data can't be loaded in fast enough. This can have all kinds of causes: a memory bottleneck, the CPU not holding up, the hard drive being too slow at feeding information. So yes, Cerny is right that an SSD improves this dramatically over a normal HDD; everybody with a PC could have told you that over the last 5 years.

An SSD is just a source that feeds the other components with information; that information still needs to be rendered and flushed out, and an SSD will make no impact on that, because an SSD cannot render anything and cannot function as RAM even remotely. It feeds the RAM.

Your whole fill-the-RAM-in-1-2-seconds idea is flawed: if you turn to the left and then to the right, how is an SSD going to supply your RAM with information when it takes 2 seconds to provide it? The pop-in would be far more massive than ever before. This is why you have RAM with insane access speed, which no SSD, whether Microsoft's, Sony's, or a PC's, will ever be able to reach.

7) An SSD is needed; it's not optional for either of these consoles. It's absolutely needed with next-generation hardware. The SATA 2 controller on the PS4 was pure crime and held even that box back.

The only thing SSDs will do, exactly as Cerny says, is save disc space and save memory space to some degree. This will not be remotely different between Xbox and PS5, because raw or compressed throughput is not the limitation; access speed is.

8) I never claimed the Xbox One would beat anything; I never liked its hardware and always considered it a complete dud when it launched. There was absolutely no reason for console users not to buy a PS4.

9) You clearly don't know much about how GPUs are designed and pushed to market.

Overclocking GPUs to the brink of death because they don't compete well with competitor products is something AMD and even Nvidia excel at.

And for consoles that's no different; even the original Xbox One got a clock boost on its GPU after the box's bad reception, even though everything was already set in stone.

A chip gets baked and tested, and they give you a range of energy consumption, heat output, yields, clock speeds, etc., and you choose. It's clear Sony wanted more out of its chip at a later date, or else they would have gotten another chip like Microsoft did; they decided it wasn't holding up, much like Microsoft with the Xbox One, and clocked it higher at the cost of energy and heat problems. Otherwise it wouldn't be variable, because there would be no reason to clock it anywhere near that.

So yeah, you are completely wrong on this part.

10) As for his chip, I want to see that thing in action, because I heavily doubt he found a way to create a chip that does just that. It's probably the same stuff Microsoft has on its own platform (I forget the name for a second), which only handles an extremely limited selection of data, and that's what the Xbox Series X will also feature.

Interesting to see how Sony is going to get around this; however, it doesn't change the limitations an SSD has compared to RAM.
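The low-resolution CPU test described in point 3 can be put in code. A toy frame-time model with hypothetical numbers: each frame costs the slower of the CPU's work and the GPU's work, and only the GPU term scales with resolution:

```python
def fps(cpu_ms: float, gpu_ms_1080p: float, pixel_scale: float) -> float:
    # GPU cost grows ~linearly with pixel count; CPU cost does not.
    gpu_ms = gpu_ms_1080p * pixel_scale
    return 1000 / max(cpu_ms, gpu_ms)

cpu_ms = 20.0  # hypothetical game-logic cost => a CPU cap of 50 fps
print(fps(cpu_ms, 8.0, 1.0))  # 1080p: 50.0 fps (CPU-bound)
print(fps(cpu_ms, 8.0, 4.0))  # 4K:    31.25 fps (GPU-bound)
```

Dropping resolution exposes the CPU ceiling; a faster GPU raises the 4K number, but only up to that same 50 fps cap.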

#310 Sagemode87
Member since 2013 • 1455 Posts

Lmao at Xbox fanboys thinking there's going to be a difference. When launching at the same time for the same price, Sony owns the market. Xbox is a kitten compared to Playstation. Devs will prioritize Playstation and won't bother upsetting the larger fanbase. There's not much extra you can do with a 16 percent power increase with the way games will be looking next gen.

#311 cainetao11
Member since 2006 • 37149 Posts

@slimdogmilionar said:

This gen is starting out bad for Sony fanboys, not only did they lose the power war but Sony also admitted defeat and switched PSN over to MS servers.

#312 cainetao11
Member since 2006 • 37149 Posts

@getyeryayasout said:

System Wars, not fanboy wars. Mods please lock for calling out System Wars' king.

Nope. Guy has made a point of equating the consoles, and they ain't equal. HAHAHAHAHAHHAAHAHAHHAHAHAHHAHAHAHAHHAHAHAHAHHAHAHAHHAHA

#313 getyeryayasout
Member since 2005 • 13089 Posts

@cainetao11: Boom, reported for excessive HAHA's.

#314  Edited By tormentos
Member since 2003 • 30905 Posts

@Gatygun said:

1) Ah, you still believe in clever programming, the good old "coding to the metal".

You do realize those boxes run APIs, right? Why do you think Xbox can port its games over to the Xbox Series X on day one without effort? Because it runs a Windows core. Why do you think 3.5GB of RAM and 1-2 cores on the PS4 are reserved? Because it runs an API in the background with all kinds of checks, etc. Ever heard of Vulkan / DX12, which cut overhead on PC down to a minimum?

This ain't the PS2/PS3 era anymore, mate; get with the times.

2) An i5 760 scores the same, actually lower, than the PS4's CPU in R15; you can see here what that results in.

Console "coding to the metal" has been debunked for ages now, because we are not living in the PS2/PS3 era anymore. So there you go again.

OK, this is the only thing I will address, because from the start your post shows you know shit about this.

Consoles don't run on APIs, they run on an OS. The Xbox uses a Windows variant; the PS4 uses FreeBSD. In the Xbox's case those games run in a virtual machine, which is why it's easier for MS; on Sony's side it's more complicated, because Sony used low-level tools to make those games and there is less abstraction.

You can't take a 1.6GHz Jaguar on PC and run the games the PS4 runs; you simply can't.

APIs are tools to make games, not an OS. DX12 is MS's API, GNM is Sony's API; the consoles don't run on those, they are tools to make games.

And an i5 is a much better CPU than a Jaguar.

#315  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@Gatygun: LOL. So what you're saying is that an 8-core Zen 2 is now a bottleneck, or whatever that means, when a console has it? Holy phook, that's news to me (I mean, do people even know what a bottleneck is?). The worst case in gaming for me was my Ryzen 7 3700X (before I swapped it out for a Ryzen 9 3950X) hitting 20% usage at worst on unoptimized PC games (where overhead is much higher). The only time a CPU gets pegged hard is when the FPS is uncapped beyond 60 (and the GPU is not the bottleneck), or when running video encoding, compilation software, etc.

#316 I_P_Daily
Member since 2015 • 16140 Posts

@Sagemode87 said:

Lmao at Xbox fanboys thinking there's going to be a difference. When launching at the same time for the same price, Sony owns the market. Xbox is a kitten compared to Playstation. Devs will prioritize Playstation and won't bother upsetting the larger fanbase. There's not much extra you can do with a 16 percent power increase with the way games will be looking next gen.

And some of you cows said that R* would gimp the X version of RDR2 because Sony had marketing rights. How well did that work out for you? Something like this, lol.

Oh, and let's not forget all the other games that took advantage of the X over the Pro, even though Sony has had the biggest userbase this gen.

Better think up a new excuse; this one has been debunked LOL.

#317 ronvalencia
Member since 2008 • 29612 Posts

@Sagemode87 said:

Lmao at Xbox fanboys thinking there's going to be a difference. When launching at the same time for the same price, Sony owns the market. Xbox is a kitten compared to Playstation. Devs will prioritize Playstation and won't bother upsetting the larger fanbase. There's not much extra you can do with a 16 percent power increase with the way games will be looking next gen.

FYI, DirectX 12 Ultimate API will unify the common library on both Windows 10 version 2004 computers and on the Xbox Series X.

Look in Windows 10 version 1909's device manager for "Xvd". It's Xbox Virtual Disk already running on Windows 10 PC.

Windows 10 version 1909's xvd runs the same XBO storage infrastructure.

#318 Pedro
Member since 2002 • 39100 Posts

@tormentos said:

in the case of the xbox those games run on a virtual machine which is why it is easier for MS,

Where are you getting this information from? Games running on a virtual machine? WHAT?

#319 Pedro
Member since 2002 • 39100 Posts
@Sagemode87 said:

Lmao at Xbox fanboys thinking there's going to be a difference. When launching at the same time for the same price, Sony owns the market. Xbox is a kitten compared to Playstation. Devs will prioritize Playstation and won't bother upsetting the larger fanbase. There's not much extra you can do with a 16 percent power increase with the way games will be looking next gen.

You mean all those games that look and run better on the Xbox One X, which has the smallest userbase, occurred by chance?

#320  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@Pedro: IIRC, "Xbox" isn't actually the physical hardware itself but a game OS/VM: an abstraction layer that runs above a very stripped-down Windows OS/hypervisor (which is virtually invisible to the end user), which in turn sits on top of the optimized low-level drivers in the so-called "Xbox" console. That has added CPU overhead on the Xbox consoles (hence "the power of DX12" or PowahOfDaCloud). MS's vision for the Xbox was to push Windows across multimedia platforms from the very get-go, since the Sega Dreamcast (it was the spiritual successor to the DC, whereas the PlayStation was the spiritual successor to the SNES, not the N64). This is why MS was able to transition from the XBO to the X1X almost seamlessly despite a different hardware configuration and a different memory architecture.

#321  Edited By Pedro
Member since 2002 • 39100 Posts

@Uruz7laevatein said:

@Pedro: IIRC, "Xbox" isn't actually the physical hardware itself but a game OS/VM: an abstraction layer that runs above a very stripped-down Windows OS/hypervisor, which in turn sits on top of the optimized low-level drivers. That has added CPU overhead on the Xbox consoles (hence "the power of DX12" or PowahOfDaCloud). MS's vision for the Xbox was to push Windows across multimedia platforms from the very get-go, since the Sega Dreamcast (it was the spiritual successor to the DC, whereas the PlayStation was the spiritual successor to the SNES, not the N64).

What? You are conflating multiple varying ideas into one big mess of, I am not even sure what.

@Gatygun said: ...And with Sony's faster SSD, that CPU will be ramming all its 8 cores in loading areas.

That is not correct. Data can be loaded into memory without the CPU.

#322 ronvalencia
Member since 2008 • 29612 Posts

@Pedro said:
@Uruz7laevatein said:

@Pedro: IIRC, "Xbox" isn't actually the physical hardware itself but a game OS/VM: an abstraction layer that runs above a very stripped-down Windows OS/hypervisor, which in turn sits on top of the optimized low-level drivers. That has added CPU overhead on the Xbox consoles (hence "the power of DX12" or PowahOfDaCloud). MS's vision for the Xbox was to push Windows across multimedia platforms from the very get-go, since the Sega Dreamcast (it was the spiritual successor to the DC, whereas the PlayStation was the spiritual successor to the SNES, not the N64).

What? You are conflating multiple varying ideas into one big mess of, I am not even sure what.

@Gatygun said: ...And with Sony's faster SSD, that CPU will be ramming all its 8 cores in loading areas.

That is not correct. Data can be loaded into memory without the CPU.

@Gatygun

It's called DMA... sigh. CPU-governed data loading is PIO mode... remember PIO mode?
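The DMA-vs-PIO distinction being made here can be sketched. In PIO the CPU executes every word of the transfer itself; with DMA a controller moves the whole block while the CPU is free until a completion interrupt. A toy model with made-up classes, not real driver code:

```python
class Device:
    """Toy storage device exposing word-at-a-time reads plus its raw buffer."""
    def __init__(self, data):
        self.data = data
        self.pos = 0

    def read_word(self):
        # PIO-style access: the CPU pulls one word per call.
        word = self.data[self.pos]
        self.pos += 1
        return word

def pio_read(device, n):
    # CPU is busy for the entire transfer: one loop iteration per word.
    return [device.read_word() for _ in range(n)]

def dma_read(device, n):
    # DMA: the controller copies the whole block on its own; the CPU just
    # programs the transfer and gets interrupted when it's done
    # (modeled here as a single bulk copy).
    return list(device.data[:n])

print(pio_read(Device(list(range(8))), 8))  # CPU-driven copy
print(dma_read(Device(list(range(8))), 8))  # controller-driven copy
```

Both paths yield the same data; the difference is which component spends the cycles moving it, which is the point being made about the PS5's dedicated I/O hardware.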

#323  Edited By ronvalencia
Member since 2008 • 29612 Posts
@Uruz7laevatein said:

@Pedro: iirc "Xbox" isn't actually the physical hardware itself but an GameOS/VM that runs an abstraction layer that runs above the main very stripped down Windows OS/Hyper-visor (which is virtually invisible to the end user) that is on-top of all the optimized low level drivers on the so called "Xbox" console. Which has provided CPU overhead on the so called Xbox consoles (hence the power of DX12 or PowahOfDaCloud). MS vision of the Xbox was to push Windows across multimedia platforms from the very getgo since the Sega Dreamcast (it was the spiritual successor to the DC, whereas the PlayStation was the spiritual successor of the SNES(and not the N64)). This is why MS is able to transition from the XBO to the X1X despite different hardware configuration and different memory architectures almost seamlessly.

Nope, the Xbox Windows NT vision started in the early 1990s, after the MSX3 effort ended around 1993.

From https://en.wikipedia.org/wiki/Amiga_Hombre_chipset

The original plan for the Hombre-based computer system was to have Windows NT compatibility, with native AmigaOS recompiled for the new big-endian CPU to run legacy 68k Amiga software through emulation. Commodore chose the PA-7150 microprocessor over the MIPS R3000 microprocessor and first-generation embedded PowerPC microprocessors, mainly because these low-cost microprocessors were unqualified to run Windows NT. This wasn't the case for the 64-bit MIPS R4200, but it was rejected for its high price at the time.

----

Commodore selected the HP PA-7150 RISC CPU since it was qualified to run Windows NT, LOL. If Windows NT wasn't a consideration, the MIPS R3000 or a first-generation embedded PowerPC would have done the job.

Commodore's key Amiga engineers went on to create 3DO, which is a PowerPC-based system. Commodore almost selected the MIPS R3000, the same CPU family used in Sony's original PlayStation.

Thanks to US "greenie" EPA, Commodore went bust.

Sony was involved with canceled MSX3 for 1990 release. Read https://en.wikipedia.org/wiki/MSX

The MSX standard was created by MS and ASCII; MSX machines ran MSX-DOS and MSX BASIC.

After MSX, Sony drifted to Nintendo and later created PlayStation.

MS keeps losing hardware platform partners, hence Xbox was created.

MSX's Z80 CPU is a fork of the Intel 8080, LOL, and Sony's game console ended up selecting x86 (8086 lineage), a spiritual successor to the Intel 8080.

#324  Edited By ronvalencia
Member since 2008 • 29612 Posts
@Pedro said:
@tormentos said:

in the case of the xbox those games run on a virtual machine which is why it is easier for MS,

Where are you getting this information from? Games running on a virtual machine? WHAT?

Read https://en.wikipedia.org/wiki/Xbox_One_system_software

The Xbox One console runs on an operating system that includes the Windows 10 core, although initially it included the Windows 8 core at the Xbox One's release. The Xbox One system software contains a heavily modified Hyper-V hypervisor (known as NanoVisor) as its host OS and two partitions. One of the partitions, the "Exclusive" partition is a custom virtual machine (VM) for games; the other partition, the "Shared" partition is a custom VM for running multiple apps. The Shared Partition contained the Windows 8 Core at launch until November 2015, where via a system update known as the "New Xbox One Experience", it was upgraded to the Windows 10 Core. With Windows 10, Universal Windows Platform apps became available on Xbox One. According to the current head of Microsoft's Gaming division, Phil Spencer, "The importance of entertainment and games to the Windows ecosystem has become really prevalent to the company".[10] The program that Microsoft launched allows developers to build a single app that can run on a wide variety of devices, including personal computers and Xbox One video game consoles.[11] According to Polygon, Microsoft removed the distinction between Xbox One and Windows PC.[10]

----------------------------------------------

"DirectX12 Ultimate" is the primary API for Xbox Series X and Windows 10 version 2004.

XBOX/X1X DirectX12.X and PC DirectX12.0/12.1 differences have been removed.

With a desktop-class CPU, Xbox Series X is effectively a locked-down Windows 10 gaming PC.

DirectX12's DirectML meta-commands are the official API for getting near the metal on both PC and XSX.

#325 Xabiss
Member since 2012 • 3493 Posts

@Pedro said:
@tormentos said:

in the case of the xbox those games run on a virtual machine which is why it is easier for MS,

Where are you getting this information from? Games running on a virtual machine? WHAT?

Well, he didn't explain it very well, as usual, but here is a well-written article about how it works:

https://www.giantbomb.com/forums/xbox-one-8450/why-does-the-xbox-one-have-a-hypervisor-and-what-i-1437760/

#326  Edited By ronvalencia
Member since 2008 • 29612 Posts

@davillain- said:

Cerny did say TF doesn't matter, it's how you use it ;)

But overall, the specs aren't bad, to say the least, the price and what launch exclusive games are still the most important part. Honestly again, I am just happy that they announced that the storage will still be user-upgradable and will not be proprietary. That's a big deal to the consumers.

Reminder, AMD claims NAVI is designed with scalability.

AMD vs Cerny?

IF AMD can't scale NAVI's raster power against NVIDIA's multi-level Turing SKUs, then AMD is dead as a GPU manufacturer!

AMD's RTG can't afford another Vega raster scalability debacle. AMD's CEO Lisa Su should be careful with her statements about being competitive against the competition. If she fails, it's grounds for legal action from AMD's shareholders.

#327 ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@Gatygun said:

1) Ah u still believe in clever programming the good old "coding to the metal"

U do realize those boxes run api's right? why do you think xbox can port there games over without effort towards the xbox series x on day one? because it runs a windows core. Why do you think u think 3,5gb of ram on the PS4 is reserved and 1-2 cores? because it runs a api on the background with all kinds of checks etc etc. Ever heard about vulkan / dx12? that nukes overhead on PC down to a minimum?

This aint ps2/ps3 area anymore mate, get in contact with the times.

2) i5 760 scores the same score actually lower than the PS4 cpu r15 u can see here what that results into.

Console "coding to the metal" has been debunked now for ages because we are not living in the PS2/PS3 area anymore. So there you go again.

Ok, this is the only thing i will address, because your post from the start shows you KNOW shit about this.

Consoles don't run on APIs, they run on an OS: the xbox uses a windows variant, the PS4 uses FreeBSD. In the case of the xbox those games run on a virtual machine, which is why it is easier for MS; on sony's side it is more complicated, because sony used low-level tools to make those games and there is less abstraction.

You can't run a 1.6GHz Jaguar on PC and run the games the PS4 runs, you simply can't.

APIs are tools to make games, not an OS. DX12 is MS's API, GNM is sony's API; the consoles don't run on those, those are tools to make games.

And an i5 is a much better CPU than a Jaguar.

Running Linux with Steam on PS4 is slow regardless of the recognized Radeon "Liverpool" GPU driver. Not desirable hardware for mass jailbreaking.

PS5 hardware specs can run a desktop PC Linux OS with Steam with ease, i.e. it's equivalent to a Ryzen 7 3700X at 3.5GHz, an RX-6700-class GPU (the RDNA 2 replacement for the RX 5700), 16GB of memory and a PCI-E 4.0 SSD.

Both XSX and PS5 are desirable hardware for jailbreaking.

-----------

RDNA 2 GPUs have the following features from DirectX12 Ultimate (aka DirectX 12_2), and Sony may have their own versions (Sony is less transparent with their public PS5 plans):

  • Ray Tracing (DXR 1.1), a more efficient BVH RT API. RDNA 2's RT cores have the BVH search tree + intersection test baked in. Ignore AMD's old RT patent; RDNA 2's RT cores run concurrently with shaders.

  • Sampler Feedback, which recycles shaded 3D surfaces for the next frame. This is shader resource conservation and it's better than checkerboard.
  • Variable Rate Shading (Tier 2), which changes shading resolution while keeping geometry resolution native. This is shader resource conservation and it's better than checkerboard.
  • Mesh Shaders, which improve geometry processing. Geometry and shader resource conservation.

Sampler Feedback and Variable Rate Shading (Tier 2) are very useful hardware features for conserving shader resources.

#328 tormentos
Member since 2003 • 30905 Posts

@Pedro said:
@tormentos said:

in the case of the xbox those games run on a virtual machine which is why it is easier for MS,

Where are you getting this information from? Games running on a virtual machine? WHAT?

This is old, man; i was going to quote the data but i see other posters already did.

@i_p_daily said:
@Sagemode87 said:

Lmao at Xbox fanboys thinking there's going to be a difference. When launching at the same time for the same price, Sony owns the market. Xbox is a kitten compared to Playstation. Devs will prioritize Playstation and won't bother upsetting the larger fanbase. There's not much extra you can do with a 16 percent power increase with the way games will be looking next gen.

And some of you cows said that R* would gimp the X version of RDR2 because Sony had marketing rights, how well did that work out for you? something like this lol.

Oh and lets forget all the other games that took advantage of the X over the PRO, even though Sony has had the biggest userbase this gen.

Better think up a new excuse, this one has been debunked LOL.

I think you mixed up the images; the first one is the PS4 and the second is the xbox one, which is by far the worse version and the version that the GREAT GREAT majority of xbox owners played.

Just because the xbox one X exists doesn't erase that the majority of xbox one owners, by a LONG shot, are S and original model owners, not X model.

Fact is, most xbox one owners get an inferior experience; those who get a better one by far on the PS side outnumber those on the xbox side.

#329 Pedro  Online
Member since 2002 • 39100 Posts

@Uruz7laevatein: Take a look at the responses to my take on your explanation. That is how you communicate ideas.

#330  Edited By 04dcarraher
Member since 2004 • 23422 Posts

@tormentos said:
@Gatygun said:

1) Ah u still believe in clever programming the good old "coding to the metal"

U do realize those boxes run api's right? why do you think xbox can port there games over without effort towards the xbox series x on day one? because it runs a windows core. Why do you think u think 3,5gb of ram on the PS4 is reserved and 1-2 cores? because it runs a api on the background with all kinds of checks etc etc. Ever heard about vulkan / dx12? that nukes overhead on PC down to a minimum?

This aint ps2/ps3 area anymore mate, get in contact with the times.

2) i5 760 scores the same score actually lower than the PS4 cpu r15 u can see here what that results into.

Console "coding to the metal" has been debunked now for ages because we are not living in the PS2/PS3 area anymore. So there you go again.

Ok this is the only thing i will address because your post from the start show you KNOW shit about this.

Consoles don't run on API they run on OS,the xbox uses a windows varian,the PS4 uses Freebsd,in the case of the xbox those games run on a virtual machine which is why it is easier for MS,on sony side is more complicated because sony used low level tools to make those games,and there is less abstraction.

You can't run a 1.6ghz jaguar on PC and run the games the PS4 runs you simply can't.

API are tools to make games not OS,DX12 is MS API,GNM is sony's api the console don't run on those,those are tools to make games.

And a i5 is much better CPU than a jaguar.

Consoles do run APIs, and they're not just "tools" to make games.

They're run through the OS. They allow programs and software to interact with one another and share rules, settings and specs. Those APIs interact with libraries in the operating system to specify how software should interact with the hardware. Each game uses a different engine and a different set of rules to ask the OS to relay what the GPU/CPU should do. Because of the differences in devs' coding abilities, different engines, etc., you need some universal way to tell the GPU what to do, and some way to coordinate all that with what the CPU is doing.
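That "universal way" idea can be sketched in a few lines (hypothetical names, not any real graphics API): the game codes against one stable interface, and the OS/driver supplies the hardware-specific backend.

```python
class GraphicsAPI:
    """The stable contract a game codes against (think DX12/GNM/Vulkan)."""
    def draw(self, mesh):
        raise NotImplementedError

class VendorADriver(GraphicsAPI):
    """One hardware vendor's backend, chosen by the OS/driver stack."""
    def draw(self, mesh):
        return f"vendor-A command stream for {mesh}"

class VendorBDriver(GraphicsAPI):
    """A different vendor's backend behind the same interface."""
    def draw(self, mesh):
        return f"vendor-B command stream for {mesh}"

def render_frame(api):
    # The game never talks to the GPU directly; it only calls the API,
    # and whichever driver is installed turns that into GPU commands.
    return api.draw("triangle")
```

The game's `render_frame` code is identical on both "GPUs"; only the backend underneath changes, which is the whole point of an API layer.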

#331 BlackShirt20
Member since 2005 • 1642 Posts

@tormentos: “ Just because the xbox one X exist that doesn't erase that the majority of xbox one owners by a LONG shot are S and original model owners no X model.

Fact is most xbox one owners get an inferior experience those who get a better one by far on PS side outnumber those on the xbox side.”

This is true. And now, you along with all future PS5 owners will receive an inferior gaming experience all next generation. That is also a fact. Amirite?

#332  Edited By Gatygun
Member since 2010 • 1968 Posts

@Uruz7laevatein said:

@Gatygun: LOL , So what you are saying is an 8-core Zen2 is a now a bottleneck or whatever that means when a console has it? Holy phook, that's news to me ( I mean do people even know what a bottleneck is). The worst case scenario in gaming for me was when my Ryzen-3700X (before I swapped it out for an Ryzen 9-3950X)hits 20% usage at worse on unoptimized PC games(where overhead is much higher). The only time a CPU pegged hard is when the FPS is uncapped beyond 60 FPS (when GPU is not a bottleneck), or running video-encoding/compilation-software/etc.

Look, if you don't know how technology works, pls don't react.

Every CPU is bottlenecked and always will be, because CPUs have fixed performance. Go play Anno 1800 on your 3700X and 3950X and compare them: your fps stays exactly the same at the same clocks.

Now clock that 3950X below the 3700X and watch your performance decrease. Why? Bottleneck. The game relies on high per-core performance, and in most multi-core games the first two cores do the bulk of the work (the first core especially). Total CPU usage doesn't mean much as a result: double the cores when only 4 cores are requested and you get half the usage, but the same bottleneck applies, because frequency matters a lot.

The CPU in the PS5 can sit at 55% usage and be bottlenecked without effort while the other 45% does nothing. This is why people on the PC platform spend tons of money to get the highest clocks they can on the CPU front.

Also, why do you think they clocked those cores so high to start with? Yeah, that's why.

So before you start to quote me and laugh, actually know your material.
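The frequency-versus-core-count point can be sketched with a toy Amdahl's-law model (illustrative numbers only, not measurements from any real game): a frame has a serial chunk that only one core can do, plus a parallel chunk, and both scale inversely with clock speed.

```python
def frame_time_ms(serial_ms, parallel_ms, cores, clock_ghz, ref_clock_ghz=3.5):
    """Toy frame-time model: a serial chunk plus a perfectly parallel
    chunk, both scaling inversely with clock speed (Amdahl's law)."""
    return (serial_ms + parallel_ms / cores) * (ref_clock_ghz / clock_ghz)

t_8  = frame_time_ms(10.0, 8.0, cores=8,  clock_ghz=3.5)      # 11.0 ms
t_16 = frame_time_ms(10.0, 8.0, cores=16, clock_ghz=3.5)      # 10.5 ms
t_16_slow = frame_time_ms(10.0, 8.0, cores=16, clock_ghz=3.0) # 12.25 ms
```

With these made-up numbers, doubling the cores saves only half a millisecond because the serial chunk sets the floor, while a ~14% downclock of the 16-core chip costs more than its extra cores ever gained.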

#333  Edited By Gatygun
Member since 2010 • 1968 Posts
@tormentos said:
@Gatygun said:

1) Ah u still believe in clever programming the good old "coding to the metal"

U do realize those boxes run api's right? why do you think xbox can port there games over without effort towards the xbox series x on day one? because it runs a windows core. Why do you think u think 3,5gb of ram on the PS4 is reserved and 1-2 cores? because it runs a api on the background with all kinds of checks etc etc. Ever heard about vulkan / dx12? that nukes overhead on PC down to a minimum?

This aint ps2/ps3 area anymore mate, get in contact with the times.

2) i5 760 scores the same score actually lower than the PS4 cpu r15 u can see here what that results into.

Console "coding to the metal" has been debunked now for ages because we are not living in the PS2/PS3 area anymore. So there you go again.

Ok this is the only thing i will address because your post from the start show you KNOW shit about this.

Consoles don't run on API they run on OS,the xbox uses a windows varian,the PS4 uses Freebsd,in the case of the xbox those games run on a virtual machine which is why it is easier for MS,on sony side is more complicated because sony used low level tools to make those games,and there is less abstraction.

You can't run a 1.6ghz jaguar on PC and run the games the PS4 runs you simply can't.

API are tools to make games not OS,DX12 is MS API,GNM is sony's api the console don't run on those,those are tools to make games.

And a i5 is much better CPU than a jaguar.

I dunno why i said API there, to be honest; it was probably late, as it makes no sense in that line at all. It's clear i meant OS and then moved to API again for whatever reason. Probably should have reread it before posting. Oh well, good point there tho.

Anyway, my point clearly was that it has a layer of software running at all times, versus coding to the metal on that box, which hasn't been a thing for a while now. The API part after it was meant to show that overhead on PC is not much of an issue anymore.

About your other point: it wouldn't make much sense to use a Jaguar in a PC, because PC games are not optimized for a Jaguar-like CPU solution. High clocks and few cores is what PC uses, versus low clocks and many cores on the PS4. Games are built for that, and with next gen exactly that is going to happen again.

However, if you want to compare, you look at the performance of both CPUs or architectures, which results in exactly what you would expect: the PS4 doesn't have the edge, and even if it did, it would be minor to the point of non-existent. That i5 760 is basically PS4 Pro CPU performance; it seems the Pro has a faster CPU than the base model, i guess, never really looked at it or cared to check.

Anyway, even after downclocking and sacrificing more performance to account for the one reserved core, you will end up around the same performance.

Here's the comparison.

i5 760 4/4 core scores 343 points cpu performance

PS4 pro 8/8 core scores 343 points cpu performance

Seems legit to me.

Anyway, this all goes way over the point i wanted to make, which was simply that coding to the metal isn't a thing anymore on those consoles, and CPU-performance-wise the PS4 really isn't doing magic.

#334 I_P_Daily
Member since 2015 • 16140 Posts

@tormentos said:
@Pedro said:
@tormentos said:

in the case of the xbox those games run on a virtual machine which is why it is easier for MS,

Where are you getting this information from? Games running on a virtual machine? WHAT?

This is old man i was going to quote the data but i see other posters already did.

@i_p_daily said:
@Sagemode87 said:

Lmao at Xbox fanboys thinking there's going to be a difference. When launching at the same time for the same price, Sony owns the market. Xbox is a kitten compared to Playstation. Devs will prioritize Playstation and won't bother upsetting the larger fanbase. There's not much extra you can do with a 16 percent power increase with the way games will be looking next gen.

And some of you cows said that R* would gimp the X version of RDR2 because Sony had marketing rights, how well did that work out for you? something like this lol.

Oh and lets forget all the other games that took advantage of the X over the PRO, even though Sony has had the biggest userbase this gen.

Better think up a new excuse, this one has been debunked LOL.

I think you mixed the images,the first one is the PS4 and the second is the xbox one which is the worse by far version and the version which the GREAT GREAT majority of xbox owners played.

Just because the xbox one X exist that doesn't erase that the majority of xbox one owners by a LONG shot are S and original model owners no X model.

Fact is most xbox one owners get an inferior experience those who get a better one by far on PS side outnumber those on the xbox side.

That's not how this works, tormy, and you know it; your damage control is shitty at best.

I quoted a fellow cow (I've even put it in bold for you to better understand) who said devs would prioritise the PlayStation over the Xbox because they won't want to upset the larger userbase, and RDR2 proved him wrong. Stop trying to change the narrative, and try and argue how my facts are wrong.

#335 ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

@tormentos:

Within 5GB memory storage, PS4's GPU TMU already operates at optimal fetch rates from GDDR5 fool.

For optimal TMU fetch rate for XBO GPU, a tiled resource is a requirement to feed texture data from DDR3 to 32 ESRAM ahead of actual TMU consumption.

PS4 has less need for tile resource workaround when compared XBO due to a larger GDDR5 memory pool.

You're a fuk!ng idiot.

OK.

WTF does that first bold part have to do with ANYTHING I SAID IN THAT POST?

2- That isn't my point, you walking pancake. My point is simple: MS called PRT "Tiled Resources"; they changed the name of the damn feature to make it look like something else. I remember how many here claimed the PS4 didn't have Tiled Resources when in reality it did (it was called PRT), and tried to pass it off as something that would increase the xbox's power.

This is the reason why you quote me: some lemming clowns tried to pass Tiled Resources off as something that would transform the xbox into a beast or change anything.

You tried to imply that Tiled Resources would work better on the xbox one because it has 2 memory pools, when in reality it was a GCN feature on AMD GPUs, which have only 1 memory pool, not 2.

I called you on it.

So tell me, Rondementia, how were Tiled Resources better on xbox? How did they change anything?

The few games that used the feature were superior on PS4.

My PRT argument didn't remove PS4's PRT features. LOL

To maximize TMU fetch rates, XBO needs to use both 32 MB eSRAM and PRT, but this workaround will NOT solve CU's ALU bound issue.

This is why I used the W5000 example to simulate a GCN with ~1.3 TFLOPS and the 7850's 2GB ~176GB/s memory bandwidth, which fakes the ideal PRT + 32 MB ESRAM workaround situation. I placed a cap on XBO's ideal expectations.

PS4's 5 GB of fast memory already delivers optimal TMU fetch rates. LOL. A programmer can brute-force workloads on PS4 for longer than on XBO, which needs extensive optimizations nearly from the start.

PRT does NOT exceed the GPU's texture fetch rate potential from its fastest external memory pool.

This is why XBO's game results can vary between 720p and 900p while PS4 hits 1080p, since programmers have different abilities.

I don't subscribe to your extremist PS4 viewpoints. You can't handle a platform participant outside of the XBO vs PS4 debate.

#336 Uruz7laevatein
Member since 2009 • 133 Posts
@Gatygun said:
@Uruz7laevatein said:

@Gatygun: LOL , So what you are saying is an 8-core Zen2 is a now a bottleneck or whatever that means when a console has it? Holy phook, that's news to me ( I mean do people even know what a bottleneck is). The worst case scenario in gaming for me was when my Ryzen-3700X (before I swapped it out for an Ryzen 9-3950X)hits 20% usage at worse on unoptimized PC games(where overhead is much higher). The only time a CPU pegged hard is when the FPS is uncapped beyond 60 FPS (when GPU is not a bottleneck), or running video-encoding/compilation-software/etc.

Look if you don't know how technology works pls don't react.

Every CPU is bottlenecked and every CPU will always be bottlenecked because they have fixed performance. Go play anno 1800 on your 3700x and 3950x and compare them, your fps stays exactly the same as the same clocks.

Now clock that 3950x below the 3700x and see your performance decrease. Why? bottleneck. Because the game relays on high performance on your cores and any multi core game is because mostly the first 2 cores are doing most of the tasks ( first core actually ), the usage of your entire cpu doesn't mean much as result as if you double the cores when only 4 cores are requested u will get half the usage but the same bottleneck applies because frequency matters a lot.

The CPU in the PS5 can be used for 55% and be bottlenecked without effort while 45% is doing nothing. This is why people on PC platform spend tons of money to get highest clocks they can on the CPU front.

Also why do you think they clocked those cores so high to start with? yea that's why.

So before you start to quote me and laugh, actually know your material.

Uh huh huh

Oh noes every CPU is a bottleneck, what in the world will I ever do. Did you know the water is wet and fire burns? Oh noes my 3950X stays at same fps at 10% CPU usage despite being so buttery smooth, oh my bottleneck.

Translation: Oh noes game only uses a few threads and the rest of CPU is idling. Dayum you idiot engineers at AMD/Intel, for designing such poor CPUs that don't run at 10Ghz, I Gatygun with my armchair leggo building expertise exceeds yours.

Translation: Oh noes AMD/Sony/MS you gaiz doesn't have a clue on designing hardware, I Gatygun who assembled leggos is more knowledgeable than you at hardware and compilers engineering cuz I overlock muh 9900K for muh epeens.

Oh noes how dare I quote you, I had a good laugh at such grandeur of delusions.

#337  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Uruz7laevatein said:
@Gatygun said:

Look if you don't know how technology works pls don't react.

Every CPU is bottlenecked and every CPU will always be bottlenecked because they have fixed performance. Go play anno 1800 on your 3700x and 3950x and compare them, your fps stays exactly the same as the same clocks.

Now clock that 3950x below the 3700x and see your performance decrease. Why? bottleneck. Because the game relays on high performance on your cores and any multi core game is because mostly the first 2 cores are doing most of the tasks ( first core actually ), the usage of your entire cpu doesn't mean much as result as if you double the cores when only 4 cores are requested u will get half the usage but the same bottleneck applies because frequency matters a lot.

The CPU in the PS5 can be used for 55% and be bottlenecked without effort while 45% is doing nothing. This is why people on PC platform spend tons of money to get highest clocks they can on the CPU front.

Also why do you think they clocked those cores so high to start with? yea that's why.

So before you start to quote me and laugh, actually know your material.

Uh huh huh

Oh noes every CPU is a bottleneck, what in the world will I ever do. Did you know the water is wet and fire burns? Oh noes my 3950X stays at same fps at 10% CPU usage despite being so buttery smooth, oh my bottleneck.

Translation: Oh noes game only uses a few threads and the rest of CPU is idling. Dayum you idiot engineers at AMD/Intel, for designing such poor CPUs that don't run at 10Ghz, I Gatygun with my armchair leggo building expertise exceeds yours.

Translation: Oh noes AMD/Sony/MS you gaiz doesn't have a clue on designing hardware, I Gatygun who assembled leggos is more knowledgeable than you at hardware and compilers engineering cuz I overlock muh 9900K for muh epeens.

Oh noes how dare I quote you, I had a good laugh at such grandeur of delusions.

Well, Sony's PS5 recycled NAVI 10's 448 GB/s memory bandwidth while AMD added RT cores, and on PC the CPU gets its own memory bandwidth on top, e.g. 60 GB/s from 128-bit DDR4-3800.

Sony delivered the memory-bandwidth-gimped PS4 Pro, hence I'm not surprised by the PS5.

Sony: brain dead selected R7-265's memory bandwidth.

Sony: brain dead selected RX-470's memory bandwidth.

Sony: brain dead selected RX-5700's memory bandwidth.

There's a pattern.

#338  Edited By Uruz7laevatein
Member since 2009 • 133 Posts
@ronvalencia said:

Well, Sony's PS5 recycled NAVI 10's 448 GB/s memory bandwidth while AMD added RT cores and PC CPU memory bandwidth consumers e.g. 60 GB/s from 128bit DDR4-3800.

Sony delivered memory bandwidth gimped PS4 Pro, hence I'm not surprised with PS5.

Sony: brain dead selected R7-265's memory bandwidth.

Sony: brain dead selected RX-470's memory bandwidth.

Sony: brain dead selected RX-5700's memory bandwidth.

There's a pattern.

1. Delivering any higher bandwidth involves expanding the memory bus; it also involves expanding the CUs/Geometry Engines/etc. in order to see a worthwhile improvement in GPU power, all of which adds cost, not to mention requiring a significant uplift in CPU power (only made feasible by Zen/Zen2). Of course, doing so could also fragment the user-base (which goes against a half-gen leap). The main reason for the Pro to exist (aside from VR/HDR) was simply that it was a better ROI to have a mid-gen upgrade on 16nm alongside a slimmed-down base model than to have just a PS4 Slim.

2. Versus what, the "brain smart" selection of a slower 32 MB ESRAM (which took up precious silicon budget in order to permit slower main memory) and slower DDR3? That would have made sense if the Xbone were solely an always-online/DRM Windows multimedia box that used the cloud for everything (which assumes a minimum of Google Fiber to really work), but it performs worse than simple, faster unified GDDR5, which was as fast as it got without breaking the bank at the time (a lot of developers didn't expect it, and some armchair "experts" said it was "logistically impossible for a console to have 8 GB of RAM at the time").

3. Which at the time was the best configuration relative to cost, yields and performance per dollar. Much like number one, it's simply not worth going any higher without improving the rest of the hardware (both the CPU and GPU pipeline). The X1X only had faster memory because it came out nearly 2 years later (that's not much to brag about).

4. Getting higher memory speeds involves expanding the memory bus and using higher-clocked memory, which also involves expanding the CUs, adding more Geometry Engines, and maybe even more ROPs (ideally a tighter ratio between the various hardware blocks on the GPU for a gaming console, to minimize diminishing returns). In an ideal world they could have had a GPU with 54 CUs, double the Geometry Engines and 96 ROPs via 24GB of GDDR6 on a unified 384-bit bus @ 768 GB/s, with the GPU clocked at 2 GHz. I'm pretty sure the engineers thought about these things (given all the "rumored" devkit configurations) and decided it wasn't worth the hassle of making the console $600 minimum (the leap needed to make it "worthwhile" over the current configuration in terms of ROI).
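All the bandwidth figures thrown around in this thread fall out of one standard formula: peak GB/s = (bus width in bytes) x per-pin data rate in Gbps. A quick sanity check (assuming 14 Gbps GDDR6 for the 448 GB/s parts; the 384-bit/16 Gbps combination is the hypothetical one):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bytes moved per transfer
    (bus width / 8) times the per-pin data rate in Gbps."""
    return (bus_width_bits / 8) * data_rate_gbps

# Navi 10 / PS5-class: 256-bit bus with 14 Gbps GDDR6
print(peak_bandwidth_gbs(256, 14))  # 448.0
# Hypothetical 384-bit bus with 16 Gbps GDDR6
print(peak_bandwidth_gbs(384, 16))  # 768.0
```

Widening the bus from 256-bit to 384-bit is what drives the jump from 448 to 768 GB/s, which is exactly why it drags the rest of the bill of materials (more memory chips, bigger memory controllers) up with it.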

#339  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Uruz7laevatein said:
@ronvalencia said:

Well, Sony's PS5 recycled NAVI 10's 448 GB/s memory bandwidth while AMD added RT cores and PC CPU memory bandwidth consumers e.g. 60 GB/s from 128bit DDR4-3800.

Sony delivered memory bandwidth gimped PS4 Pro, hence I'm not surprised with PS5.

Sony: brain dead selected R7-265's memory bandwidth.

Sony: brain dead selected RX-470's memory bandwidth.

Sony: brain dead selected RX-5700's memory bandwidth.

There's a pattern.

1. Delivering any higher bandwidth means widening the memory bus, and it also means expanding the CUs/Geometry Engines/etc. to really see a worthwhile improvement in GPU power, all of which adds cost, not to mention requiring a significant uplift in CPU power (only made feasible by Zen/Zen2). Doing so could also fragment the user-base (which goes against the half-gen leap). The main reason for the Pro to exist (aside from VR/HDR) was simply that it was a better ROI to have a mid-gen upgrade on 16nm alongside a slimmed-down base model than to have just a PS4 Slim.

2. Versus what, the "smart" choice of a slower 32 MB ESRAM (which took up precious silicon budget in order to permit slower main memory) and slower DDR3? That would only have made sense if the Xbone were solely an always-online/DRM Windows multimedia box that used the cloud for everything (which assumes a minimum of Google Fiber to really work), and it performs worse than a simple, faster unified GDDR5 pool that was as fast as it got without breaking the bank at the time (which a lot of developers didn't expect, or which was "logistically impossible for a console to have 8 GB of RAM at the time" according to some armchair "experts").

3. Which at the time was the best configuration for cost, yields, and performance per dollar. Much like point one, it's simply not worth going any higher without improving the rest of the hardware (both the CPU and GPU pipeline). The X1X only had faster memory because it came out nearly two years later (that's not much to brag about).

4. Getting higher memory bandwidth means widening the memory bus and using higher-clocked memory, which in turn means adding more CUs, more Geometry Engines, and maybe even more ROPs (ideally keeping a tight ratio between the various hardware blocks of a gaming-console GPU to minimize diminishing returns). In an ideal world they could have had a GPU with 54 CUs, double the Geometry Engines, and 92 ROPs fed by 24 GB of GDDR6 on a unified 384-bit bus @ 768 GB/s, with the GPU clocked at 2 GHz. I'm pretty sure the engineers thought about these things (given all the "rumored" devkit configurations) and decided it wasn't worth making the console $600 minimum (the leap needed to make it "worthwhile" over the current configuration in terms of ROI).

1. Reminder,

PS4 Pro GPU design has 40 CU.

X1X GPU design has 44 CU, with ROPS connected to a 2MB render cache, hence the decoupled 384-bit bus. Baseline Polaris IP doesn't have the X1X GPU's ROPS-linked 2MB render cache.

RX 5600 XT 36 CU still has 64 ROPS despite a 192-bit bus.

RDNA ROPS (aka RBs) are linked to the L1 and L2 cache hierarchy, hence NAVI 10 can tolerate missing GDDR6 chips, e.g. the RX 5600 XT.

For RDNA v1's ROPS design

FYI, Vega ROPS is linked to L2 cache.

NAVI's "SOC Fabric" can be expanded to support other memory controller configurations, e.g. XSX's 320-bit bus.

PS: NVIDIA has had "DCC Everywhere" since Pascal.

2. XBO will NOT be repeated. X1X does exist btw.

3. Sony has been consistent with its 256-bit bus PCB selection since the PS4.

4. Review RDNA v1's cache hierarchy before making any comments on this matter.

RDNA architecture is more scalable when compared to GCN.

#340 Uruz7laevatein
Member since 2009 • 133 Posts

@ronvalencia:

1. So what? One came out after the other, and it's still affected by CPU limitations. Oh, and the RX 5600 XT only existed in never-land at the time. It's pretty much a red herring.

2. So what, and that's what happened... the configuration for the PS4 was the best choice given all the variables at the time...

3. Which indicates it was the best choice given all the variables accounted for...

4. So what? It makes more sense to scale hardware proportionally to see a worthwhile gain; otherwise we can just go "Mr. Raja-Koduri" TFLOPs and neglect everything else.

#341 ronvalencia
Member since 2008 • 29612 Posts

@Uruz7laevatein said:

@ronvalencia:

1. So what? One came out after the other, and it's still affected by CPU limitations. Oh, and the RX 5600 XT only existed in never-land at the time. It's pretty much a red herring.

2. So what, and that's what happened... the configuration for the PS4 was the best choice given all the variables at the time...

3. Which indicates it was the best choice given all the variables accounted for...

4. So what? It makes more sense to scale hardware proportionally to see a worthwhile gain; otherwise we can just go "Mr. Raja-Koduri" TFLOPs and neglect everything else.

Again, Sony delivered a memory-bandwidth-gimped PS4 Pro, hence I'm not surprised with the PS5.

1. Sony: brain dead selected R7-265's memory bandwidth.

Crossbar-equipped GCN has existed since Tahiti GCN's 384-bit bus.

Tahiti GCN's 7870 XT SKU has a 256-bit bus config.

Tahiti GCN's 7950/7970 SKUs have a 384-bit bus config.

-----

2. Sony: brain dead selected RX-470's memory bandwidth.

Tahiti GCN *exists*. It's on AMD's IP menu.

-----

3. Sony: brain dead selected RX-5700's memory bandwidth.

RX 5600 XT exists as NAVI 10 (RDNA v1) which shows memory controller config flexibility.

Sony's PS5 recycled NAVI 10's 448 GB/s memory bandwidth while AMD added RT cores and PC-class CPU memory-bandwidth consumers, e.g. 60 GB/s from 128-bit DDR4-3800.

There's a pattern.

4. Raja Koduri's TFLOPS PR is BS (relative to NVIDIA's Maxwell/Pascal), but the XSX GPU is showing TFLOPS scaling with Gears 5, i.e. RDNA 2 is not GCN.
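For reference, the TFLOPS figures traded back and forth in this thread come from a simple formula: CUs x 64 shader lanes x 2 FLOPs per clock (one fused multiply-add) x clock speed. A back-of-envelope sketch using the publicly reported console configurations (not a benchmark):

```python
def tflops(cus: int, clock_ghz: float, lanes_per_cu: int = 64) -> float:
    # Each lane retires one FMA (2 FLOPs) per clock; divide by 1000 for TFLOPS
    return cus * lanes_per_cu * 2 * clock_ghz / 1000

print(round(tflops(36, 2.23), 2))   # PS5: 36 CUs @ 2.23 GHz -> 10.28
print(round(tflops(52, 1.825), 2))  # XSX: 52 CUs @ 1.825 GHz -> 12.15
```

The formula makes the trade-off explicit: a narrow, high-clocked GPU and a wide, lower-clocked one can land on similar headline TFLOPS while differing in ROPs, bandwidth, and geometry throughput.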

#342  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@ronvalencia:

1. Gasp, Tahiti's 384-bit bus would have been out of reach in budget and power consumption. The PS4 uses 8 GB of GDDR5 on a 256-bit bus @ 176 GB/s, which is still higher than the 7850/7870. The other option was to use higher-clocked GDDR5 modules like the GTX 680's, which would also have been out of reach.

2. Tahiti GCN isn't optimized for TSMC 16nm FinFET; GCN1 would have limited the PS4 to 28nm and would have drawn too much power and heat, so it was more cost-efficient to use the upcoming Polaris GCN optimized for 16nm. Getting higher bandwidth would have involved using a larger memory bus or higher-clocked GDDR5.

3. RX 5700 XT bandwidth is limited by its bus without more expensive, higher-clocked GDDR6, and going further also means going past 16 GB of GDDR6, unless one wants to go with awesome split RAM. Besides, a unified memory architecture is more memory-efficient, and most CPU memory accesses (~90-95%) hover around the L1/L2/L3 caches.

4. Gasp... so Raja Koduri is indeed correct all of a sudden...
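The cache-hit claim in point 3 can be framed as an average-memory-access-time (AMAT) estimate: if most accesses hit in cache, DRAM bandwidth matters less to the CPU. A toy model; the hit rates and latencies below are assumptions for illustration, not measured Zen 2 figures:

```python
def amat(hit_latencies, hit_rates, mem_latency):
    """Average memory access time (cycles) over a multi-level cache hierarchy.

    hit_latencies: per-level hit latency; hit_rates: per-level local hit rate.
    Misses at the last level fall through to main memory.
    """
    total, p_reach = 0.0, 1.0
    for lat, hit in zip(hit_latencies, hit_rates):
        total += p_reach * hit * lat   # fraction of accesses served at this level
        p_reach *= 1 - hit             # fraction that miss and go deeper
    return total + p_reach * mem_latency

# Assumed: L1 4 cyc @ 90%, L2 14 cyc @ 60%, L3 60 cyc @ 50%, DRAM 250 cyc
print(round(amat([4, 14, 60], [0.90, 0.60, 0.50], 250), 2))  # 10.64
```

With these assumed numbers only 2% of accesses reach DRAM, which is the shape of the argument being made for why CPU traffic doesn't dominate a unified GDDR6 pool.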

#343 ronvalencia
Member since 2008 • 29612 Posts

@Uruz7laevatein said:

@ronvalencia:

1. Gasp, Tahiti's 384-bit bus would have been out of reach in budget and power consumption. The PS4 uses 8 GB of GDDR5 on a 256-bit bus @ 176 GB/s, which is still higher than the 7850/7870. The other option was to use higher-clocked GDDR5 modules like the GTX 680's, which would also have been out of reach.

2. Tahiti GCN isn't optimized for TSMC 16nm FinFET; GCN1 would have limited the PS4 to 28nm and would have drawn too much power and heat, so it was more cost-efficient to use the upcoming Polaris GCN optimized for 16nm. Getting higher bandwidth would have involved using a larger memory bus or higher-clocked GDDR5.

3. RX 5700 XT bandwidth is limited by its bus without more expensive, higher-clocked GDDR6, and going further also means going past 16 GB of GDDR6, unless one wants to go with awesome split RAM. Besides, a unified memory architecture is more memory-efficient, and most CPU memory accesses (~90-95%) hover around the L1/L2/L3 caches.

4. Gasp... so Raja Koduri is indeed correct all of a sudden...

1. For November 2013 SKUs,

R7-265 has 179.2 GB/s and 1.894 TFLOPS.

R9-270 has 179.2 GB/s and 2.368 TFLOPS.

R9-270X has 179.2 GB/s and 2.688 TFLOPS.

The 7850 and 7870 are 2012 SKUs.

The 384-bit bus design was recycled for the X1X.

2. Red herring; the 384-bit bus design was recycled for the X1X.

3. https://www.techspot.com/review/1891-ryzen-memory-performance-scaling/

On AMD Ryzen Zen 2 CPUs, memory bandwidth and/or latency have a higher impact.

From https://www.gamersnexus.net/guides/3508-ryzen-3000-memory-benchmark-best-ram-fclk-uclock-mclock

Note why I picked DDR4-3800 (60 GB/s) as an example.

4. Raja Koduri's Vega TFLOPS has higher instruction retirement latency.

Read https://www.reddit.com/r/Amd/comments/ctfbem/amd_rdna_whitepaper/

Figure 3 (bottom of page 5) shows 4 lines of shader instructions being executed in GCN, vs RDNA in Wave32 or “backwards compatible” Wave64.

Vega takes 12 cycles to complete the instruction on a GCN SIMD. Navi in Wave32 (optimized code) completes it in 7 cycles.

In backwards compatible (optimized for GCN Wave64) mode, Navi completes it in 8 cycles.

So even on code optimized for GCN, Navi is faster, but more performance can be extracted by optimizing for Navi.

Lower latency, and no wasted clock cycles.

For GCN wave64 instructions, RDNA v1 has about 33 percent higher TFLOPS efficiency.

The RX 5700 XT's 9.66 TFLOPS average is effectively 12.8478 TFLOPS of Vega GCN, and the Radeon VII has 14 TFLOPS, which is the major reason the RX 5700 XT's performance is very close to the Radeon VII's.

#344 Uruz7laevatein
Member since 2009 • 133 Posts

@ronvalencia:

1. Gasp, you mean a GPU with nearly the same bandwidth but a much higher GPU clock speed.

2. Gasp, oh you mean the X1X, which came out nearly two years later with 12 GB of GDDR5.

3. Which pertains mainly to the CPU memory controller and the "tighter" timings on binned RAM. If anything, the penalty would be a lot less on a unified memory architecture whose GDDR6 modules are soldered onto the board much closer to the SoC (signal propagation is limited by the speed of light).

4. Which means Navi/RDNA2 benefits even more from higher clocks, and to get a massive leap (around 50% or more) that is worthwhile over the current PS5 configuration (given its high perf/mm^2 in classic GPU workloads), you would need a "butterfly" configuration.

#345 briguyb13
Member since 2007 • 3855 Posts

So articles are coming out saying the PS5 is at 8.4 TF and overclocked to 10.3. Wow...

#346 SecretPolice
Member since 2007 • 35913 Posts

Captain Ron and company....

...beating the snot outta the alts like they're redheaded stepchildren. lol Good stuff. :P

#347 Pedro  Online
Member since 2002 • 39100 Posts

@briguyb13 said:

So articles are coming out saying the PS5 is at 8.4 TF and overclocked to 10.3. Wow...

And what magical articles are these?

#348  Edited By briguyb13
Member since 2007 • 3855 Posts

@Pedro said:
@briguyb13 said:

So articles are coming out saying the PS5 is at 8.4 TF and overclocked to 10.3. Wow...

And what magical articles are these?

https://www.notebookcheck.net/No-the-PS5-won-t-offer-anywhere-near-the-graphics-performance-of-Xbox-Series-X-Navi-benchmarks-prove-it.458625.0.html

#349 Pedro  Online
Member since 2002 • 39100 Posts

@briguyb13: Man, that is just an opinion piece, like the pro-PlayStation articles that claim it's faster than the Series X.

#350 briguyb13
Member since 2007 • 3855 Posts

@Pedro said:

@briguyb13: Man, that is just an opinion piece, like the pro-PlayStation articles that claim it's faster than the Series X.

I'm not making the claim, just stating that I saw it online. Make of it what you will.